Comments (2)
Thanks for raising this. Interesting examples.
In short, some of these effects can be handled if needed, others are due to numeric instabilities that are part of the package, I'm afraid, and one of them is a genuine bug.
For a longer explanation, it is worth keeping in mind that the umap function performs three main steps: it computes a set of nearest neighbors for each data point, produces an initial layout for the data points, and optimizes that layout according to the umap recipe.
With regard to the comparisons that you are proposing (embeddings from raw data or from a pre-computed distance matrix), there are two points to be aware of.
- For such comparisons, only use small datasets. When you supply a distance matrix as input, the nearest neighbors are computed exactly from that matrix. When the input is raw data, the neighbors are also computed exactly as long as the dataset is small, so it is reasonable to expect equivalence. But for large datasets (more than 2048 rows), the neighbors are computed with an approximate algorithm, so some details are bound to differ slightly, and everything downstream will shift as well.
- When converting from correlations to distances/dissimilarities, use data_dist = 1 - data_cor, i.e. without the extra scaling by 2.
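As a quick sanity check in base R (no umap needed), the 1 - cor conversion already spans the full range a dissimilarity should: perfectly correlated profiles map to 0 and perfectly anti-correlated profiles map to 2, so an extra division by 2 would compress the scale.

```r
# sanity check: 1 - cor behaves as a dissimilarity in [0, 2]
x <- as.numeric(1:10)
stopifnot(abs((1 - cor(x, x)) - 0) < 1e-12)    # identical profiles      -> 0
stopifnot(abs((1 - cor(x, -x)) - 2) < 1e-12)   # anti-correlated         -> 2
stopifnot(abs(1 - cor(x, 3 * x + 5)) < 1e-12)  # scaling/shift invariant -> 0
```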
With those two things out of the way, let's produce embeddings from raw data and from a distance matrix. Let's use synthetic data with two clusters.
library(umap)
set.seed(123)
# small dataset: 100 points, 4 features
small <- matrix(rnorm(400), ncol=4)
# create 2 noisy clusters along the first feature
small[, 1] <- c(rnorm(50, -2), rnorm(50, 2))
# pre-computed dissimilarity matrix from pearson correlations
small_dist <- 1 - cor(t(small), method="pearson")
result_pearson <- umap(small, metric="pearson", random_state=123)
result_dist <- umap(small_dist, input="dist", random_state=123)
The knn components of the two results summarize the nearest neighbors, and we can compare those directly.
identical(result_pearson$knn$indexes, result_dist$knn$indexes)
identical(result_pearson$knn$distances, result_dist$knn$distances)
The first comparison should give TRUE, i.e. the nearest neighbors are exactly the same. The second comparison will likely give FALSE. Inspection will reveal that the discrepancies in distances are actually small in absolute terms. Those are float-precision discrepancies.
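To see why identical() can return FALSE even when the values agree for all practical purposes, compare with a tolerance instead. Below, a and b are hypothetical stand-ins for the two knn$distances components, differing only at float precision:

```r
# hypothetical stand-ins for result_pearson$knn$distances and
# result_dist$knn$distances
a <- c(0.1234567890123456, 0.2345678901234567)
b <- a + 1e-15                  # a float-precision-sized perturbation
identical(a, b)                 # FALSE: bitwise comparison
isTRUE(all.equal(a, b))         # TRUE: within numeric tolerance
max(abs(a - b))                 # the absolute discrepancy is tiny
```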
Next, we can track how the discrepancies propagate into the layout optimization.
# side-by-side layouts; column 5 (all NA) will break the connecting lines
layouts <- cbind(result_pearson$layout, result_dist$layout, NA)
xlim <- range(layouts[, c(1, 3)])
ylim <- range(layouts[, c(2, 4)])
plot(xlim, ylim, type="n", axes=FALSE, xlab="", ylab="", frame=TRUE)
# draw segments between matching items (the NA column lifts the pen)
lines(as.vector(t(layouts[, c(1, 3, 5)])),
      as.vector(t(layouts[, c(2, 4, 5)])),
      col="gray", lwd=2)
points(layouts[, 1:2], pch=19, col="red", cex=2)
points(layouts[, 3:4], pch=19, col="blue", cex=2)
This should display two superimposed embeddings, one with red dots and the other with blue dots, with gray lines connecting matching items.
Add a loop around this whole process, and we can compare several data matrices and embeddings.
Yes, it appears the layouts can change as a result of the initial discrepancies in distances. But note that the "big picture" remains similar, i.e. each example shows separation between the two clusters. The changes appear to be translations or twists, so local structure is preserved as well, even if the exact coordinates are shifted about. Overall, this is not ideal, but it is not a fatal flaw; after all, similar shifts can appear if you change the seed for random number generation.
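One way to make "translations or twists" concrete in base R is a Procrustes-style check: center both layouts, find the best rotation via SVD, and inspect the residual. Here A and B are hypothetical stand-ins for two layouts that differ only by a rotation and a shift; for real output you would substitute result_pearson$layout and result_dist$layout:

```r
set.seed(42)
A <- matrix(rnorm(200), ncol=2)               # stand-in for one layout
theta <- 0.7                                  # an arbitrary rotation angle
R <- matrix(c(cos(theta), sin(theta),
              -sin(theta), cos(theta)), ncol=2)
B <- A %*% R + 1.5                            # rotated and shifted copy
Ac <- scale(A, scale=FALSE)                   # center both layouts
Bc <- scale(B, scale=FALSE)
s <- svd(t(Ac) %*% Bc)                        # optimal rotation (Procrustes)
Q <- s$u %*% t(s$v)
residual <- max(abs(Ac %*% Q - Bc))
residual                                      # near zero: same shape
```

A small residual means the two layouts are the same shape up to translation and rotation, which is exactly the kind of equivalence one should expect here.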
Note that it is possible to lessen the effect by reducing the learning rate parameter alpha.
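For reference, a sketch of lowering alpha via a custom configuration object built from umap.defaults, which the package exports (using the small matrix defined above):

```r
library(umap)
custom.config <- umap.defaults
custom.config$alpha <- 0.5   # smaller learning rate, gentler layout updates
# result <- umap(small, config=custom.config, random_state=123)
```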
Your last question was about comparing your res.umap2 and res.umap3. You found a bug there, so thanks for pointing it out. Possible solutions - from the package perspective - would be to ignore metric="pearson" when input="dist", or to raise an error and ask the user to correct the settings. You're welcome to submit a PR if you like. Until there is a permanent fix, just don't use metric="pearson" together with input="dist".
Hope this helps!
from umap.
Thank you so much for the detailed response. I really appreciate it :)