Problem with using custom metric #18

Open · Jayesh-Kumar-Sundaram opened this issue Jul 8, 2022 · 2 comments

Jayesh-Kumar-Sundaram commented Jul 8, 2022

Hello, I am trying to run UMAP with a pre-computed custom metric supplied as the input distance matrix. My custom metric is the Pearson distance. I know that a built-in "pearson" metric is available, but I wanted to check whether the results match if I instead pass a pre-computed Pearson distance matrix to the umap() function. Even after setting the same random_state in both cases, I got different results.

Case 1: (Using the in-built Pearson metric)
inp_n_neighbors <- 200
inp_min_dist <- 0.001
inp_spread <- 0.2
n_comp <- 2
custom.config <- umap.defaults
custom.config$random_state <- 123
custom.config$n_neighbors <- inp_n_neighbors
custom.config$min_dist <- inp_min_dist
custom.config$spread <- inp_spread
custom.config$metric <- "pearson"
custom.config$n_components <- n_comp
res.umap <- umap(data, config=custom.config, preserve.seed=TRUE)

Case 2: (Using the custom Pearson metric as input distance matrix)
inp_n_neighbors <- 200
inp_min_dist <- 0.001
inp_spread <- 0.2
n_comp <- 2
custom.config <- umap.defaults
custom.config$random_state <- 123
custom.config$input <- "dist"
custom.config$n_neighbors <- inp_n_neighbors
custom.config$min_dist <- inp_min_dist
custom.config$spread <- inp_spread
custom.config$n_components <- n_comp
data_corr <- cor(t(data), method="pearson")
data_dist <- (1 - data_corr)/2
res.umap2 <- umap(data_dist, config=custom.config, preserve.seed=TRUE)

The results of res.umap and res.umap2 are different

I was curious about what was happening, experimented a bit, and realized that even with a pre-computed distance matrix as input, the value assigned to the "custom.config$metric" parameter changes the results. For example, see Case 3.

Case 3: (Using the custom Pearson metric as input distance matrix)
inp_n_neighbors <- 200
inp_min_dist <- 0.001
inp_spread <- 0.2
n_comp <- 2
custom.config <- umap.defaults
custom.config$random_state <- 123
custom.config$input <- "dist"
custom.config$n_neighbors <- inp_n_neighbors
custom.config$min_dist <- inp_min_dist
custom.config$spread <- inp_spread
custom.config$n_components <- n_comp
custom.config$metric <- "pearson"  #### the default is "euclidean", but I changed it to "pearson"
data_corr <- cor(t(data), method="pearson")
data_dist <- (1 - data_corr)/2
res.umap3 <- umap(data_dist, config=custom.config, preserve.seed=TRUE)

The results of res.umap2 and res.umap3 are different

When I use a pre-computed custom metric as the input distance matrix, why does the value assigned to "custom.config$metric" change the results? Where is the problem with my understanding?

Thanks

tkonopka (Owner) commented Jul 9, 2022

Thanks for raising this. Interesting examples.

In short, some of these effects can be handled if needed, others are due to numeric instabilities that are, I'm afraid, part of the package, and one part is a bug.

For a longer explanation, it is worth keeping in mind that the umap function performs three main steps: it computes a set of nearest neighbors for each data point, produces an initial layout for the data points, and optimizes that layout according to the umap recipe.
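Those steps also leave traces in the object that umap() returns, which makes it possible to pinpoint where two runs start to diverge. A minimal sketch (the knn and layout components are the ones used for the comparisons below; treat the exact set of names as indicative):

names(res.umap)               # typically includes "layout", "knn", "config"
head(res.umap$knn$indexes)    # step 1: nearest-neighbor indexes
head(res.umap$knn$distances)  # step 1: nearest-neighbor distances
head(res.umap$layout)         # steps 2-3: coordinates after initialization and optimization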

With regard to the comparisons that you are proposing (embeddings from raw data or from a pre-computed distance matrix), there are two points to be aware of.

  • For such comparisons, only use small datasets. When you supply a distance matrix as input, the nearest neighbors will be computed exactly from that distance matrix. When the input is raw data, the nearest neighbors will be computed exactly when the data is small, so it is reasonable to seek equivalence. But for large datasets (>2048 rows), the neighbors will be computed with an approximate algorithm, so some details are bound to be slightly different and everything downstream will shift as well.

  • When converting from correlations to distances/dissimilarities, use data_dist = (1 - data_corr), i.e. without the extra scaling by 2.

With those two things out of the way, let's produce embeddings from raw data and from a distance matrix. Let's use synthetic data with two clusters.

# small dataset: 100 points, 4 features
small <- matrix(rnorm(400), ncol=4)
# let's have 2 noisy clusters
small[, 1] <- c(rnorm(50, -2), rnorm(50, 2))
small_dist <- 1 - cor(t(small), method="pearson")
result_pearson <- umap(small, metric="pearson", random_state=123)
result_dist <- umap(small_dist, input="dist", random_state=123)

The knn components of the two results summarize the nearest neighbors, and we can compare those directly.

identical(result_pearson$knn$indexes, result_dist$knn$indexes) 
identical(result_pearson$knn$distances, result_dist$knn$distances) 

The first comparison should give TRUE, i.e. the nearest neighbors are exactly the same. The second comparison will likely give FALSE. Inspection will reveal that the discrepancies in distances are actually small in absolute terms. Those are float-precision discrepancies.
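To get a sense of the scale of those discrepancies, one quick check (not in the original post) is the largest absolute difference between the two sets of knn distances:

# expect a very small number, on the order of floating-point rounding error
max(abs(result_pearson$knn$distances - result_dist$knn$distances))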

Next, we can track how the discrepancies propagate into the layout optimization.

# put the two layouts side by side; the trailing NA column acts as a
# line-break separator when drawing the connecting segments below
layouts <- cbind(result_pearson$layout, result_dist$layout, NA)
xlim <- range(layouts[, c(1, 3)])
ylim <- range(layouts[, c(2, 4)])
plot(xlim, ylim, type="n", axes=FALSE, xlab="", ylab="", frame=TRUE)
# draw a gray segment between each point's position in the two embeddings
lines(as.vector(t(layouts[, c(1, 3, 5)])),
      as.vector(t(layouts[, c(2, 4, 5)])),
      col="gray", lwd=2)
points(layouts[, 1:2], pch=19, col="red", cex=2)   # embedding from raw data
points(layouts[, 3:4], pch=19, col="blue", cex=2)  # embedding from distance matrix

This should display two superimposed embeddings, one with red dots and the other with blue dots, with lines connecting matching items.

Add a loop around this whole process, and we can compare several data matrices and embeddings.
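For example, something along these lines (a sketch, not necessarily the exact code behind the figure below):

# repeat the comparison for a few independently simulated datasets
par(mfrow=c(1, 3))
for (i in 1:3) {
  small <- matrix(rnorm(400), ncol=4)
  small[, 1] <- c(rnorm(50, -2), rnorm(50, 2))
  small_dist <- 1 - cor(t(small), method="pearson")
  result_pearson <- umap(small, metric="pearson", random_state=123)
  result_dist <- umap(small_dist, input="dist", random_state=123)
  layouts <- cbind(result_pearson$layout, result_dist$layout, NA)
  plot(range(layouts[, c(1, 3)]), range(layouts[, c(2, 4)]),
       type="n", axes=FALSE, xlab="", ylab="", frame=TRUE)
  lines(as.vector(t(layouts[, c(1, 3, 5)])),
        as.vector(t(layouts[, c(2, 4, 5)])),
        col="gray", lwd=2)
  points(layouts[, 1:2], pch=19, col="red", cex=2)
  points(layouts[, 3:4], pch=19, col="blue", cex=2)
}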

[figure umap-issue-18: overlaid red and blue embeddings for several simulated datasets]

Yes, it appears the layouts can change as a result of the initial discrepancies in distances. But note that the "big picture" remains similar, i.e. the examples show separation between the two clusters. Changes seem to be translations or twists, so there is some consistency within the local structure as well, even if the exact coordinates are shifted about. Overall, this is not ideal but it is not a fatal flaw. After all, similar shifts can appear if you change the seed for random number generation.

Note that it is possible to lessen the effect by reducing the learning rate parameter alpha.
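For example (a sketch; 0.5 is just an arbitrary value below the default):

custom.config <- umap.defaults
custom.config$input <- "dist"
custom.config$random_state <- 123
custom.config$alpha <- 0.5   # smaller learning rate than the default
res_lower_alpha <- umap(small_dist, config=custom.config)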

Your last question compared res.umap2 and res.umap3. You found a bug there, so thanks for pointing it out. Possible solutions, from the package perspective, would be to ignore metric="pearson" when input="dist", or to raise an error and ask the user to correct the settings. You're welcome to submit a PR if you like. Until there is a permanent fix, just don't use metric="pearson" together with input="dist".
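In other words, an interim setup for the distance-matrix route would look like this (essentially your Case 2, with metric left untouched):

custom.config <- umap.defaults
custom.config$input <- "dist"
custom.config$random_state <- 123
# do not set custom.config$metric here; leave it at its default
res.umap2 <- umap(data_dist, config=custom.config)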

Hope this helps!

Jayesh-Kumar-Sundaram (Author)

Thank you so much for the detailed response. I really appreciate it :)
