UMAP n_neighbors must be greater than 1 #30

Open
jeffreyzhanghc opened this issue Apr 6, 2024 · 11 comments

Comments

@jeffreyzhanghc

Hi team, I am currently building with RAPTOR to do open-domain QA, as follows:
Our data is stored as question-answer pairs. When a user submits a query, I match it against the top-k most related questions in the data and concatenate their answers, then use RAPTOR to generate an answer to the query. But as the list of docs passed to RA.add_documents(docs) grows longer, I get an "n_neighbors must be greater than 1" error from UMAP at fit_transform, in this code chunk:
def global_cluster_embeddings(
    embeddings: np.ndarray,
    dim: int,
    n_neighbors: Optional[int] = None,
    metric: str = "cosine",
) -> np.ndarray:
    if n_neighbors is None:
        n_neighbors = int((len(embeddings) - 1) ** 0.5)
    reduced_embeddings = umap.UMAP(
        n_neighbors=n_neighbors, n_components=dim, metric=metric
    ).fit_transform(embeddings)
    return reduced_embeddings
Is there any way to resolve this UMAP issue in this case?
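
For reference, here is a minimal sketch of what I think triggers the error (the array shape below is illustrative, not from my real data): when a cluster holds only a few embeddings, the default n_neighbors evaluates to 1, which umap-learn's parameter validation rejects.

import numpy as np
import umap  # umap-learn

# Illustrative: a "cluster" of only 2 embedding vectors (384 dims is arbitrary)
embeddings = np.random.rand(2, 384)

n_neighbors = int((len(embeddings) - 1) ** 0.5)  # int(1 ** 0.5) == 1

# fit_transform() validates parameters first and raises
# ValueError: n_neighbors must be greater than 1
umap.UMAP(n_neighbors=n_neighbors, n_components=2, metric="cosine").fit_transform(embeddings)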

@cuichenxu

Got the same issue. Have you solved it?

@fatlism

fatlism commented Apr 10, 2024

I also encountered the same problem. Is there any solution?

@isConic

isConic commented Apr 10, 2024

@jeffreyzhanghc
can you pinpoint where in the repo this line of code is?

@jeffreyzhanghc
Author

@cuichenxu @fatlism Hi, I haven't fully understood the cause yet, but my initial guess is this: during the embedding process I used the original RAPTOR embedding model on Chinese content, which very often triggered this bug on longer contexts; once I customized my embedding/summarization models for Chinese, it stopped showing up for a while. My suggestion: if you are processing longer text in another language, consider trying an embedding method built specifically for that language, though I am not sure that solves the issue.
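
For anyone who wants to try that, here is a rough sketch of what I mean by a customized embedding model, assuming RAPTOR's BaseEmbeddingModel interface with a create_embedding(text) method (as in raptor/EmbeddingModels.py); the multilingual model name is just an example:

from sentence_transformers import SentenceTransformer
from raptor.EmbeddingModels import BaseEmbeddingModel

class MultilingualEmbeddingModel(BaseEmbeddingModel):
    # Example: swap in a multilingual SBERT model for non-English text
    def __init__(self, model_name="paraphrase-multilingual-mpnet-base-v2"):
        self.model = SentenceTransformer(model_name)

    def create_embedding(self, text):
        # Return a 1-D vector for a single string, mirroring SBertEmbeddingModel
        return self.model.encode(text)

If I remember the README correctly, you can then pass an instance of this in via RetrievalAugmentationConfig(embedding_model=...).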

@jeffreyzhanghc
Copy link
Author

@jeffreyzhanghc can you pinpoint where in the repo this line of code is?

It is under raptor/cluster_utils.py, line 33.

@jeffreyzhanghc
Copy link
Author

@jeffreyzhanghc can you pinpoint where in the repo this line of code is?

And in the umap package it is in umap_.py, line 2379, in .fit, which leads to the error at line 1777 in _validate_parameters().

@cuichenxu

@cuichenxu @fatlism Hi, I haven't fully understood the cause yet, but my initial guess is this: during the embedding process I used the original RAPTOR embedding model on Chinese content, which very often triggered this bug on longer contexts; once I customized my embedding/summarization models for Chinese, it stopped showing up for a while. My suggestion: if you are processing longer text in another language, consider trying an embedding method built specifically for that language, though I am not sure that solves the issue.

Hi, thanks for your insights!
I use English-only texts, and the embedding model is SBertEmbeddingModel from raptor/EmbeddingModels.py, yet it still hits this error. I really don't understand why.

By the way, were you able to run this successfully for your use case? Could you please share your custom embedding model code? I tried to implement one, but an error occurred.

@fatlism

fatlism commented Apr 11, 2024

if n_neighbors is None:
    # n_neighbors = int((len(embeddings) - 1) ** 0.5)
    n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))

I found that the aggregated embedding array can end up with length 2. The error occurs only when dimensionality reduction is called without an explicit n_neighbors: the default int((len(embeddings) - 1) ** 0.5) then evaluates to int(1 ** 0.5) = 1, and UMAP requires n_neighbors > 1. I temporarily worked around it with the code above.

@cuichenxu

if n_neighbors is None:
    # n_neighbors = int((len(embeddings) - 1) ** 0.5)
    n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))

I found that the aggregated embedding array can end up with length 2. The error occurs only when dimensionality reduction is called without an explicit n_neighbors: the default int((len(embeddings) - 1) ** 0.5) then evaluates to int(1 ** 0.5) = 1, and UMAP requires n_neighbors > 1. I temporarily worked around it with the code above.

How long does it take when the context is long?

@fatlism

fatlism commented Apr 12, 2024

if n_neighbors is None:
    # n_neighbors = int((len(embeddings) - 1) ** 0.5)
    n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))

I found that the aggregated embedding array can end up with length 2. The error occurs only when dimensionality reduction is called without an explicit n_neighbors: the default int((len(embeddings) - 1) ** 0.5) then evaluates to int(1 ** 0.5) = 1, and UMAP requires n_neighbors > 1. I temporarily worked around it with the code above.

How long does it take when the context is long?

A single-threaded execution might take several hours.

@lixinze777

if n_neighbors is None:
    # n_neighbors = int((len(embeddings) - 1) ** 0.5)
    n_neighbors = max(2, int((len(embeddings) - 1) ** 0.5))

I found that the aggregated embedding array can end up with length 2. The error occurs only when dimensionality reduction is called without an explicit n_neighbors: the default int((len(embeddings) - 1) ** 0.5) then evaluates to int(1 ** 0.5) = 1, and UMAP requires n_neighbors > 1. I temporarily worked around it with the code above.

I tried this solution and this is what I got:

File "/home/miniconda3/envs/lib/python3.8/site-packages/scipy/sparse/linalg/_eigen/arpack/arpack.py", line 1605, in eigsh
    raise TypeError("Cannot use scipy.linalg.eigh for sparse A with "
TypeError: Cannot use scipy.linalg.eigh for sparse A with k >= N. Use scipy.linalg.eigh(A.toarray()) or reduce k.
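
If I read umap-learn correctly, this comes from the spectral initialization: it computes n_components + 1 eigenvectors, so it needs n_components + 1 < the number of samples, and with a tiny cluster clamping n_neighbors alone is not enough. Here is a sketch of a fuller guard I would try (the small-input fallback and the clamping constants are my own choices, not from the RAPTOR codebase):

from typing import Optional
import numpy as np
import umap

def safe_global_cluster_embeddings(
    embeddings: np.ndarray,
    dim: int,
    n_neighbors: Optional[int] = None,
    metric: str = "cosine",
) -> np.ndarray:
    n = len(embeddings)
    if n <= 3:
        # Too few vectors for UMAP to reduce meaningfully; return them as-is
        return embeddings
    if n_neighbors is None:
        n_neighbors = max(2, int((n - 1) ** 0.5))
    # Spectral initialization needs n_components + 1 < n_samples, which is
    # what the scipy eigsh error above complains about; clamp dim as well
    dim = min(dim, n - 2)
    return umap.UMAP(
        n_neighbors=n_neighbors, n_components=dim, metric=metric
    ).fit_transform(embeddings)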
