Outlier scores - possible bug in GLOSH computation #628

Open
azizkayumov opened this issue Mar 20, 2024 · 0 comments
azizkayumov commented Mar 20, 2024

I am curious whether the GLOSH implementation in this repository correctly follows the paper's definition of "outlierness".
According to the HDBSCAN* paper (R. J. G. B. Campello et al. 2015, page 25):

In order to compute GLOSH in Equation (8), one needs only the first (last) cluster to which object xi belongs bottom-up (top-down) through the hierarchy, the lowest radius at which xi still belongs to this cluster (and below which xi is labeled as noise), ε(xi), and the lowest radius at which this cluster or any of its subclusters still exist (and below which all its objects are labeled as noise), εmax(xi).

Looking at the max_lambdas function used to compute εmax(xi), I think the paper's definition (the quoted text above) is not interpreted correctly: max_lambdas appears to consider only the death of the parent cluster, not the latest death among its subclusters.
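To make the distinction concrete, here is a minimal, self-contained sketch (not the hdbscan API; the tree structure and function names are hypothetical). Since λ = 1/ε, Equation (8), GLOSH(xi) = 1 − εmax(xi)/ε(xi), can be written in density terms as 1 − λ(xi)/λmax. The sketch contrasts taking λmax from the parent cluster alone with taking the maximum death λ over the cluster and all of its subclusters, as the paper specifies:

```python
# Toy cluster hierarchy (hypothetical, for illustration only): each cluster
# records its death lambda (the lambda at which its last point becomes noise)
# and its child clusters.
tree = {
    "root": {"death": 0.5, "children": ["A", "B"]},
    "A":    {"death": 1.0, "children": ["A1"]},
    "A1":   {"death": 4.0, "children": []},   # dense subcluster, dies last
    "B":    {"death": 2.0, "children": []},
}

def max_lambda_parent_only(cluster):
    # What max_lambdas appears to do: use the cluster's own death lambda.
    return tree[cluster]["death"]

def max_lambda_subtree(cluster):
    # Paper's eps_max: maximum death lambda over the cluster and all
    # of its subclusters, found by a recursive walk of the subtree.
    best = tree[cluster]["death"]
    for child in tree[cluster]["children"]:
        best = max(best, max_lambda_subtree(child))
    return best

def glosh(point_lambda, cluster, max_lambda_fn):
    # GLOSH(x) = 1 - lambda(x) / lambda_max  (Eq. 8, in lambda form).
    return 1.0 - point_lambda / max_lambda_fn(cluster)

# A point that leaves cluster "A" at lambda = 0.8:
print(glosh(0.8, "A", max_lambda_parent_only))  # 0.2 -> looks like an inlier
print(glosh(0.8, "A", max_lambda_subtree))      # 0.8 -> scored as an outlier
```

With the parent-only λmax the point appears unremarkable, while the subtree maximum, driven by the dense child A1, gives it a high outlier score, which matches the "topographical" behavior described below.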

To reproduce this issue, please run the following code:

import hdbscan
import numpy as np
import matplotlib.pyplot as plt


# Step 1. Generate 3 clusters of random data and some uniform noise
data = []
np.random.seed(1)
for i in range(3):
    data.extend(np.random.randn(100, 2) * 0.5 + np.random.randn(1, 2) * 3)
data.extend(np.random.rand(100, 2) * 20 - 10)

# Step 2. Cluster the data
k = 15
clusterer = hdbscan.HDBSCAN(alpha=1.0, approx_min_span_tree=False,
    gen_min_span_tree=True,
    metric='euclidean', min_cluster_size=k, min_samples=k, match_reference_implementation=True)
clusterer.fit(data)

# Step 3. Plot the outlier scores
outlier_scores = clusterer.outlier_scores_
plt.scatter([x[0] for x in data], [x[1] for x in data], s=25, c=outlier_scores, cmap='viridis')
plt.colorbar()
plt.title("Outlier scores")
plt.show()

This should show the following plot:

(Figure 1: scatter plot of the data colored by outlier scores.)

As you can see from the plot, the outlier scores assigned to the points between clusters (note the yellow points there) do not look like "natural" outliers compared to the others. From my understanding of the paper, far-away outlier points should receive higher scores (much like reading a topographical map), while points between clusters should receive lower ones.
I think GLOSH is supposed to give us this instead:

(Figure: the expected outlier-score plot, with far-away points scoring highest.)

It seems a fix for GLOSH may also help with #116. I would like to open a PR, but I am currently having trouble building the code. Please let me know if there is something I might be missing.
