
Improving FisherJenks #118

Closed
cheginit opened this issue Dec 7, 2021 · 8 comments · May be fixed by #201

Comments

@cheginit

cheginit commented Dec 7, 2021

The implemented FisherJenks classifier is very slow. I would like to suggest using jenkspy instead. It's written in Cython and is very fast.

@ljwolf
Member

ljwolf commented Dec 7, 2021

Do you happen to have numba installed? Our FisherJenks implementation should be accelerated using that... we'd definitely invite a rigorous performance comparison there 😄

Also, the *Sampled variants are useful for extremely large data, since a random subsample of the target data can often yield the same results.

Do either of these work for your use case?
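The subsampling idea behind the *Sampled variants can be illustrated with a quantile-based stand-in classifier (not Fisher-Jenks itself, and not mapclassify's actual code): break points fitted on a 10% random subsample land very close to break points fitted on the full array.

```python
# Illustration of the *Sampled idea using quantile breaks as a stand-in
# classifier (NOT Fisher-Jenks): breaks fitted on a 10% random subsample
# closely approximate breaks fitted on the full array.
import numpy as np

rng = np.random.default_rng(0)
data = rng.random(12_000) * 5000

# k = 5 classes -> 4 interior break points
probs = [0.2, 0.4, 0.6, 0.8]
full_breaks = np.quantile(data, probs)

sample = rng.choice(data, size=len(data) // 10, replace=False)
sample_breaks = np.quantile(sample, probs)

print(full_breaks)
print(sample_breaks)
```

The same intuition carries over to Fisher-Jenks: the optimal breaks are a statistic of the data's distribution, and a large-enough random sample estimates that distribution well.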

@martinfleis
Member

@ljwolf shall we maybe add a warning here, at the `from numba import jit` line, when Numba isn't installed?
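A minimal sketch of such a guard (hypothetical names and message; mapclassify's actual fallback logic may differ): warn at import time and substitute a no-op decorator so the pure-Python path still works.

```python
# Hypothetical sketch of an optional-numba guard with a user-facing warning;
# mapclassify's actual import logic may differ.
import warnings

import numpy as np

try:
    from numba import njit
    HAS_NUMBA = True
except ImportError:
    HAS_NUMBA = False
    warnings.warn(
        "Numba is not installed; FisherJenks will fall back to the slow "
        "pure-Python implementation. Install numba to speed it up.",
        stacklevel=2,
    )

    def njit(*args, **kwargs):
        # no-op stand-in: works both as @njit and as @njit(signature, ...)
        if len(args) == 1 and callable(args[0]) and not kwargs:
            return args[0]

        def wrap(func):
            return func

        return wrap


@njit("f8(f8[:])", cache=True)
def total(values):
    # trivial kernel just to show the decorator applies either way
    s = 0.0
    for v in values:
        s += v
    return s


print(total(np.arange(4.0)))  # → 6.0
```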

@cheginit
Author

cheginit commented Dec 7, 2021

Ah, I didn't notice there's numba support. I just gave it a try and compared it with jenkspy. I noticed that you're simply using `@jit` without a type signature, so I replaced `@jit` with `@njit("f8[:](f8[:], u2, b1)")` and compared the three versions with the following data:

import jenkspy
from mapclassify.classifiers import _fisher_jenks_means
import numpy as np

data = np.random.random(12000) * 5000
data = data.astype("f8")

Here are the results.

(screenshot: benchmark timings for the three versions)
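The shape of this comparison can be sketched as a micro-benchmark harness. This is a hypothetical skeleton with a toy kernel, not the real `_fisher_jenks_means` and without the jenkspy arm; it also falls back to plain Python so the sketch runs even without numba installed.

```python
# Hypothetical micro-benchmark skeleton: lazily-typed @jit vs. an
# eagerly-typed @njit with an explicit signature. Toy kernel only.
import timeit

import numpy as np

try:
    from numba import jit, njit
except ImportError:
    # no-op decorator fallback so the sketch runs without numba
    def _noop(*args, **kwargs):
        if len(args) == 1 and callable(args[0]) and not kwargs:
            return args[0]
        return lambda func: func

    jit = njit = _noop


@jit(nopython=True)
def kernel_lazy(x):
    # types inferred on the first call
    total = 0.0
    for v in x:
        total += v
    return total


@njit("f8(f8[:])", cache=True)
def kernel_typed(x):
    # compiled eagerly against an explicit signature
    total = 0.0
    for v in x:
        total += v
    return total


data = (np.random.random(12_000) * 5000).astype("f8")
kernel_lazy(data)  # trigger lazy compilation before timing
print("lazy :", timeit.timeit(lambda: kernel_lazy(data), number=100))
print("typed:", timeit.timeit(lambda: kernel_typed(data), number=100))
```

Warming up the lazily-compiled function before timing matters, since otherwise its first call includes compilation time and skews the comparison.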

@cheginit
Author

cheginit commented Dec 7, 2021

I ran the same test using an array with 120,000 elements instead of 12,000. Here are the results:
(screenshot: benchmark timings for 120,000 elements)

BTW, I ran these tests on my iMac with 8 cores and 16 GB of RAM.

@martinfleis
Member

That is interesting!

In any case, I would rather not add jenkspy as a dependency here. We don't know how well maintained it is, and it may be a hassle to install: they offer wheels only for Windows and only up to Python 3.8. I assume that this could cause a lot of friction for users.

@cheginit
Author

cheginit commented Dec 9, 2021

Right, it makes sense.

You can shave off a few more seconds by adding `cache=True` to `njit`, like so: `@njit("f8[:](f8[:], u2, b1)", cache=True)`. Feel free to close, or if you're interested, I can modify the code to add the signatures and submit a PR.
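For context, `cache=True` tells Numba to persist the compiled machine code to disk so later processes can skip JIT compilation entirely. A minimal sketch with a toy function (the guard only keeps the sketch runnable without numba; caching itself requires numba and a function defined in a file):

```python
# Minimal sketch of njit disk caching on a toy function; hypothetical,
# not mapclassify code. Without numba the guard makes this a no-op.
try:
    from numba import njit
except ImportError:
    njit = lambda *a, **k: (a[0] if a and callable(a[0]) else (lambda f: f))


# cache=True writes the compiled result next to the source file, so a
# later interpreter session reuses it instead of recompiling.
@njit("f8(f8, f8)", cache=True)
def add(a, b):
    return a + b


print(add(2.0, 3.0))  # → 5.0
```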

@martinfleis
Member

> I can modify the code to add the signatures and submit a PR

That would be very welcome! Thanks!

cheginit pushed a commit to cheginit/mapclassify that referenced this issue Dec 10, 2021
@acikmese

I would definitely not integrate jenkspy into your environment. Your library works perfectly in my tests, but jenkspy produces some problematic outputs. I don't trust jenkspy's outputs.
