
Slow performance in arrayUtil functions with variable length data #188

Open
jreadey opened this issue Oct 19, 2022 · 0 comments
jreadey commented Oct 19, 2022

The functions that convert NumPy arrays of variable-length elements to a byte buffer and back can be quite slow.
Running the test program https://github.com/HDFGroup/hsds/blob/master/tests/perf/arrayperf/bytes_to_array.py with a million-element array gave this output:

$ python bytes_to_array.py 
getByteArraySize - elapsed: 0.3334 for 1000000 elements, returned 7888327
arrayToBytes - elapsed: 3.1166 for 1000000 elements
bytesToArray - elapsed: 1.1793

This is not surprising, since the code iterates over each element in a Python loop.
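A minimal sketch of what such a per-element conversion loop can look like (hypothetical code, not the actual arrayUtil implementation; the length-prefix layout here is an assumption for illustration):

```python
import numpy as np

def array_to_bytes(arr):
    # Encode each element as a 4-byte little-endian length prefix
    # followed by the element's UTF-8 bytes.
    chunks = []
    for item in arr:  # Python-level loop: one iteration per element
        data = item.encode("utf-8") if isinstance(item, str) else bytes(item)
        chunks.append(len(data).to_bytes(4, "little"))
        chunks.append(data)
    return b"".join(chunks)

arr = np.array(["a", "bb", "ccc"], dtype=object)
buf = array_to_bytes(arr)
# 3 length prefixes (4 bytes each) + 6 payload bytes = 18 bytes total
```

Because every element goes through interpreted Python bytecode (type check, encode, int-to-bytes, two list appends), the cost scales linearly with element count, which matches the multi-second timings above for a million elements.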

I looked into using Numba, but Numba doesn't support NumPy arrays of object dtype.
Would a Cython version of arrayUtil be worthwhile?
