
Implement ndarray.byteswap #9422

Open · bmerry wants to merge 5 commits into main
Conversation

bmerry (Contributor) commented Feb 4, 2024

It's implemented for numbers (including complex) and booleans.

On a 1D array it's about 7x faster than NumPy. For higher dimensions it is slower, and it is much slower still for Fortran-order arrays, for the same reasons as in #9400.
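For context, here is a minimal usage sketch of what the overload is intended to enable, assuming it mirrors NumPy's `ndarray.byteswap` and returns a byte-swapped copy by default (the function name below is just illustrative):

```python
import numpy as np
from numba import njit

# Hedged sketch: assumes the PR exposes ndarray.byteswap() in nopython mode
# with NumPy semantics (a byte-swapped copy is returned when inplace is not set).
@njit
def swap_copy(a):
    return a.byteswap()

x = np.arange(4, dtype=np.int32)
assert np.array_equal(swap_copy(x), x.byteswap())  # compare against NumPy's result
```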

This is almost my first PR for numba and I'm pretty new to LLVM as well, so I've got a few questions:

  • I've kept the code smaller by unconditionally bit-casting to and from a suitably-sized integer type. If the value is already an integer type (probably the common case) that adds unnecessary LLVM IR instructions, but I'm assuming that's harmless (see the sketch after this list).
  • In NumPy, byteswap on a scalar with inplace=True raises an exception, and I've mirrored that behaviour. Since numba is statically typed, an alternative is to simply not support that useless parameter, which would move the error to compile time (generally a good thing) but be less compatible with NumPy. Any thoughts?
  • Is there a better way to handle the array iteration? I looked into wrapping the @intrinsic scalar function into a ufunc and calling that (which uses a sensible iteration order for Fortran-order arrays), but ran into some cyclic import problems.
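To make the first bullet concrete, here is a hedged llvmlite-level illustration (not the PR's actual code; the numba implementation would emit equivalent IR from inside an @intrinsic) of byte-swapping a float64 by bit-casting to i64, calling llvm.bswap, and casting back:

```python
from llvmlite import ir

f64 = ir.DoubleType()
i64 = ir.IntType(64)

module = ir.Module(name="byteswap_demo")
fn = ir.Function(module, ir.FunctionType(f64, [f64]), name="byteswap_f64")
builder = ir.IRBuilder(fn.append_basic_block("entry"))

as_int = builder.bitcast(fn.args[0], i64)              # reinterpret the float64 bits as i64
bswap = module.declare_intrinsic("llvm.bswap", [i64])  # llvm.bswap.i64: i64 -> i64
swapped = builder.call(bswap, [as_int])                # reverse the byte order
builder.ret(builder.bitcast(swapped, f64))             # reinterpret back as float64

print(module)                                          # emit the textual IR
```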

It doesn't support records, and probably not datetime and the like.
It still needs some tests for things like byteswapping objects.
The filename is not final because I haven't submitted a PR yet.
bmerry (Contributor, Author) commented Feb 4, 2024

Also, is there anything that should be added/tested for CUDA? I've never tried out the CUDA support in numba.

kc611 added the 3 - Ready for Review and CUDA (CUDA related issue/PR) labels on Feb 9, 2024
stuartarchibald removed the CUDA (CUDA related issue/PR) label on Feb 9, 2024
stuartarchibald (Contributor) commented

Thanks for submitting the PR @bmerry, it's been queued for review. RE: CUDA, there's nothing to be added for now; there is much less NumPy support on the CUDA target than on the CPU target, and if the need arises it can go into a separate PR. Thanks again!
