Minutes_2021_10_19
Valentin Haenel edited this page Oct 19, 2021
Attendees: Guilherme Leobas, Graham Markall, Jim Pivarski, Stuart, Todd Anderson, Siu Kwan Lam, Valentin Haenel, Luk, Nick Riasanovsky, Brandon Willard
NOTE: All communication is subject to the Numba Code of Conduct.
- Preparing for the llvmlite 0.37.1 release
  - include Python 3.10 support
  - blocked by problems pinning the conda compiler toolchain
  - more info on https://github.com/numba/llvmlite/pull/769
- High effort CUDA PRs:
- Lower effort CUDA PRs:
- Reminder for folks to test the "hard error" feature early: https://github.com/numba/numba/pull/7397
- Aesara update
  - missing feature: numpy.ufunc.at (https://numpy.org/doc/stable/reference/generated/numpy.ufunc.at.html)
  - advanced indexing help
  - Jim's idea: call into the NumPy C code instead of lowering in Numba
- #7480 - Need common util to handle Optional types in comparator
- #7482 - Failed in nopython mode pipeline: len(none)
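For context on the missing feature mentioned in the Aesara update, a minimal sketch of what plain NumPy's `ufunc.at` does (unbuffered in-place operation, where repeated indices accumulate rather than overwrite), which is the behavior Numba does not yet support:

```python
import numpy as np

# np.add.at applies the ufunc in place, unbuffered: index 0 appears
# twice, so it is incremented twice. Plain fancy-index assignment
# (a[[0, 0, 1]] += 1.0) would only apply the last write per index.
a = np.zeros(4)
np.add.at(a, [0, 0, 1], 1.0)
print(a)  # [2. 1. 0. 0.]
```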
- #7485 - Validate values on assignment in numba.config
- #7486 - Error importing numba
- #7487 - Random shuffle error in structured array
- #7488 - A case where type inference does not halt
- #7491 - Support for set and arrays of str type
- #7479 - CUDA: Print format string and warn for > 32 print() args
- #7481 - [CI testing only] CUDA FP16 changes related to #7460
- #7483 - NumPy 1.21 support
- #7490 - CUDA: Make the device function dispatcher a _dispatcher.Dispatcher subclass
- #7484 - Fixed outgoing link to NVIDIA documentation
- #7489 - CUDA: Type device functions as dispatchers
- Request for 0.55
  - broadcast_to (https://github.com/numba/numba/pull/7119) - driven by need for vectorize support in Aesara - now merged
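A minimal sketch of the NumPy `broadcast_to` semantics that the merged PR brings to nopython mode (shown here with plain NumPy, since the key property is the same: a read-only view is returned and no data is copied):

```python
import numpy as np

# np.broadcast_to produces a view with the requested shape; the
# broadcast dimension has stride 0, so the source row is reused
# rather than copied.
row = np.array([1.0, 2.0, 3.0])
tiled = np.broadcast_to(row, (2, 3))
print(tiled.shape)       # (2, 3)
print(tiled.strides[0])  # 0 -- no copy along the broadcast axis
```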