Minutes_2022_03_08
esc edited this page Mar 16, 2022
Attendees: Siu Kwan Lam, Graham Markall, Andre Masella, Benjamin Graham, brandon willard, Guilherme Leobas, Jim Pivarski, Nick Riasanovsky, stuart, Todd A. Anderson, Lehman Garrison, Kaustubh Chaudhari, Travis Oliphant
NOTE: All communication is subject to the Numba Code of Conduct.
Please refer to this calendar for the next meeting date.
- #7894 - support for the update method on dicts
- Parity between Numba and NumPy for random sampling:
- Fork issues
- #7870 - Add support of keyword argument to Dispatchers from numba.cuda.jit
- ** #7872 - Interpreter hangs when running parallel function in subprocess with workqueue backend
- #7873 - AttributeError: module 'numba' has no attribute 'jit'
- #7874 - Improve error message in Numpy advanced indexing
- #7879 - Type annotations with custom classes are not supported when pickling jitted functions.
- #7880 - Problem unifying return-types for Numpy arrays
- *** #7881 - objmode() triggers fork()
- #7883 - pyspark, cluster mode, cannot cache function '__shear_dense': no locator available
  - Request: "NUMBA_DISABLE_CACHE"
- #7886 - Numba implementation of numpy.random.vonmises does not match NumPy
- #7887 - Performance of Numba Runtime Allocations
  - Ask about the use-case?
- #7888 - incorrect float64 type assigned
- #7889 - Lowering error
- #7890 - Numba function works randomly with same given input, is this a bug?
- #7871 - accept kwargs for cuda kernels. cache kernel configurations
- #7875 - Merged changes from fix/6543 and added comments
- #7877 - Fix invalid resolution with multiple ConcreteTemplate
- #7878 - CUDA: Remove some deprecated support
- **** #7884 - Implement getattr builtin.
- #7885 - Testhound/fp16 operators1
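For context on #7894, a minimal sketch of the plain-CPython `dict.update` semantics the request asks Numba's dictionaries to support (the dict contents here are illustrative, not from the issue):

```python
# CPython dict.update semantics requested for Numba in #7894:
# existing keys are overwritten, new keys are inserted.
d = {"a": 1, "b": 2}
d.update({"b": 20, "c": 3})
print(d)  # {'a': 1, 'b': 20, 'c': 3}
```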
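For #7873, a common cause of this error class (not necessarily the reporter's exact setup) is a stray module named `numba` shadowing the installed package, e.g. a local `numba.py` on `sys.path`. A hedged reproduction of the shadowing effect:

```python
import sys
import types

# Hypothetical repro: an empty module named "numba" stands in for a
# local numba.py that shadows the real package on sys.path.
fake = types.ModuleType("numba")
sys.modules["numba"] = fake

import numba  # resolves to the shadowing module, not the installed package

try:
    numba.jit
    error_message = None
except AttributeError as exc:
    error_message = str(exc)

print(error_message)  # module 'numba' has no attribute 'jit'
```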
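For #7886, the distribution under discussion is the von Mises distribution; the standard library also ships a sampler for it, shown here purely as context for the semantics being compared (the `mu`/`kappa` values are illustrative, not from the issue):

```python
import math
import random

# random.vonmisesvariate(mu, kappa) samples the same von Mises
# distribution that #7886 compares between Numba and NumPy:
# mu is the mean angle, kappa the concentration, and draws
# lie in the interval [0, 2*pi).
random.seed(42)
sample = random.vonmisesvariate(0.0, 4.0)
print(0.0 <= sample < 2.0 * math.pi)  # True
```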