esc edited this page Mar 16, 2022 · 1 revision

Numba Meeting: 2022-03-08

Attendees: Siu Kwan Lam, Graham Markall, Andre Masella, Benjamin Graham, Brandon Willard, Guilherme Leobas, Jim Pivarski, Nick Riasanovsky, Stuart Archibald, Todd A. Anderson, Lehman Garrison, Kaustubh Chaudhari, Travis Oliphant

NOTE: All communication is subject to the Numba Code of Conduct.

Please refer to this calendar for the next meeting date.

0. Feature Discussion

1. New Issues

  • #7870 - Add support of keyword argument to Dispatchers from numba.cuda.jit
  • ** #7872 - Interpreter hangs when running parallel function in subprocess with workqueue backend
  • #7873 - AttributeError: module 'numba' has no attribute 'jit'
  • #7874 - Improve error message in Numpy advanced indexing
  • #7879 - Type annotations with custom classes are not supported when pickling jitted functions
  • #7880 - Problem unifying return-types for Numpy arrays
  • *** #7881 - objmode() triggers fork()
  • #7883 - pyspark, cluster mode, cannot cache function '__shear_dense': no locator available
    • Request: a "NUMBA_DISABLE_CACHE" environment variable
  • #7886 - Numba implementation of numpy.random.vonmises does not match NumPy
  • #7887 - Performance of Numba Runtime Allocations
    • Ask about the use case
  • #7888 - incorrect float64 type assigned
  • #7889 - Lowering error
  • #7890 - Numba function works randomly with same given input, is this a bug?

Closed Issues

  • #7876 - Numba website shows wrong? version number
  • #7882 - Unable to use numba on MAC m1

2. New PRs

  • #7871 - accept kwargs for cuda kernels. cache kernel configurations
  • #7875 - Merged changes from fix/6543 and added comments
  • #7877 - Fix invalid resolution with multiple ConcreteTemplate
  • #7878 - CUDA: Remove some deprecated support
  • **** #7884 - Implement getattr builtin.
  • #7885 - Testhound/fp16 operators1

Closed PRs

3. Next Release: Version 0.56.0 / 0.39.0 RC
