
Numba Meeting: 2019-11-26

Attendees: Pearu, Val, Stuart, Aaron, Todd, Siu, Stan

0. Feature Discussion

  • Status of test failures on master
    • arange dtype problem
      • WIP
    • cuda unexpected success
      • skipped now
    • test_dataflow.py scipy/cython import issue
      • fixed
  • First-class function support
    • Pearu working on it
  • Set num thread PR
    • Stuart thinks all problems are resolved
    • Need to work on testing
  • Py3.8 status
    • now passing on everything
    • known tuple hash mismatch
    • looplift fix mostly ready
  • Mixed-type tuple iteration
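
A rough illustration of what mixed-type tuple iteration means in practice. This is a sketch, not the design under discussion; it assumes numba.literal_unroll, as provided by later Numba releases for looping over heterogeneous tuples:

```python
from numba import njit, literal_unroll

@njit
def total(tup):
    # literal_unroll versions the loop body per element, so each
    # iteration may see a different element type.
    acc = 0.0
    for x in literal_unroll(tup):
        acc += x
    return acc

print(total((1, 2.5, True)))  # heterogeneous (int, float, bool) tuple
```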

1. New issues

  • #4890 - TypingError raised with float32 arguments to np.interp
  • #4889 - wont njit compile usage of a tuple of functions (with same real->real interface) in a loop
  • #4888 - TypeError using default arguments with multiprocessing
  • #4887 - numba.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
  • #4886 - Acquiring/releasing __cuda_array_interface__
    • needs discussion
  • #4884 - Switch internal Numba RNG implementation (on CPU) to use new NumPy random C interface
    • eventually, NumPy 1.18 will have the C API
    • nothing to do yet
  • #4879 - Typed lists of typed dicts can't be compared for equality
    • Needs debugging
  • #4877 - Could numba support big integers for pow(x, y, z) or x**y mod z?
    • Yes, but this should be delegated to an external multi-precision library (see the sketch after this list)
  • #4876 - Struct spec/implementation of __cuda_array_interface__
    • Discussion outlines reasonable approach
  • #4875 - unexpected success on warp_divergence test
    • Going to skip this test for now
    • Caused by new Python 3.8 control-flow graph (CFG) changes; may not be a bug, but still checking
  • #4873 - Add functionality to a numba.typed.Dict
    • Request for a heterogeneous dictionary
  • #4872 - Is it possible to let a jit CPU function call a jit CUDA kernel?
    • I wish
    • would be nice someday
  • #4870 - import numba crashed after import torchvision
    • Need to confirm whether or not this happens with wheels that are statically linked.
    • Also in favor of prefixing all LLVM symbols
  • #4869 - Can not use vectorized, nopython functions (passed as arguments) inside njit'd functions
    • vectorized functions are not the same as jit() functions, so they hit separate issues when passed as arguments
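
For #4877 above, a minimal sketch of the kind of delegation suggested: drop to object mode and let CPython's arbitrary-precision integers (standing in here for an external multi-precision library) perform the three-argument pow(). The int64 result type is an assumption made purely to keep the example simple:

```python
from numba import njit, objmode

@njit
def powmod(x, y, z):
    # Numba's fixed-width integers cannot hold the big intermediates,
    # so hand the modular exponentiation to CPython in object mode.
    with objmode(r='int64'):        # assumes the result fits in int64
        r = pow(int(x), int(y), int(z))
    return r

print(powmod(2, 1_000_000, 1_000_003))
```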

Already Closed

  • #4878 - a problem with using numba with python operation

2. New Open PRs

  • #4883 - [WIP] from future typed list

  • #4881 - fix refining list by using extend on an iterator

  • #4871 - Implement str.translate()

  • #4868 - Add functionality for str.endswith()

Already merged/closed

  • #4880 - arange returns max of bounds types
  • #4885 - suppress spurious RuntimeWarning about ufunc sizes
  • #4882 - Fix return type in arange and zero step size handling.
  • #4874 - Bump to llvmlite 0.31

3. Next Release: Version 0.47.0, RC=December 19

  • CPython 3.8

  • Requests for 0.47 (last release for the year):
    • jitclass performance issues
    • LLVM 9 trial
    • CTK libcudadevrt.a
    • CI needs to take 50% of current time
      • Val & Stu already looking at this
      • also checking Azure CI config to avoid wasting compute time
    • Caching:
      • transitive dependency
      • other issues, e.g. function as argument, with objmode
      • distributing cache
    • Immutable list and deprecating reflected list
    • Switch to pytest (see above)
    • Using Numba to generate LLVM/NVVM IR for different targets: https://github.com/numba/numba/issues/4546 (see the sketch below)
    • @overload for GPU
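
Related to the "generate LLVM/NVVM IR for different targets" request above: for the CPU target this kind of introspection already exists via the dispatcher's inspect_llvm(), which is roughly the capability #4546 asks to generalise to other targets. A small sketch:

```python
from numba import njit

@njit
def axpy(a, x, y):
    # Trivial kernel, only here so there is something to compile.
    return a * x + y

axpy(2.0, 3.0, 4.0)  # trigger compilation for (float64, float64, float64)

# inspect_llvm() maps each compiled signature to its LLVM IR as text.
for sig, ir in axpy.inspect_llvm().items():
    print(sig)
    print(ir.splitlines()[0])  # first line of the module's IR
```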
