
Numba Meeting: 2020-06-16

Attendees: Siu, Aaron, Graham, Guilherme, Pearu, Stuart, Todd, Ehsan, Val

0. Feature Discussion

  • Open meeting

    • Siu: useful to hear from community, productive discussions. Good as a first meeting to hear what people were very passionate about.
    • Graham: What was said was closely aligned with pain points of which we were aware.
    • Ehsan: Scale of attendance was encouraging. Agenda/organisation could do with some improvement. Post topics ahead of time.
    • Siu: Feels similarly, first run was an experiment, now we have this experience it'll be easier to focus on a specific agenda.
    • Pearu: Agenda suggestion
      • 10 min introducing new features in Numba
    • Ehsan: Agenda suggestion
      • project direction, feedback
    • Ehsan: Level of engagement was impressive.
    • Siu: Attendees were essentially acting as representatives of larger projects.
  • Next open meeting:

    • 2nd Tue of each Month
    • July 14th
  • numba.discourse.group

  • contrib module https://numba.discourse.group/t/addons-contrib-repo/29/2

    • Hameer: concern about what is public vs private API
    • Stuart: private APIs needed by extensions should move into numba.extending. The contrib/addon package can act as a guide for what needs to be moved.
    • Ehsan: asked Stuart about numba-scipy
    • Stuart: the community asked why SciPy can't use Numba directly
    • Hameer: the scikit-learn community feels Numba lacks tooling for profiling, debugging, etc.
    • Stuart: it may be a better use of resources to work on that tooling.
    • General discussion about where e.g. SciPy support should live.
    • Stuart/Siu prefer keeping it outside core Numba, along with perhaps moving NumPy support out too.
    • Raise as a discussion topic for the community meeting.
  • Do we do a 0.50.1?

    • issue: typing errors in CUDA are eaten (not reported)
    • issue: deprecation notices didn't get version bumps
    • issue: get_terminal_size
  • High risk items 0.51:

    • Explore moving SSA pass up the pipeline
      • test if more passes can work in SSA form
    • LLVM 10
      • optional:
        • MCJIT -> ORC-JIT ?
        • issue with MCJIT leaking JIT'ed module
        • LLVM C++ refct pruning FunctionPass
    • caching for functions using with objmode (see the sketch after this list)
    • CUDA Dispatcher / kernel interface:
      • Share machinery with CPU target (e.g. for multiple signatures, typeconv, etc.)
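
    For context on the objmode caching item above, here is a minimal sketch of the pattern that work targets, assuming the existing numba.objmode context manager and cache=True behaviour; the function name and types are illustrative:

    ```python
    from numba import njit, objmode

    @njit(cache=True)  # making cache=True work for functions containing objmode blocks is the open item
    def scaled(x):
        # The with-block body runs in object (interpreter) mode; 'y' is
        # declared as float64 so the nopython caller knows its type on exit.
        with objmode(y='float64'):
            y = float(x) * 2.0
        return y + 1.0
    ```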
  • Typed Set/List deprecation

    • challenging to have a switch to make the typed list the default
    • the same goes for numba.typed.Set once it has been written (see the sketch below)
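
    For reference, a minimal sketch of the typed list that such a switch would make the default; numba.typed.List is existing API, and the values here are illustrative:

    ```python
    from numba import njit, int64
    from numba.typed import List

    @njit
    def append_squares(lst, n):
        # lst is a numba.typed.List whose item type is fixed (int64 here),
        # unlike a reflected Python list.
        for i in range(n):
            lst.append(i * i)
        return lst

    squares = List.empty_list(int64)  # explicitly typed, empty container
    append_squares(squares, 5)
    print(list(squares))  # [0, 1, 4, 9, 16]
    ```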

1. New Issues

  • #5868 - TypeError: compile_kernel() got an unexpected keyword argument 'boundscheck'
  • #5865 - Minimum time for deprecation cycles?
  • #5864 - Support for np.fft.fft, np.fft.ifft etc.
  • #5863 - Add Table with llvm, llvmlite and numba compatibility to Readme
  • #5860 - Typing errors in device functions aren't properly reported
    • breaking CUDA usability
  • #5858 - Can numba accelerate a loop with a trained xgboost model in it?
  • #5854 - Installing dependencies with python setup.py install raises SyntaxError
  • #5853 - Works as plain python, core dump as @njit
  • #5847 - Improved error message for non existing variable
  • #5845 - Refactor deepcopy func_ir and its statements.
  • #5844 - Refactor source location info into Template
  • #5839 - Dispatching for custom types is way slower than builtin types
  • #5836 - CUDA: Passing opt=0 to NVVM doesn't work
  • #5835 - CUDA debug info is invalid - compile units have an empty list of subprograms
  • #5831 - Make boxing two-phase to more efficiently support dependent types
  • #5829 - Incorrect results when working with transposed arrays
  • #5828 - native<->objmode calls overhead for small functions
  • #5827 - Can't njit code containing an np.ndarray subclass

Closed Issues

  • #5867 - Numba doesn't accelerate recursive function despite nopython=True passing
  • #5855 - Error when installing numba by pip
  • #5848 - operator.pos fails on strided arrays
  • #5843 - Numba 0.50.0 Checklist
  • #5837 - Incorrect output when calling np.array on a list (python 3.8)
  • #5832 - Slower initial compilation with newer numba from Miniconda
  • #5825 - Weird behavior for loop

2. New PRs

  • #5866 - [WIP] Implement str and repr builtins
  • #5861 - Added except for possible Windows get_terminal_size exception
  • #5857 - CUDA docs: Add notes on resetting the EMM plugin
  • #5856 - Add support for conversion of inplace_binop to parfor.
  • #5851 - CUDA EMM enhancements - add default get_ipc_handle implementation, skip a test conditionally
  • #5850 - Updates the "New Issue" behaviour to better redirect users.
  • #5846 - CUDA: Allow disabling NVVM optimizations, and fix debug issues
  • #5841 - cleanup: Use PythonAPI.bool_from_bool in more places
  • #5840 - Typed Tuple
  • #5834 - Fix the is operator on Ellipsis
  • #5826 - CUDA: Add function to get SASS for kernels

Closed PRs

  • #5862 - Do not leak loop iteration variables into the numba.np.npyimpl namespace
  • #5859 - CUDA: Fix reduce docs and style improvements
  • #5852 - CUDA: Fix cuda.test()
  • #5849 - Setitem for records when index is StringLiteral, including literal unroll
  • #5842 - Update CHANGE_LOG for 0.50.0 final.
  • #5838 - Ensure Dispatcher.__eq__ always returns a bool
  • #5833 - Fixes the source location appearing incorrectly in error messages.
  • #5830 - doc: Mention that caching uses pickle

3. Next Release: Version 0.51.0, RC=22 July, Final 29 July?

  • Requests for 0.51

  • High-risk items for 0.51.

  • 0.51 potential tasks (To be updated)

4. Upcoming tasks

  • Opening up the numba meeting