Valentin Haenel edited this page Mar 30, 2020 · 1 revision

Numba Meeting: 2020-03-10

Attendees: Graham, Pearu, Stuart, Val, Siu, Aaron, Todd, Gui

0. Feature Discussion

  • #5162 - is it converging?
    • still fanning out
    • Graham will try to summarize the ideas in there
    • Move to 0.50
  • #5144 - status?
    • delete the problematic test
  • state of jitclass
    • Q: the makeover planned for 2019 didn't happen; is it in the plan for 2020? Is there any value in spending time on new features/improvements, or will jitclass be entirely redone?
      • Q3 at earliest.
      • Don't burn lots of time.
      • Might need a lot of rewrite.
      • Might want to re-think what it is for
  • first class function
    • target to finish this week
    • discard the first PR.
  • proposal 1: burndown behaviour/release cycle
    • 4 dev weeks + 2 weeks burndown
    • dev weeks:
      • works on high risk features
      • have kickoff at the start of cycle
        • declare the high-risk features which of us will be doing
        • review these features
    • burndown weeks:
      • no major features
      • focus on bug fixing
    • No objections.
  • proposal 2: triager? issues czar?
    • start with try out period to help define this role.

1. New Issues

  • #5367 - SimpleIteratorType issues
  • #5365 - Type inference for integers is odd
  • #5364 - RuntimeError: missing environment when iterating over closure in intrinsic
    • odd; needs digging
  • #5363 - Make it really obvious that the "dev" docs are for unreleased/development versions
    • doc
  • #5358 - Support 0d arrays in cuda
    • TODO: Siu
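    • As context for the 0-d array request, a quick plain-NumPy aside (no numba/CUDA needed; this only illustrates what a 0-d array is, not the CUDA support itself): a zero-dimensional ndarray holds a single scalar but is still a full array object.

    ```python
    import numpy as np

    a = np.array(3.5)        # a 0-d array: one value, no axes
    print(a.ndim, a.shape)   # 0 ()
    print(a[()])             # indexing with the empty tuple extracts the scalar
    ```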
  • #5355 - typed.List's empty_list function cannot accept allocated parameter in nopython mode
    • fix in #5356
  • #5354 - numpy.take cannot compile with typed.List as indices parameter
    • relates to reflected list deprecation
  • #5353 - typed.List variables cannot be assigned from reflected_list
  • #5350 - yield statement yields leaky memory
    • confirmed memory leak
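    • A generic pattern for confirming a per-call leak like the one reported (a hedged sketch, not Numba's actual test; numba itself is omitted so the snippet stays self-contained): measure allocation growth across repeated calls with `tracemalloc`; a leak shows up as monotonic growth.

    ```python
    import tracemalloc

    def gen():
        for i in range(100):
            yield i

    def run_once():
        return sum(gen())

    tracemalloc.start()
    run_once()                          # warm up any one-time allocations
    base = tracemalloc.take_snapshot()
    for _ in range(50):
        run_once()
    after = tracemalloc.take_snapshot()
    growth = sum(s.size_diff for s in after.compare_to(base, "lineno"))
    tracemalloc.stop()
    # near zero here; a leaking implementation grows with the iteration count
    print("net allocation growth:", growth, "bytes")
    ```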
  • #5349 - Parfors, LLVM IR Parsing error from conditional needing cast.
  • #5344 - literal_unroll != numba.literal_unroll?
    • the common getattr thing
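    • A hedged illustration of the generic pitfall behind this (pure Python, not Numba's code): when a package exposes attributes lazily via a module-level `__getattr__` (PEP 562), the object returned there can differ from one defined directly, so `numba.literal_unroll` and an imported `literal_unroll` can end up being different objects.

    ```python
    import types

    pkg = types.ModuleType("pkg")      # stand-in for a real package

    def real_helper():
        return "real"

    def shim_helper():
        return "shim"

    def _module_getattr(name):
        # e.g. a deprecation/lazy-loading shim returning a wrapper
        if name == "helper":
            return shim_helper
        raise AttributeError(name)

    pkg.__getattr__ = _module_getattr  # PEP 562 module __getattr__ hook

    # attribute access goes through the hook, not to real_helper
    print(pkg.helper())                        # shim
    print(pkg.helper is real_helper)           # False
    ```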
  • #5339 - Add a as_constant helper for forcing computations to compile-time

Closed Issues

  • #5362 - 'numba' has no attribute 'set_num_threads'

2. New PRs

  • #5366 - Remove reference to Python 2.7 in install check output
  • #5361 - 5081 typed.dict
  • #5360 - Remove Travis CI badge.
  • #5359 - Remove special-casing of 0d arrays
  • #5357 - Small fix to have llvm and asm highlighted properly
  • #5356 - implement allocated parameter in njit
  • **** #5352 - Add shim to accommodate refactoring.
    • integration tests are mostly passing
    • will announce it to get developers to test it
  • **** #5351 - SSA again
    • Pass strips SSA just prior to lowering to match the strict semantic requirements for LLVM
    • SSA can be constructed and stripped manually, good for testing/R&D.
    • Current state: slowly fixing bugs in the implementation.
    • Need to see what happens in parfor lowering.
    • SSA labelled issues, add as tests.
    • Parfors would prefer not to have SSA reconstructed by its passes
    • Warn/validate when reconstruction causes violations; aim for full SSA in 0.50.
      • plan it!
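    • The core renaming step behind SSA construction can be sketched in a few lines (a toy model for intuition only, not Numba's pass; it handles straight-line code, no control flow or phi nodes): each re-assignment of a variable gets a fresh versioned name, and every read refers to the latest version.

    ```python
    def to_ssa(stmts):
        """stmts: list of (target, [used_vars]); returns the renamed list."""
        versions = {}              # latest version number per variable

        def cur(v):                # current SSA name for a variable read
            return f"{v}.{versions[v]}" if v in versions else v

        out = []
        for target, uses in stmts:
            renamed_uses = [cur(u) for u in uses]       # rename reads first
            versions[target] = versions.get(target, -1) + 1
            out.append((f"{target}.{versions[target]}", renamed_uses))
        return out

    # x = a; x = x + b   becomes   x.0 = a; x.1 = x.0 + b
    print(to_ssa([("x", ["a"]), ("x", ["x", "b"])]))
    ```

    Stripping SSA before lowering is then essentially the reverse mapping: drop the version suffixes so each variable has a single storage slot again.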
  • #5348 - Add support for inferring the types of object arrays
  • #5347 - CUDA: Provide more stream constructors
  • **** #5346 - Docs: use sphinx_rtd_theme
    • Looks great!
  • #5343 - Fix cuda spoof
  • ** #5342 - Support for tuples passed to parfors.
  • #5341 - Misc/testpr5172
    • testing only
  • #5340 - [WIP] Package serialization functions into a pickleable class

Closed PRs

  • #5345 - Disable test failing due to removed pass.
  • #5338 - Remove refcount pruning pass

3. Next Release: Version 0.49.0, RC=March 19

  • Requests for 0.49

    • ** #5189 (__cuda_array_interface__ not requiring context)?
    • TBD
  • Future Release Plan

    1. LLVM9
    2. Explore LLVM pass for refcount pruning
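The refcount-pruning idea can be sketched as a toy peephole pass (a hedged model for intuition, not Numba's LLVM pass): an incref immediately followed by a decref of the same value, with no intervening use, cancels out and can be deleted without changing observable behaviour.

    ```python
    def prune_refcounts(ops):
        """ops: list of ("incref" | "decref" | "use", value) tuples.
        Removes incref/decref pairs on the same value that are adjacent."""
        out = []
        for op in ops:
            if out and out[-1] == ("incref", op[1]) and op[0] == "decref":
                out.pop()          # the adjacent pair cancels out
            else:
                out.append(op)
        return out

    ops = [("incref", "a"), ("decref", "a"), ("use", "a")]
    print(prune_refcounts(ops))    # [('use', 'a')]
    ```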