Valentin Haenel edited this page May 4, 2021 · 1 revision

Numba Meeting: 2021-05-04

Attendees: Brandon, ToddA, Val, Stuart, Siu

NOTE: All communication is subject to the Numba Code of Conduct.

0. Feature Discussion/admin

  • General updates:

    • vendored cloudpickle 1.6.0
    • refactor of hardware target API
    • ROCm unmaintained status
      • Reasons:
        • Difficult to port to the new target extension API
        • Current code only works with the ROCm v2 series, but the latest is v4.
        • Lack of active users
        • Lack of testing hardware for newer ROCm versions
      • Will create a new repo to document the above
    • No compilation on import
      • Faster import time!
      • No side effects on import
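    A rough illustration of the idea in plain Python (not Numba internals; `lazy_compile` and the counter are hypothetical): the expensive "compile" step is deferred from decoration/import time to the first call, so importing the module stays cheap and side-effect free.

    ```python
    import functools

    compile_calls = 0  # counts how many times the expensive step ran

    def lazy_compile(fn):
        """Defer a (stand-in) compile step until the first call."""
        state = {}

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if "compiled" not in state:
                global compile_calls
                compile_calls += 1
                state["compiled"] = fn  # stand-in for an expensive compile
            return state["compiled"](*args, **kwargs)

        return wrapper

    @lazy_compile
    def add(a, b):
        return a + b

    assert compile_calls == 0   # nothing compiled at import/decoration time
    assert add(1, 2) == 3
    assert compile_calls == 1   # compiled lazily on first call, cached after
    ```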
  • NumPy subclassing discussion

    • Type-based

    example 1: Todd's PR proposal

    @allocator(MyType)
    def impl():  # need a target
        ...  

    example 2: using @overload + target

    @overload(allocate, target="mygpu")
    def ol_allocate(size):  # need a type
        ...

    example 3: as the (class)method of the Type

    @intrinsic(target="mygpu")
    def intrinsic_allocate(tyctx, size):
        # can get the target
        # get a sig
        def impl(cgctx, builder, args, llargs):
            ...  # stuff
        return sig, impl
        
    @overload_method(MyType, "allocate", target="mygpu")
    def ol_allocate():
        def impl():
            intrinsic_allocate()
        return impl
        
    @jit
    def foo():
        MyTypeInst.allocate(sz)

    example 4: allocator on target context

    @intrinsic(target="mygpu")
    def allocate(tyctx, size):
        def impl(cgctx, <stuff>):
            cgctx.allocate()
        return impl

    Question: can it be guaranteed that, for a given target and a given type, there is a single allocator?

    Resolution:

    • Example 3 is needed
    • @overload_classmethod is needed
    • To determine how challenging it is, we may need to adopt example 1 temporarily.
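    The single-allocator question above can be sketched in plain Python (names like `register_allocator` are illustrative, not the Numba API): a registry keyed by `(target, type)` that rejects duplicate registrations enforces one allocator per pair by construction.

    ```python
    # Registry mapping (target, type) -> allocator implementation.
    _allocators = {}

    def register_allocator(typ, target):
        """Register one allocator per (target, type); duplicates raise."""
        def decorator(impl):
            key = (target, typ)
            if key in _allocators:
                raise ValueError(f"allocator already registered for {key}")
            _allocators[key] = impl
            return impl
        return decorator

    class MyType:
        pass

    @register_allocator(MyType, target="mygpu")
    def allocate_mytype(size):
        # stand-in allocation: a plain bytearray of the requested size
        return bytearray(size)
    ```

    With this scheme, a second registration for the same `(target, type)` pair fails loudly, which is one way to provide the guarantee discussed above.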
  • Discussion on https://github.com/numba/numba/pull/6996

  • At minimum 4 weeks away from the next release. Reminder for folks to submit large PRs soon to make it into the release.

1. New Issues

  • #6984 - AttributeError: 'Array' object has no attribute 'shape'
  • #6980 - jax.ndarray.shape not working in @njit with error `unknown attribute shape of readonly buffer`
    • Probably best to suggest what options exist
  • #6979 - LLVM IR parsing error when np.bool_ used inside a closure
    • Lots of debugging for an eventually very short fix; PR available

Closed Issues

2. New PRs

  • #6991 - Move ROCm target status to "unmaintained".
  • #6990 - Refactor hardware extension API to refer to "target" instead.
  • #6989 - update threading docs for function loading
  • #6988 - Refactor fold_arguments and add typing
  • #6987 - Implement int64() cast and __hash__ for datetime-like types.
  • #6986 - document release checklist
  • #6985 - Implement static set/get items on records with integer index
  • #6983 - Support Optional types in ufuncs.
  • #6981 - Fix LLVM IR parsing error on use of np.bool_ in globals

Closed PRs

  • #6982 - [DO NOT MERGE] Enh/cloudpickle hack
  • #6978 - Implement operator.contains for empty Tuples
  • #6977 - Vendor cloudpickle

3. Next Release: Version 0.54.0/0.37.0, RC=June 2021
