Minutes_2021_04_06
Attendees:
NOTE: All communication is subject to the Numba Code of Conduct.
- 0.53.1: no reported (new) issues
- Implementing new features as per CPython and acknowledging protocols.
  - Example: `hash()` delegating to `type.__hash__()`
  - Existing example: `str()`
  - Graham: a difficult situation
    - CUDA Array Interface
```python
# Sketch from the discussion; `mynewtype` stands in for a user-defined
# Numba type.
from numba.extending import overload, overload_method

def foo(x):
    hash(x)

@overload(hash)
def ol_hash(x):
    def impl(x):
        return x.__hash__()
    return impl

@overload_method(mynewtype, '__hash__')
def ol_mynewtype_hash(thetype):
    def impl(thetype):
        return "this is a poor hash"
    return impl
```
```python
# Sketch from the discussion; `query_type_has` is pseudocode for a typing
# query that checks whether a type registered the given method.
from numba.extending import overload

@overload(str)
def ol_str(x):
    if query_type_has(x, "__str__"):
        def impl(x):
            return x.__str__()
    else:
        def impl(x):
            return "<default .....>"
    return impl
```
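A minimal usage sketch of the pattern above: Numba already routes `hash()` for built-in types through registered `__hash__` overloads, so a jitted call follows the protocol. The function name `hash_it` is illustrative.

```python
from numba import njit

@njit
def hash_it(x):
    # Dispatches through the registered __hash__ implementation for the type
    return hash(x)

print(hash_it(42) == hash(42))  # True: Numba matches CPython's int hashing
```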
- Other points for consideration:
  - Avoid spelling it twice/in two places: just having `@overload_method` should give all the information needed to query for a method on a type (see the sketch after this list).
  - Target specific?
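A runnable sketch of that point, using the documented `@overload_method` API; the method name `first` is hypothetical and chosen only for illustration:

```python
import numpy as np
from numba import njit, types
from numba.extending import overload_method

# A single registration is all the typing machinery needs to find
# the (hypothetical) method 'first' on array types.
@overload_method(types.Array, 'first')
def array_first(arr):
    def impl(arr):
        return arr.ravel()[0]
    return impl

@njit
def use_first(a):
    return a.first()

print(use_first(np.arange(3, 10)))  # prints 3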
- Discussion evolved into purity in typing vs. an informative user error for a specific target:
  - Data types are pure (the same across targets).
  - Function types are impure (they depend on whether the target registered an implementation, and on what is inside it).
  - In other words: typing is pure in the data model, but not in functions (see the sketch below).
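A sketch of the impurity point, assuming the `target` keyword from the in-progress target extension work; the function names are hypothetical:

```python
from numba.extending import overload

def where_am_i():
    pass  # pure-Python placeholder; behaviour is supplied per target

@overload(where_am_i, target="cpu")
def ol_where_am_i_cpu():
    def impl():
        return 1  # CPU-only implementation
    return impl

# With no "cuda" (or generic) registration, typing where_am_i() inside a
# CUDA kernel fails: the function's type depends on the target, whereas a
# data type such as int64 means the same thing on every target.
```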
- The typing context will have to care about the hardware, because an implementation will potentially cross a function boundary.
  - However, a lot of this will be generic and able to use generic implementations anyway.
  - e.g. `__str__()`, `__hash__()`, `__len__()`, etc. are the same on all targets (see the sketch below).
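A sketch of such a generic registration, assuming `"generic"` is an accepted target name shared by all targets in the target extension work; `double` is a hypothetical function:

```python
from numba import njit
from numba.extending import overload

def double(x):
    return 2 * x  # pure-Python fallback

# Registered once for the generic target; any hardware target without its
# own specialised version can fall back to this implementation.
@overload(double, target="generic")
def ol_double(x):
    def impl(x):
        return 2 * x
    return impl

@njit
def use_double(x):
    return double(x)

print(use_double(21))  # 42 on the CPU target
```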
- "Initial support for NumPy subclasses" https://github.com/numba/numba/pull/6148
- #6895 - `:=` operator cannot be jitted by at least the cuda module
- #6894 - closure inlining not dealing with star-args correctly
- #6893 - Catch incompatible cudart arch
- #6892 - Numba and tbb version 2021 packages incompatibility on Linux
- #6891 - CUDA functions treat `*args` as a normal parameter
- #6887 - Allow setting across slices of slices
- #6884 - CUDA matrix multiplication example is probably wrong
- #6882 - LoweringError using njit(parallel=True) when using python 3.8 (but not when using 3.7)
- #6875 - Issue with `__synchronize()` identifier undefined.
- #6890 - fixes #6884
- #6889 - Address guvectorize too slow for cuda target
- #6888 - Get overload to consider compiler flags in cache lookup
- #6886 - CUDA: Fix parallel testing for all testsuite submodules
- #6885 - CUDA: Explicitly specify objmode + looplifting for jit functions in cuda.random
- #6883 - Add support of np.swapaxes #4074
- #6881 - support the 'reversed' function for ranges, lists, and arrays
- #6880 - Add attribute lower_extension to CPUContext
- #6879 - CUDA: NumPy and string dtypes for local and shared arrays
- #6878 - CUDA: Support passing tuples to ufuncs
- #6877 - Add doc for recent target extension features
- #6874 - Clarify conversions to numeric types
- #6876 - Add trailing slashes to dir paths in CODEOWNERS