Minutes_2021_04_27
Attendees: Brandon, Caleb, Hernan, Leo, Luk, Todd, Siu, Stu, Val
NOTE: All communication is subject to the Numba Code of Conduct.
- Python 3.6 removal.
  - following NEP 29
  - NumPy 1.20, and minimum version move to 1.17?
    - still have a few regressions
    - NumPy changes behavior, e.g. NaN cast to int (see the sketch below)
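A quick illustration of the casting behavior in question (plain NumPy, illustrative only): casting NaN to an integer is undefined in C, so the value produced can differ across platforms and NumPy versions, which is the kind of change that breaks pinned test expectations.

```python
import numpy as np

x = np.array([np.nan])
# Undefined behavior: the result is platform- and version-dependent
# (e.g. INT64_MIN on x86_64 Linux), so tests asserting one value regress.
print(x.astype(np.int64))
```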
- No JIT on import.
- Vendoring cloudpickle 1.6.0 (PR #6977)
  - Already have parts of this in the code base; this just gets everything else.
- LLVM 11 update.
  - LLVM 11 builds are done, as are llvmlite builds. All under the `llvm11` label on the Numba channel.
  - Numba currently failing, mostly SVML tests; also some issues on AArch64 that need investigation.
- Brandon talks about Aesara from the PyMC team
  - Theano used to produce C files to be compiled by a C compiler.
  - plan to use Numba to replace the C backend as the default backend
  - already done a JAX backend
  - Questions:
    - Q: Stuart asks why JAX is not the default
    - A:
      - JAX needs too many workarounds
      - limitations from tracing
        - shapes cannot be symbolic under tracing (see the sketch below)
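A minimal sketch (illustrative, not from the meeting) of the tracing limitation mentioned above: under `jax.jit`, shapes must be concrete at trace time, so an operation whose output shape depends on runtime values fails to trace.

```python
import jax
import jax.numpy as jnp

@jax.jit
def keep_positive(x):
    # Boolean masking produces a data-dependent output shape,
    # which JAX's tracer cannot represent symbolically.
    return x[x > 0]

keep_positive(jnp.array([-1.0, 2.0, 3.0]))  # raises a tracing error
```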
- Brandon also talks about http://minikanren.org/
  - a DSL for logic programming
  - Python binding: https://github.com/pythological/kanren
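For flavor, the canonical miniKanren example in the Python binding linked above:

```python
from kanren import run, eq, var

x = var()
# Find one value of x that unifies with 5.
print(run(1, x, eq(x, 5)))  # (5,)
```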
- #6976 - parallel mode ignores divide by zero errors, sometimes fills with nonsense
- #6975 - Heap slower than without numba??
- #6974 - Error while import
- #6973 - "cannot import name '_typeconv' from 'numba.core.typeconv'"
- **#6972 - Wrapper or type to avoid inlining**
  - caching function arguments vs. inlining
  - user wants noinline: function arguments should be passed as pointers
  - TODO: check that first-class functions work and their effect on caching (see the sketch after this list)
  - Luk comments:
    - a workaround using a container of function pointers
    - a partial signature, i.e. marking an argument to use a function pointer
- #6969 - Parfor reductions not working when overload uses inline='always' and parallel=True flags
- #6967 - CUDA vectorize returns array from scalar input
- #6965 - CUDA: Atomics tests fail with NumPy 1.20
- #6962 - Parallelism that worked in 0.52 no longer works in 0.53.1
- #6960 - Regression with parfor in numba=0.53.1 when aliased arrays are struct attributes
- #6959 - cannot import numba inside ipykernel: OSError: Could not load shared object file: llvmlite.dll
- #6957 - Caching of functions with keyword arguments
- #6956 - Excessive recompilation due to use of `literally` and potentially unaware in-memory cache.
- #6955 - CUDA: Checklist of features required for Awkward Array extensions
- #6954 - @vectorize with *args failing
- #6952 - Vectorized ufuncs don't respect `casting` keyword argument
- #6951 - Support for array of objects / jitclasses
- #6950 - Numba master breaks cuDF
- #6949 - `x += x.T` and `x = x + x.T` yield different results.
- #6947 - Losing local struct variable pointer in jitted function
  - another struct scalar outlives the array because of legacy code
- #6946 - Performance issue: conditional inline array allocation (not called) or different order of conditions -> unexpectedly slow
- #6943 - Error storing record view
- #6942 - Inconsistent self-values assignment within 2-dimensional array
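On the TODO under #6972 above: a minimal sketch, assuming Numba's (experimental) first-class function types; the function names here are made up for illustration. Declaring the callee parameter as a `types.FunctionType` passes it as a function pointer rather than specializing (and potentially inlining) the caller per callee.

```python
from numba import njit, types, float64

@njit(float64(float64))
def add_one(x):
    return x + 1.0

# The first argument is typed as a first-class function, so callees
# are passed as function pointers instead of triggering a separate
# specialization of apply() for each function passed in.
fn_type = types.FunctionType(float64(float64))

@njit(float64(fn_type, float64))
def apply(f, x):
    return f(x)

print(apply(add_one, 2.0))  # 3.0
```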
- #6970 - @intrinsic not working with CUDA
- #6968 - CUDA: Implement `printf()`
- #6963 - TBB test_fork_from_non_main_thread failing intermittently
- #6964 - Move minimum supported Python version to 3.7
- #6961 - Update overload glue to deal with typing_key
- #6953 - CUDA: Fix and deprecate `inspect_ptx()`, fix NVVM option setup for device functions
- #6948 - Refactor registry init.
- #6971 - Fix CUDA `@intrinsic` use
- #6966 - Fix issue with TBB test detecting forks from incorrect state.
- #6958 - Inconsistent behavior of reshape between numpy and numba/cuda device array
- #6945 - Fix issue with array analysis tests needing scipy.
- #6944 - CUDA: Support for `@overload`
- #6941 - ABC the target descriptor and make consistent throughout.