Minutes_2018_07_19
Stan Seibert edited this page Jul 26, 2018
Attendees: Siu, Stuart, Todd, Ehsan
- #3131: LoweringError: Failed at object (object mode frontend)
- Stuart has a patch
- #3130: Getting Thread IDs in prange
- Need discussion
- May not make sense with our thread-scheduling model
- #3129: Numpy array method "sum" with "axis" argument not supported in parallel mode.
- Patch #3127 will make this possible
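The pattern at stake in #3129 is an axis reduction; a minimal plain-NumPy sketch of the behavior patch #3127 should enable inside parallel-compiled functions:

```python
import numpy as np

# Axis reductions as requested in #3129: sum along a chosen axis.
a = np.arange(6).reshape(2, 3)

col_sums = a.sum(axis=0)  # reduce over rows -> shape (3,)
row_sums = a.sum(axis=1)  # reduce over columns -> shape (2,)
```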
- #3123: np.nditer erroneously works on 0 size arrays bug
- part of the family of 0-size array issues
- #3121: Feature request: np.fill_diagonal
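For reference, the NumPy function requested in #3121 writes a value along the main diagonal in place:

```python
import numpy as np

# np.fill_diagonal (the feature requested in #3121) mutates in place.
m = np.zeros((3, 3))
np.fill_diagonal(m, 7.0)
```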
- #3120: ndarray.flat does not support setitem bug
- #3119: min/max behaves incorrectly on 0-size array bug
- part of the family of 0-size array issues
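NumPy's reference behavior for #3119 is to raise on a 0-size reduction rather than return garbage; a short sketch of the semantics PR #3124 brings to Numba:

```python
import numpy as np

# min/max on a 0-size array has no identity, so NumPy raises.
empty = np.empty(0)
try:
    empty.min()
    raised = False
except ValueError:
    raised = True
```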
- #3118: [Question] List comprehension compile failed
- Similar to #3131
- #3117: Strange behavior of flat/ravel when combined with nopython/parallel
- many related issues in other tickets
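The `.flat`/`.ravel()` combination reported in #3117 is subtle even in plain NumPy; a minimal sketch of the two APIs on a non-contiguous view:

```python
import numpy as np

# .flat is a C-order iterator; .ravel() returns an array
# (a copy when the input is non-contiguous, as here).
a = np.arange(6).reshape(2, 3)
view = a[:, ::2]              # non-contiguous column slice

flat_list = list(view.flat)
raveled = view.ravel()
```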
- #3115: np.broadcast support
- feature request
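For context, the `np.broadcast` object requested in #3115 exposes the broadcast shape and paired elements of its inputs:

```python
import numpy as np

# np.broadcast of shapes (3,) and (2, 1) broadcasts to (2, 3).
b = np.broadcast(np.arange(3), np.arange(2)[:, None])
pairs = list(b)  # element pairs in broadcast order
```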
- #3114: Add np.asarray support
- feature request
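The `np.asarray` semantics requested in #3114, sketched in plain NumPy: sequences are converted, while existing arrays pass through without a copy:

```python
import numpy as np

# np.asarray converts lists but is a no-op for matching arrays.
from_list = np.asarray([1, 2, 3])
existing = np.ones(4)
same = np.asarray(existing) is existing  # no copy made
```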
- #3113: function defined with @guvectorize target='cuda' but not used (and no NVIDIA card) gives error
- #3112: Bug: numba does not preserve dtype in scalar multiplications
- Issue w/ promotion rules.
- sklam should take a look
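The dtype-preservation behavior at issue in #3112, shown in plain NumPy: multiplying a float32 array by a Python scalar keeps float32, whereas the report says Numba promotes to float64:

```python
import numpy as np

# NumPy preserves the array dtype under scalar multiplication.
a = np.ones(3, dtype=np.float32)
result_dtype = (a * 2.0).dtype
```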
- #3111: Feature request. Iterating by dimension over ND array
- #3110: Select @vectorize and @guvectorize target at call time, not decoration time
- feature request; needs deep refactoring in the ufunc machinery
- #3107: List supported architectures / hardware / OS in docs
- doc change
- #3106: Accommodate proposed changes in CUDA toolkit for Windows
- package issue
- #3105: NUMBA_DUMP_ Environment Variables not Respected
- need to review (used to work?)
- check on x86
- #3103: parfors lowering error (ParallelAccelerator)
- can we return a better error message?
- see PR #3133
- #3101: numba vectorize returning list/array
- NumPy vectorize allows returning lists
- We should not do that in Numba vectorize
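For contrast, the NumPy behavior referenced here: `np.vectorize` tolerates element functions that return lists when given an object dtype, which is exactly what the minutes say Numba's @vectorize should not emulate:

```python
import numpy as np

# np.vectorize with an object otype accepts list-valued elements.
pairify = np.vectorize(lambda x: [x, x + 1], otypes=[object])
out = pairify(np.array([1, 2]))
```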
- #3100: test_deadlock_on_exception failing on CUDA testers
- Resolved by PR
- #3099: error converting array of floats to integers
- solvable
- Stuart will run diagnostics and report what is wrong
- #3098: test_optional_unpack Heisenbug
- very hard to reproduce (only on Travis-CI)
- Need to learn how to open a live shell on Travis-CI
- #3095: Why does numba not work with this nested function?
- Error message fixed on master
- #3092: ParallelAccelerator without threads
- Want some kind of optimization level independent of parallelization
- Experiment with removing gufunc, caching fails
- #3090: Issue with decorators
- This is going to be very hard to support
- #3087: unable to use parallel for bytearray inputs
- Type system creates some confusion
- Siu will take a look
- #3086: support for scipy.special functions: feature request
- Looks like someone is trying to do this as a pure Python library
- Could also call C pointers
- #3134: [WIP] Cfunc x86 abi
- Great PR, need to isolate ABI
- #3132: Adds an ~5 minute guide to Numba.
- Stan will review
- #3128: WIP: Fix recipe for jetson tx2/ARM
- Will merge when ready
- #3127: Support for reductions on arrays.
- First pass review, waiting for changes
- #3124: Fix 3119, raise for 0d arrays in reductions
- Stuart will look at review comments
- #3122: WIP: Add inliner to object mode pipeline
- Fix closure inlining in object mode
- Needs test and review
- #3093: [WIP] Singledispatch overload support for cuda array interface.
- Needs review
- #3046: Pairwise sum implementation.
- #3017: Add facility to support with-contexts
- #2999 Support LowLevelCallable
- #2983 [WIP] invert mapping b/w binop operators and the operator module
- #2977 Added docstring coding convention and specified line terminator to developer docs
- #2976 [WIP] Expansion of developer documentation.
- #2959 np.unique return_counts support
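For reference, the NumPy behavior PR #2959 implements for Numba:

```python
import numpy as np

# np.unique with return_counts=True yields sorted unique values
# and how often each occurs.
values, counts = np.unique(np.array([1, 1, 2, 3, 3, 3]), return_counts=True)
```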
- #2950 Fix dispatcher to only consider contiguous-ness.
- #2942 Fix linkage nature (declspec(dllexport)) of some test functions
- #2894: [WIP] Implement jitclass default constructor arguments.
- Need to give some feedback to contributor on approach
- #2817: [WIP] Emit LLVM optimization remarks
- Want to refactor to not require temp file on disk
===========================
- Experimental python mode blocks
- Refactored threadpool interface
- AMD GPU backend
- Parallel diagnostics
- Usual collection of bug fixes