ndarray dump function (and straight cPickle) fails for large arrays (Trac #1803) #2396
Comments
I am also facing this problem.
@zhlsk I didn't try further or investigate whether there is a fix in Python, but I think this is a Python problem rather than a numpy one, since:
@seberg Thanks for your reply. You're right. It's a Python problem: http://bugs.python.org/issue11564
In principle we could probably work around the bug anyway by passing large …
Any progress on this? It seems that …
It's a bug in Python, the fix is to upgrade to a newer version (it's fixed in Python 3.3), and the workaround is to pickle your array yourself in smaller parts, or use some other file format than Python pickles. np.savetxt is not affected by this, and np.save is only affected for object arrays.
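For reference, the "pickle your array yourself in smaller parts" workaround might look like the sketch below, assuming the array can be split along its first axis. The helper names `save_chunked` and `load_chunked` are hypothetical, not part of numpy:

```python
import pickle
import numpy as np

def save_chunked(arr, path, chunk_rows=10000):
    # Pickle the array in slices along axis 0 so each individual
    # pickled payload stays well under the 2**31-byte limit.
    with open(path, "wb") as f:
        pickle.dump((arr.shape, arr.dtype.str), f)
        for start in range(0, arr.shape[0], chunk_rows):
            pickle.dump(arr[start:start + chunk_rows], f)

def load_chunked(path):
    # Read back the header, then fill a preallocated array chunk by chunk.
    with open(path, "rb") as f:
        shape, dtype = pickle.load(f)
        out = np.empty(shape, dtype=dtype)
        start = 0
        while start < shape[0]:
            chunk = pickle.load(f)
            out[start:start + len(chunk)] = chunk
            start += len(chunk)
        return out
```

Tune `chunk_rows` so that `chunk_rows * row_size_in_bytes` stays comfortably below 2 GiB.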
Closing. |
I'm getting this when trying to cPickle an 11,314 x 8,463,980,778 sparse matrix with 352,451,719 stored elements in scipy.sparse.csr_matrix format. Python version: Python 2.7.10 |Anaconda 2.3.0 (x86_64)| (default, May 28 2015, 17:04:42)
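Since np.save is unaffected by this bug for numeric arrays, one way to sidestep cPickle for a large CSR matrix is to store its three component arrays separately. A minimal sketch, assuming scipy is available; `save_csr` and `load_csr` are hypothetical helper names:

```python
import numpy as np
import scipy.sparse as sp

def save_csr(m, prefix):
    # np.save writes the raw numeric buffers directly, avoiding the
    # 2 GB pickle limitation that breaks cPickle on large arrays.
    np.save(prefix + "_data.npy", m.data)
    np.save(prefix + "_indices.npy", m.indices)
    np.save(prefix + "_indptr.npy", m.indptr)
    np.save(prefix + "_shape.npy", np.array(m.shape))

def load_csr(prefix):
    # Rebuild the matrix from the (data, indices, indptr) CSR triplet.
    data = np.load(prefix + "_data.npy")
    indices = np.load(prefix + "_indices.npy")
    indptr = np.load(prefix + "_indptr.npy")
    shape = tuple(np.load(prefix + "_shape.npy"))
    return sp.csr_matrix((data, indices, indptr), shape=shape)
```

Newer scipy versions ship `scipy.sparse.save_npz`/`load_npz`, which do essentially this in one call.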
Original ticket http://projects.scipy.org/numpy/ticket/1803 on 2011-04-19 by trac user meawoppl, assigned to unknown.
import numpy as np
import cPickle

a = np.zeros((300000, 1000))  # float64: ~2.4 GB, above the 2**31-byte limit
f = open("test.pkl", "wb")    # pickles must be written in binary mode
cPickle.dump(a, f)
SystemError Traceback (most recent call last)
/home/kddcup/code/matt/svd-projection/take5/ in ()
SystemError: error return without exception set
Or using the .dump function:
a.dump("test.pkl")
SystemError Traceback (most recent call last)
/home/kddcup/code/matt/svd-projection/take5/ in ()
SystemError: NULL result without error in PyObject_Call
I am not sure whether this is a numpy or pickle/cPickle glitch. In either case, a more instructive error message would certainly help. I think the problem only occurs for arrays larger than 2**31 bytes (2 GiB), but I would have to experiment more to be sure.
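The suspected threshold can be checked from the array's byte count without allocating anything, since the element count and itemsize determine it. A minimal sketch for the shape in the report above:

```python
import numpy as np

shape = (300000, 1000)
itemsize = np.dtype(np.float64).itemsize   # 8 bytes per element
nbytes = shape[0] * shape[1] * itemsize    # total payload the pickler must write

# 2**31 - 1 is the largest value a signed 32-bit length field can hold,
# which is where the affected Python 2 pickle code path overflows.
print(nbytes)                  # 2400000000
print(nbytes > 2**31 - 1)      # True
```

For an existing array `a`, the same number is available directly as `a.nbytes`.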