1.19.3 regression: Memory Error on un-pickle of large arrays #17825
Comments
Maybe this is connected with int32/int64 confusion on Windows?
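The int32/int64 confusion mentioned above refers to the fact that on Windows the C `long` is 32-bit even on 64-bit builds, so NumPy's default integer type has historically differed by platform. A minimal illustrative check (not from the report; behavior varies by NumPy version and platform):

```python
import numpy as np

# On Windows, C long is 32-bit even on 64-bit builds, so np.int_ has
# historically been int32 there, while it is int64 on most Linux/macOS
# builds. Sizes or indices handled as C long can therefore overflow on
# Windows for very large arrays.
print(np.dtype(np.int_).itemsize)   # 4 or 8 depending on platform/version
print(np.dtype(np.intp).itemsize)   # pointer-sized: 8 on any 64-bit build
```

`np.intp` is the type NumPy uses internally for indexing, which is why a stray `long` in a code path can matter only on Windows.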
Just noticed this was tagged for 1.19.5. There were very few changes between the two versions, and right now I am not sure where to look for a regression.
@seberg, yes, everything else is the same: computer, Python, all Python libraries. I just installed a different numpy version.
@yaav would you be able to also quickly check 1.19.4, just to be sure it wasn't some very strange thing around OpenBLAS? I will try to reproduce it today (or at least check whether I can see a memory-bloat difference, I guess).
@seberg, unfortunately 1.19.4 doesn't work for me at all:
@yaav 1.19.4 and 1.19.3 are the same except for the OpenBLAS library.
Well, the only serious changes between 1.19.2 and 1.19.3 are OpenBLAS and the buffer-info fix. And I don't see how the buffer-info fix could be incorrect (and I doubt that would go unnoticed). Yes, buffers are used in pickle, but the contiguous flag is not touched, and the stride sanitization doesn't seem to have a bug, so I honestly don't see what could have changed between these two versions...
Looks like you are running Windows version 2004. Out of curiosity, when did you upgrade? Might be worth trying 1.20.0rc1 to see if the problem is still there.
@charris, the Windows upgrade date is June 23rd, 2020 |
Just to be clear: I am officially out of ideas. The only idea I had was that the pickle5 buffer export fails for some reason, so pickle falls back to a more bloated way to export the buffer. I cannot reproduce that, even checking explicitly for whether the buffer export failed. For the moment, I assume that this is random behaviour and the memory peak is just randomly higher on 1.19.3 and above. I do not know why, nor what the maximum realistic memory usage per thread is (to estimate what the worst-case memory usage could be).
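The pickle5 buffer export being discussed can be checked explicitly from user code. A minimal sketch using the standard pickle protocol-5 API (the array size here is illustrative, not from the report): with out-of-band buffers, the array payload travels as `PickleBuffer` objects instead of being copied into the pickle byte stream, which is what avoids the memory bloat.

```python
import pickle
import numpy as np

arr = np.zeros(10, dtype=np.float64)

# Collect out-of-band buffers during pickling (protocol 5, Python 3.8+).
buffers = []
data = pickle.dumps(arr, protocol=5, buffer_callback=buffers.append)

# If the export succeeded, the payload is in `buffers`, not in `data`;
# an empty list here would indicate the fallback path seberg describes.
print(len(buffers))

restored = pickle.loads(data, buffers=buffers)
assert np.array_equal(arr, restored)
```

Inspecting `len(buffers)` (or the size of `data` relative to the array) is one way to tell which export path pickle actually took.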
Not sure if this is a reasonable candidate, but it may be relevant since @yaav is on Windows version 2004. In any case, if @yaav were able to upgrade past build 20270, we could test that hypothesis (though I realise this is a non-trivial effort, since that build is currently only in the dev channel, not beta or preview yet).
I kicked this off to the 1.20.1 release, as I expect the Windows update will be out before then.
Windows update is out. @yaav can you update to get the fix and try to reproduce?
@mattip, just checked with the Windows update installed; the problem no longer reproduces.
@yaav Thanks for the update. I'll close this now. Feel free to reopen if the problem returns. |
This looks like a numpy 1.19.3 regression, as it works well with numpy 1.19.2 and all other packages unchanged.

Environment:
Sample code to trigger the issue (be careful: >40 GB RAM is required!):
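The original reproducing snippet was lost in extraction. A minimal sketch of the kind of round-trip the title describes (pickling and un-pickling a large array); the array size and names here are assumptions, and the actual report involved an array needing over 40 GB of RAM:

```python
import pickle
import numpy as np

# Small stand-in array; the original report used one requiring >40 GB.
arr = np.arange(1_000_000, dtype=np.int64)

data = pickle.dumps(arr, protocol=pickle.HIGHEST_PROTOCOL)

# On numpy 1.19.3 under Windows 2004, this step reportedly raised
# MemoryError for sufficiently large arrays.
restored = pickle.loads(data)

assert np.array_equal(arr, restored)
```

With small arrays this succeeds on any version; the regression only manifested when the un-pickle step allocated tens of gigabytes.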
Expected output:
Actual output: