detect_equilibration_binary_search is not memory safe #168
Comments
I think that sounds like a reasonable workaround.
It is necessary to clear the cache after every FFT call, so I have added the necessary lines to the statisticalInefficiency_fft function in PR #169.
P.S. Tests pass locally.
Thanks! |
Thanks for merging! (replied by email, Fri, Jan 30, 2015, to Kyle Beauchamp)
Today my cluster administrator shut down 2 nodes because my jobs using detect_equilibration_binary_search had filled 100% of the memory and were swapping dangerously to disk. I spent the afternoon looking into it and narrowed the problem down to the implementation of np.fft, which is called by statsmodels through sm.tsa.stattools.acf.
It turns out that "fft stores a cache of working memory for different sizes of fft's, so you can run into memory problems if you call this too many times with too many different n's", which is exactly what we do during the binary search.
I have added my 2 cents to an issue that I found in the numpy GitHub repo. My solution at the moment is to add a call that clears the global variable _fft_cache after each for loop, or after each iteration. Would you be interested in a PR that addresses this issue?