@tjr1, sorry for the slow response. I was distracted by something else and forgot to check these GitHub issues.
Using an N-D sparse array sounds like a good idea. Could you send a pull request so that I can evaluate whether it will introduce any conflicts with the existing code? Thanks.
@zhoupc Okay, I'll work on making a pull request. It was my mistake not to use git the first time I edited the code, so I'll need to do a manual file diff to find all the edits. I'm afraid I won't be able to get to this until late August at the earliest.
Hi,
Thank you for sharing this very useful package. I greatly appreciate all of the work you've put into batch processing and parallel processing. Here's one possible way to make it better. If you agree, I'll try to make a pull request.
The memory usage of the A matrices (A_new, A_, etc.) could be improved. These are the matrices that hold the spatial mask for each neuron. My data are particularly large, with > 10k neurons at a high resolution of ~2.5 megapixels, so the A matrices (which are neurons x pixels) can quickly take up several hundred GB if they aren't sparse, but only tens of MB if they are. I can see that you've tried to use sparse matrices where possible, but because the code needs to switch between 2D and 3D, and MATLAB doesn't allow 3D sparse matrices, full() is often called, and that is where I run into an out-of-memory error. I found an N-D sparse class on the File Exchange (link below), and it looks like a fantastic implementation of sparse arrays. I was able to make A an ndSparse throughout the code and it seems to work fine (it removes the need to call full(), and reshape works as expected). Can you think of any reason this would break other things? A minimal sketch of the change is included after the link below.
https://www.mathworks.com/matlabcentral/fileexchange/29832-n-dimensional-sparse-arrays
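For concreteness, here is a rough sketch of what I mean. It assumes the ndSparse class from the link above is on the MATLAB path; the frame size, neuron count, sparsity, and variable names are made up for illustration and don't correspond to particular call sites in the package.

```matlab
% Illustrative sizes only: ~2.4-megapixel frames and 10k neurons.
d1 = 1200;  d2 = 2000;            % frame height and width
K  = 10000;                       % number of neurons
A  = sprand(d1*d2, K, 1e-4);      % sparse footprints, one column per neuron here
                                  % (stored dense, this would be ~190 GB of doubles)

% Current pattern: densify before reshaping to 3D, which is what runs out of memory.
% A_3d = reshape(full(A), d1, d2, K);

% Proposed pattern: wrap A in ndSparse so the reshape works without densifying.
A_nd = ndSparse(A);               % still stored sparsely
A_3d = reshape(A_nd, d1, d2, K);  % d1 x d2 x K, still sparse
mask = full(A_3d(:, :, 1));       % densify a single neuron's mask only when needed
```

The change in the package would then just be swapping in this pattern wherever full() is currently called before a reshape, rather than introducing any new data structure beyond ndSparse.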
Should I make a pull request for this? Or do you see it as unnecessary, or likely to break other things? If so, that's okay too.
Thanks,
Tom