Reduce memory usage for locking==false
#40
Comments
I agree that saving space is a desirable feature; we didn't put it in PRIMME 3 because the memory savings are moderate (less than a factor of two). Nevertheless, PRs are welcome and I am willing to put effort into integrating them into the project! One easy way I see to do this is to add a flag in primme_params indicating whether the user has passed a space in evecs big enough to hold maxBasisSize vectors. If that's the case, main_iter will use the pointer provided as evecs as V instead of allocating new space. As for W: PRIMME stores the A*V vectors in W, where A is the matrix to diagonalize. Unfortunately, saving the space of W is more challenging and will require extra computations and a deeper knowledge of how eigensolvers and PRIMME work.
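The evecs-reuse idea could be sketched as follows (a minimal NumPy illustration, with a hypothetical use_evecs_as_V flag standing in for the proposed primme_params field and made-up sizes):

```python
import numpy as np

n, maxBasisSize = 1000, 25

# Hypothetical flag: the user promises that evecs has room for
# maxBasisSize vectors, not just numEvals of them.
use_evecs_as_V = True

# Fortran (column-major) order, matching how PRIMME lays out vectors.
evecs = np.zeros((n, maxBasisSize), order="F")

if use_evecs_as_V:
    # main_iter would work directly in the user's buffer: no extra allocation.
    V = evecs[:, :maxBasisSize]
else:
    V = np.zeros((n, maxBasisSize), order="F")

# The view aliases the caller's memory only when the flag is set.
assert np.shares_memory(V, evecs) == use_evecs_as_V
```

This saves the one n-by-maxBasisSize allocation for V; the harder part, as noted above, is W.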
From Algorithm 3.1 I get the impression that the size of
Notice that each iteration of the while loop starting at step 5 updates b columns of V (let's ignore step 6). Updating H (step 13) can be done with W storing only the product of A times the last b columns of V. But computing w^(m) = W * s_{0:b-1} (step 7) needs the whole W. It is possible to do step 7 as w^(m) = A * (V * s_{0:b-1}): first compute V * s_{0:b-1}, then multiply the result by A, and finally store it in w^(m). But this approach requires more multiplications by A, and that can sometimes be more expensive.
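The two ways of doing step 7 are mathematically equivalent, since W = A*V. A small NumPy check (with made-up sizes) illustrates the trade-off: W @ s reads all of W, while A @ (V @ s) avoids storing W at the cost of one extra product with A per iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, b = 50, 10, 3           # problem size, basis size, block size

A = rng.standard_normal((n, n))
A = A + A.T                   # symmetric, as in an eigenproblem
V = rng.standard_normal((n, m))
W = A @ V                     # what PRIMME keeps in memory
s = rng.standard_normal((m, b))

w_from_W = W @ s              # step 7 as implemented: needs all of W
w_from_A = A @ (V @ s)        # W-free variant: one extra multiply by A

assert np.allclose(w_from_W, w_from_A)
```

When applying A is expensive (e.g. a matrix-free operator), the extra multiply can easily cost more than the memory it saves.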
In Section 3.4.1 (Computational Requirements: Memory) of your paper "PRIMME: PReconditioned Iterative MultiMethod Eigensolver: Methods and software description" it is mentioned that the amount of allocated memory can be reduced when locking is disabled.
In the code I see that V and W are allocated unconditionally, so it seems like there is currently no way to reduce the amount of used memory. When diagonalizing very big matrices (which are then never actually stored in memory), getting rid of numEvals vectors is a very desirable feature. Are there any plans to implement it? Would you accept PRs in this direction? Thanks!