
feature request: LOBPCG #58

Open
joachimbrand opened this issue May 24, 2022 · 1 comment

Comments

@joachimbrand

Hi,
I'm a big fan of the package. Being able to solve eigenvalue problems without vectors and matrices is awesome. Thank you very much!

I was wondering about the possibility of supporting the LOBPCG (locally optimal block preconditioned conjugate gradient) method in KrylovKit. See https://en.wikipedia.org/wiki/LOBPCG

My use case is that I would like to solve really large matrix-free problems (exact diagonalisation of some quantum Hamiltonian; matrix elements are computed on-the-fly) with (MPI-)distributed "vectors" that are indexed in a fancy way. KrylovKit works very well, but memory usage becomes a bottleneck. I read about LOBPCG and it seems like it might have an advantage over Lanczos/Arnoldi in requiring less memory:

  • only a few vectors (four or so, for the single-eigenvector version) need to be kept around at a time;
  • only one matrix-vector multiplication is required per iteration;
  • preconditioning is easily supported (and very helpful), see Preconditionning #7;
  • it supports generalised eigenvalue problems.
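For reference, the features listed above map directly onto SciPy's `lobpcg` in the Python ecosystem, which accepts a matrix-free operator, a block of starting vectors, and an optional preconditioner. A minimal matrix-free sketch (the toy diagonal operator and its sizes are purely illustrative):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lobpcg

n = 100
diag = np.arange(1, n + 1, dtype=float)  # spectrum of a toy diagonal operator

# Matrix-free operator: "matrix elements" are computed on the fly,
# no matrix is ever stored.
def matvec(x):
    return diag * np.ravel(x)

A = LinearOperator((n, n), matvec=matvec, dtype=float)

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 3))  # block of 3 starting vectors

# largest=False asks for the smallest eigenvalues of the operator.
vals, vecs = lobpcg(A, X, largest=False, tol=1e-8, maxiter=200)
print(np.sort(vals))  # ≈ [1., 2., 3.]
```

Only the current block and a few auxiliary blocks live in memory at any time, which is the memory advantage described above.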

It also seems to be good with clustered/degenerate eigenvalues when the block size is large enough, which might help with #57.

A Julia implementation of LOBPCG is available in IterativeSolvers.jl. It requires actual Vectors, though, hence my suggestion to add the method to KrylovKit.jl.

Fwiw, testing on a 90_000 × 90_000 (ordinary) sparse matrix, KrylovKit's eigsolve was slightly faster than lobpcg without preconditioning, but allocated roughly 20× more memory (214.506 MiB vs. 10.034 MiB). With a simple diagonal preconditioner, lobpcg was about twice as fast.
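A diagonal (Jacobi) preconditioner of the kind used in the benchmark above can be sketched with SciPy's `lobpcg`; the sparse test matrix here is an invented toy, not the 90_000 × 90_000 problem from the timing:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lobpcg

# Toy SPD sparse matrix: 1-D Laplacian plus a spread-out diagonal.
n = 500
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
A = lap + sp.diags(np.linspace(1.0, 100.0, n))

# Jacobi (diagonal) preconditioner: approximate A^{-1} by 1/diag(A).
inv_diag = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: inv_diag * np.ravel(x))

rng = np.random.default_rng(1)
X = rng.standard_normal((n, 4))  # block of 4 starting vectors

# Same solve with and without the preconditioner; M typically cuts
# the iteration count substantially on matrices like this.
vals_plain, _ = lobpcg(A, X, largest=False, tol=1e-6, maxiter=500)
vals_prec, _ = lobpcg(A, X, M=M, largest=False, tol=1e-6, maxiter=500)
```

The preconditioner is itself matrix-free here (a `LinearOperator`), so the same pattern applies to the fancy distributed "vectors" described above.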

@Jutho
Owner

Jutho commented Oct 28, 2022

Hi @joachimbrand , my apologies for the late response.

LOPCG (without the B for block) is available as a special case of geneigsolve, using Krylov dimension 2. However, without the block part, I assume this is not what you want or need. Also, it does not have native preconditioner support, though I expect this can easily be mimicked by transforming your problem with the diagonal matrix and undoing that transformation on the resulting eigenvectors.
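The transformation trick can be made concrete: with a diagonal D (e.g. D = diag(A)), the congruence D^{-1/2} A D^{-1/2} turns the standard problem A x = λ x into the equivalent generalised problem (D^{-1/2} A D^{-1/2}) y = λ D^{-1} y with the same eigenvalues, and x is recovered as D^{-1/2} y. A dense NumPy/SciPy sketch of this identity (illustrative only; it does not use geneigsolve):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
n = 100
S = rng.standard_normal((n, n))
A = S @ S.T + np.diag(np.linspace(1.0, 50.0, n))  # SPD, spread-out diagonal

# Congruence with D^{-1/2}, D = diag(A):
#   A x = λ x   <=>   (D^{-1/2} A D^{-1/2}) y = λ D^{-1} y,   x = D^{-1/2} y.
# The spectrum is unchanged; only the conditioning of the pencil changes.
d = np.sqrt(A.diagonal())
A_t = A / d[:, None] / d[None, :]   # D^{-1/2} A D^{-1/2}
B_t = np.diag(1.0 / d**2)           # D^{-1}

w_ref = eigh(A, eigvals_only=True)
w_t = eigh(A_t, B_t, eigvals_only=True)
print(np.allclose(w_ref, w_t))  # True: same spectrum after the transform
```

A generalised solver applied to the pair (A_t, B_t) therefore behaves like a diagonally preconditioned solve of the original problem.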

In principle, the Lanczos algorithm can be run in a mode where the Krylov vectors don't need to be stored. I have this implemented in the LanczosIterator interface, but I don't have a matching implementation of eigsolve that can deal with this. What needs to be done in that case is to either store the older Krylov vectors on disk, or to recompute them when needed.
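The low-storage mode is possible because the Lanczos three-term recurrence only ever touches three vectors at a time; the Ritz values then come from the small tridiagonal matrix, while eigenvectors would indeed require a second pass over recomputed (or stored) basis vectors. A self-contained Python sketch, without reorthogonalization (fine for illustration, fragile in production):

```python
import numpy as np

def lanczos_eigvals(matvec, v0, m):
    """Plain Lanczos keeping only three vectors in memory.

    Builds the m x m tridiagonal projection T and returns its
    eigenvalues (Ritz values). No Krylov basis is stored, so
    eigenvectors are not available without a second pass.
    """
    alphas, betas = [], []
    v_prev = np.zeros_like(v0)
    v = v0 / np.linalg.norm(v0)
    beta = 0.0
    for _ in range(m):
        w = matvec(v) - beta * v_prev      # three-term recurrence
        alpha = np.dot(v, w)
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        v_prev, v = v, w / beta
    T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
    return np.linalg.eigvalsh(T)

# Toy check: diagonal operator with a well-separated smallest eigenvalue.
diag = np.concatenate(([0.1], np.linspace(1.0, 2.0, 499)))
ritz = lanczos_eigvals(lambda x: diag * x, np.ones(500), m=40)
print(ritz[0])  # close to 0.1
```

Without reorthogonalization, rounding errors eventually produce spurious "ghost" copies of converged Ritz values, which is one reason a production implementation in KrylovKit would need more care.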

Allocations don't tell the whole story; the main issue is that every new matrix-vector multiplication in KrylovKit.jl allocates a new vector rather than overwriting an existing one. However, old vectors that are no longer needed should be GC'ed. Ideally, there should also be an interface that allows storing the result of the matrix-vector multiplication in an existing vector; that has been on the to-do list for a long while.
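For comparison, NumPy already exposes such an in-place interface through the `out=` argument of `np.matmul`; a minimal illustration of the allocating vs. in-place pattern being discussed:

```python
import numpy as np

n = 512
A = np.random.default_rng(3).standard_normal((n, n))
x = np.ones(n)
y = np.empty(n)  # preallocated output buffer, reused across iterations

# Allocating: each call creates a fresh result vector.
y1 = A @ x

# In-place: the result is written into the existing buffer instead.
np.matmul(A, x, out=y)

print(np.allclose(y1, y))  # True: same result, no new allocation
```

In an iterative solver the buffer `y` would be reused on every iteration, which is exactly the allocation pattern the proposed KrylovKit interface would enable.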
