Implementation of LU and Cholesky solvers for linear systems #167

Open
wants to merge 2 commits into base: master
Conversation

jurajHasik

An initial implementation of LU and Cholesky solvers for linear systems. The linsystem function
takes a Tensor A, interprets it as a matrix of coefficients, and solves the corresponding linear system with the right-hand side passed as tensor B.
The solver is chosen through the Args system, method="LU" or method="Cholesky", with LU as the default.
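
For reference, here is a minimal, self-contained sketch of the two dense solves written with Eigen rather than ITensor, so it is not the PR's code, just an illustration of what method="LU" versus method="Cholesky" corresponds to for an ordinary matrix:

```cpp
// Toy illustration of the two dense solvers the PR exposes, using Eigen so
// the snippet is self-contained.  "LU" handles a general square coefficient
// matrix; "Cholesky" requires a symmetric (Hermitian) positive-definite one.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    const int n = 4;
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
    Eigen::VectorXd b = Eigen::VectorXd::Random(n);

    // General matrix: LU with partial pivoting (the method="LU" case).
    Eigen::VectorXd x_lu = A.partialPivLu().solve(b);

    // Symmetric positive-definite matrix: Cholesky (the method="Cholesky" case).
    Eigen::MatrixXd S = A * A.transpose() + n * Eigen::MatrixXd::Identity(n, n);
    Eigen::VectorXd x_chol = S.llt().solve(b);

    std::cout << "LU residual:       " << (A * x_lu - b).norm() << "\n"
              << "Cholesky residual: " << (S * x_chol - b).norm() << "\n";
}
```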

@mtfishman
Member

Hi, thanks for the pull request. You may have noticed that I recently added an implementation of gmres, which is an iterative solver for linear systems. In general, in tensor network calculations I think it is better to use something like gmres/bicgstab, since you can take advantage of the sparse nature of the tensor network: to solve A*x = b for x, one does not need to form the matrix A, but only the result of acting A on a vector. In tensor network calculations this is usually much more efficient, since the linear map A is typically the contraction of several tensors.

Having an alternative based on an LU/Cholesky decomposition may be useful as well. Did you have a particular use case where something like gmres() would not be sufficient?
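
To make the matrix-free point concrete, here is a hedged sketch (a hand-rolled conjugate gradient instead of GMRES, for brevity, so it assumes a symmetric positive-definite map, and Eigen for the vectors): the solver only ever receives a function that applies A to a vector. In a tensor network code that function would be a sequence of tensor contractions rather than an explicit matrix.

```cpp
// Minimal matrix-free conjugate gradient: the solver only needs applyA(v),
// never an explicit matrix.  (CG assumes A is symmetric positive definite;
// GMRES/BiCGSTAB lift that restriction.)
#include <Eigen/Dense>
#include <cmath>
#include <functional>
#include <iostream>

Eigen::VectorXd cg(const std::function<Eigen::VectorXd(const Eigen::VectorXd&)>& applyA,
                   const Eigen::VectorXd& b, double tol = 1e-10, int maxIter = 1000)
{
    Eigen::VectorXd x = Eigen::VectorXd::Zero(b.size());
    Eigen::VectorXd r = b;   // residual for the initial guess x = 0
    Eigen::VectorXd p = r;
    double rr = r.squaredNorm();
    for (int k = 0; k < maxIter && std::sqrt(rr) > tol; ++k)
    {
        Eigen::VectorXd Ap = applyA(p);
        double alpha = rr / p.dot(Ap);
        x += alpha * p;
        r -= alpha * Ap;
        double rrNew = r.squaredNorm();
        p = r + (rrNew / rr) * p;
        rr = rrNew;
    }
    return x;
}

int main()
{
    const int n = 50;
    Eigen::MatrixXd M = Eigen::MatrixXd::Random(n, n);
    Eigen::MatrixXd A = M.transpose() * M + Eigen::MatrixXd::Identity(n, n); // SPD
    Eigen::VectorXd b = Eigen::VectorXd::Random(n);

    // Only the action of A is handed to the solver; A itself never needs to
    // be formed.  In a tensor network code this lambda would contract tensors.
    auto applyA = [&](const Eigen::VectorXd& v) { return (A * v).eval(); };

    Eigen::VectorXd x = cg(applyA, b);
    std::cout << "residual: " << (A * x - b).norm() << "\n";
}
```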

Also, do you have an implementation for the case with IQTensors?

@jurajHasik
Author

The use case which led me to include Cholesky & LU comes from the full update algorithm for iPEPS
networks. Applying either a 2-site or a 3-site gate leads to a minimization problem with respect to the tensors
on which the gate acts, (A,B) or (A,B,C). One approach to solving such a problem is in the spirit of alternating least squares (ALS): fix all tensors but one, say A, and minimize the resulting quadratic problem

minimize with respect to A:  A^\dag * M * A - A^\dag * K,

where M and K are a matrix and a vector obtained from the remaining part of the network. Setting the gradient with respect to A^\dag to zero reduces this to the linear system M * A = K (see the toy sketch below). Then move on to the next tensor and repeat until the desired accuracy is reached. The quadratic problems obtained through the course of ALS are not necessarily sparse (no symmetries are imposed on the tensors).
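
A toy version of that single-tensor step, with a randomly generated Hermitian positive-definite M standing in for the actual iPEPS environment (so only a sketch of the dense subproblem, not the real algorithm), might look like this:

```cpp
// Toy ALS step: the minimizer of A^dag*M*A - A^dag*K - K^dag*A satisfies
// M*A = K.  M is Hermitian positive definite here by construction; an LDLT
// factorization would be the safer choice if M is only positive semi-definite.
#include <Eigen/Dense>
#include <iostream>

int main()
{
    using Mat = Eigen::MatrixXcd;
    using Vec = Eigen::VectorXcd;

    const int n = 8;
    Mat R = Mat::Random(n, n);
    Mat M = R.adjoint() * R + Mat::Identity(n, n);  // Hermitian positive definite
    Vec K = Vec::Random(n);

    // Cholesky solve of the normal equation M * A = K.
    Vec A = M.llt().solve(K);

    std::cout << "||M*A - K|| = " << (M * A - K).norm() << "\n";
}
```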

Supplying just a matrix-vector product:
The number of operations required to contract the whole network that yields M * A is larger than the number of operations for a plain matrix-vector product. Hence it is more efficient to compute M once and store it for the iterative solvers. This may change depending on the (block) sparsity of the network used to generate M * A.

LU/Cholesky vs GMRES:
Iterative methods usually perform considerably better for sparse, preconditioned systems. But since
my system was neither sparse nor preconditioned, LU/Cholesky was comparable in performance to a BiCG solver
while also being stable. Stability should be less of an issue for GMRES.

Unfortunately, I do not have an implementation for IQTensors at hand.
