Advice needed to (maybe) use AMG solvers in a Fourier spectral solver code #379

Open
gabrielfougeron opened this issue Feb 14, 2023 · 9 comments

Comments

@gabrielfougeron

Hi,

First, let me apologize in case this is not the right place to ask this kind of question. If so, please point me to a better place.

Some context: I am developing a Fourier spectral solver in Python to find periodic solutions to ODEs (with a focus on the gravitational n-body problem). The non-linear equations are solved with scipy's JFNK solver. The Jacobian in Newton's method is given as a scipy.sparse.linalg.LinearOperator. I was able (very easily, by the way; I appreciate the effort of conforming to the scipy API) to plug all the Krylov pyamg solvers into the JFNK procedure, even though some complained that the matrix is not positive definite (which is totally legitimate, because it's not).
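For reference, a minimal sketch of how a pyamg Krylov routine can be handed to scipy's JFNK solver; the residual below is a toy stand-in (not my actual system), and the small adapter only forwards the keywords pyamg's fgmres accepts:

```python
import numpy as np
import scipy.optimize
from pyamg.krylov import fgmres

n = 64

def residual(x):
    # Toy stand-in for the spectral residual F(x) = 0 (not the real n-body system).
    return x + 0.1 * np.sin(x) - np.linspace(0.0, 1.0, n)

def pyamg_fgmres(op, rhs, **kw):
    # Adapter: newton_krylov passes scipy-style keywords (tol/rtol, maxiter, M);
    # forward only the ones pyamg's fgmres accepts and return (solution, info).
    tol = kw.get('rtol', kw.get('tol', 1e-6)) or 1e-6
    return fgmres(op, rhs, tol=tol, maxiter=kw.get('maxiter'), M=kw.get('M'))

sol = scipy.optimize.newton_krylov(residual, np.zeros(n), method=pyamg_fgmres)
print(np.linalg.norm(residual(sol)))
```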

Some situations are particularly difficult for the linear solver to handle, and I'm trying to find better preconditioning techniques.
I've thought (maybe a bit naively) that multigrid methods might be a promising alternative, as my function spaces are naturally nested. Indeed, the unknowns are (scaled versions, for conditioning, of) the Fourier coefficients, so refinement and coarsening simply amount to a different truncation of the Fourier expansion.

This being said, I'm not sure how to translate this into an AMG preconditioner using pyAMG. Can I get some pointers on how to proceed?

Many thanks in advance,

Gabriel

@bensworth
Collaborator

So AMG is really only suitable for sparse matrices. Is your matrix sparse? Typically, I think spectral-type methods lead to relatively dense matrices.

@gabrielfougeron
Author

No, it's not sparse, even though it's wrapped in a scipy.sparse.linalg.LinearOperator for an efficient O(n log n) matrix-vector multiply.
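For context, it's essentially along these lines; a simplified sketch with a made-up circulant kernel, not my actual operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

n = 256
# Hypothetical kernel, diagonal in Fourier space (not the actual operator).
symbol = np.fft.fft(np.exp(-np.arange(n) / 10.0))

def matvec(v):
    # FFT, scale, inverse FFT: O(n log n) per product instead of O(n^2).
    return np.fft.ifft(symbol * np.fft.fft(v)).real

A = LinearOperator((n, n), matvec=matvec, dtype=np.float64)
print(np.linalg.norm(A @ np.ones(n)))
```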

Can I get more insight into why "AMG is really only suitable for sparse matrices"?

@bensworth
Collaborator

There might be cases where dense would make sense, but in the AMG setting you construct an algebraic coarse-grid operator RAP. If R and P end up relatively dense too, then just forming RAP will cost O(n^3), which is the same as computing the inverse directly. The other question, when you get into matrices that do not arise from FEM/FD-type discretizations, is whether the AMG assumptions about which modes relaxation and coarse-grid correction can attenuate still make sense. E.g., for a kernel operator, standard pointwise relaxation will typically attenuate the geometrically smoothest modes, but then neither standard relaxation nor coarse-grid correction will attenuate the high-frequency error. This is just one example. Feel free to email me more specific details on your problem and I might be able to help more.
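For concreteness, here is the Galerkin product in scipy; with sparse factors it's cheap, with dense ones it isn't. The operators below are random placeholders, not anything problem-specific:

```python
import scipy.sparse as sp

n = 1000
# Random placeholders: A is the fine-level operator, P/R the transfer operators.
A = sp.random(n, n, density=0.01, format='csr') + sp.eye(n, format='csr')
P = sp.random(n, n // 2, density=0.02, format='csr')
R = P.T.tocsr()

A_c = R @ A @ P   # Galerkin coarse-grid operator; cheap only if R, A, P are sparse
print(A_c.shape, A_c.nnz)
```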

@bensworth
Collaborator

Also, the nested-space comment you mention is indeed relevant for multigrid, but less so for algebraic multigrid, which is what we support here. In AMG you construct your own coarse space, but if you have a natural nested set of spaces you might be able to do a geometric multigrid method, where you "rediscretize" on the coarse grid using the natural coarse space.

@gabrielfougeron
Author

> There might be cases where dense would make sense, but in the AMG setting you construct an algebraic coarse-grid operator RAP. If R and P end up relatively dense too, then just forming RAP will cost O(n^3), which is the same as computing the inverse directly.

> Also, the nested-space comment you mention is indeed relevant for multigrid, but less so for algebraic multigrid,

I think you are correct: my problem is probably more amenable to vanilla (geometric) multigrid. Indeed, I can define R and P myself, and they would be sparse matrices.
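Roughly, I have something like this in mind; a simplified sketch, since in practice the coefficients are complex and indexed over positive and negative frequencies, but the transfer operators stay sparse either way:

```python
import numpy as np
import scipy.sparse as sp

n_f, n_c = 64, 32
# Coarse space = first n_c Fourier coefficients of the fine space:
# prolongation is zero-padding, restriction is truncation, both are sparse.
P = sp.eye(n_f, n_c, format='csr')
R = sp.eye(n_c, n_f, format='csr')

x_c = np.random.rand(n_c)
print(np.allclose(R @ (P @ x_c), x_c))  # R P = identity on the coarse space
```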

Would it be possible to feed those to pyAMG and get back some kind of preconditioner?

@bensworth
Collaborator

Yeah, I think so; it would take a little bit of custom tinkering though. You would need to take a look at one of the files that constructs an AMG hierarchy, e.g., classical/classical/ruge_stuben_solve.py, and write a similar script that constructs a MultilevelSolver object with the relevant level operators, transfer operators, and choice of smoothing. From there you should be able to use PyAMG relaxation routines (depending on your operator structure) and the PyAMG solve/recursion support.

I suggest the first thing to figure out is whether relaxation smooths your error. Take a linear system, give it a zero right-hand side and a random initial guess. If you do something like 10 Jacobi or Gauss-Seidel iterations, does the solution become geometrically smooth? If it does, you're on the right track. If it does not, that's where you need to start, before worrying about constructing a full hierarchy.
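Something along these lines, as a rough sketch: the fine-level matrix here is just a 1D Poisson stand-in, the transfer operators are placeholders for your Fourier truncation/padding, and PyAMG's relaxation kernels and MultilevelSolver want explicit sparse matrices rather than a LinearOperator:

```python
import numpy as np
import scipy.sparse as sp
from pyamg.gallery import poisson
from pyamg.multilevel import MultilevelSolver
from pyamg.relaxation.relaxation import gauss_seidel
from pyamg.relaxation.smoothing import change_smoothers

# --- Step 1: does relaxation smooth the error? ---
A = poisson((64,), format='csr')      # stand-in for your assembled fine-level matrix
x = np.random.rand(A.shape[0])        # random initial guess
b = np.zeros(A.shape[0])              # zero right-hand side, so x is pure error
gauss_seidel(A, x, b, iterations=10)  # relaxes x in place; plot x and check smoothness

# --- Step 2: hand-built two-level hierarchy ---
n_f = A.shape[0]
n_c = n_f // 2
P = sp.eye(n_f, n_c, format='csr')    # placeholder prolongation (e.g. zero-padding)
R = sp.eye(n_c, n_f, format='csr')    # placeholder restriction (e.g. truncation)

fine = MultilevelSolver.Level()
fine.A, fine.P, fine.R = A, P, R
coarse = MultilevelSolver.Level()
coarse.A = (R @ A @ P).tocsr()        # or rediscretize directly on the coarse space

ml = MultilevelSolver([fine, coarse], coarse_solver='splu')
change_smoothers(ml, presmoother='gauss_seidel', postsmoother='gauss_seidel')

M = ml.aspreconditioner(cycle='V')    # a LinearOperator usable inside any Krylov solver
```

The M at the end is a standard scipy LinearOperator, so it should plug straight into the inner Krylov solve (e.g. as the M argument of whatever solver you use in the JFNK loop).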

@gabrielfougeron
Author

Thank you so very much for the pointers! I'll try that out and come back to you (probably in a few weeks).

@gabrielfougeron
Author

@bensworth: I sent a follow-up email on this topic to your LANL inbox.

@gabrielfougeron
Author

I'd be infinitely grateful if you could give me a hand on this topic.
Pretty please?
