
Poisson: hanging node constraints not enforced #650

Open
richardschu opened this issue Mar 8, 2024 · 8 comments · May be fixed by #651
Comments

@richardschu
Contributor

Solving a Poisson problem with hanging nodes, I saw that the hanging node constraints are not enforced; an example is below. It is not simply that no constraints are enforced on the hanging nodes (in that case I would expect the solution to be closer to the expected value); we are actually actively enforcing zero constraints. The output shows clear zeroes being enforced, whereas without any constraint the hanging-node values might just end up reasonably close to the actual solution, and the system would still be solvable, since this would act like an interior Neumann boundary on the refined interior edge.
The polynomial degree is 2 in both cases. The solver converges "fine": DG: 98 iterations, CG: 44; ph geometric multigrid with AMG coarse-grid solver in both cases.
Any better ideas? Any hints where to start looking? I did not investigate yet and will not have time before GAMM, but could do that afterwards.
DG solution: [image: dg_solution]

CG solution: [image: cg_solution]

@richardschu
Contributor Author

Update on the DG iteration count: cph takes 33 iterations, reducing the residual by 1e8 from a zero initial guess. That seems reasonable; I have only one global refinement level, so the GMG hierarchy is relatively "flat" (Chebyshev smoother, 5 smoothing sweeps).

@kronbichler
Contributor

I must admit I do not yet understand the complete picture. We don't call AffineConstraints::distribute after solving the linear system because most of our data access goes through FEEvaluation, which applies the constraints correctly in its internal code. Hence, I wonder if this isn't "only" a visualization issue? Can you try to call AffineConstraints::distribute() in the visualization function? Technically, it would be safer to call distribute right in the solver, because every use that just reads the vector values via dealii::DoFCellAccessor would run into trouble, but I am a bit concerned about the cost if we do it unconditionally, as the distribute function internally builds some index sets that are expensive, especially for large-scale computations. Maybe we should look into the affine constraints function at some point to fix it there.

@richardschu
Contributor Author

richardschu commented Mar 9, 2024

Thanks for taking a look.
What you suggest in your message is exactly what I did manually in the postprocessor in my code, just constructing and distributing the hanging node constraints again (I know this is too costly, but it was a fast fix in my code 😆 if we fix it in ExaDG, I am of course happy to remove that part).
I think the problem might indeed be visualization only, since in the Laplace operator, after solving, we call laplace_operator.set_inhomogeneous_boundary_values(sol) (see here).
We could put an affine_constraints_periodicity_and_hanging_nodes.distribute(sol); just after.
But then I am wondering why this has not surfaced earlier, since periodic constraints would presumably appear the same way, also not being enforced (in the output) when using CG. Does no one run periodic boxes with CG and look at the output?

@richardschu
Contributor Author

Just tested with the same example: that one line would fix it. I will make a PR, but it is still unclear to me why this has not surfaced earlier ... anyone using periodic constraints or hanging nodes with CG would have encountered it. Am I overlooking something?

@kronbichler
Contributor

We have not used continuous FEM very often yet, so in that sense this fix is very much appreciated.

@nfehn
Member

nfehn commented Mar 12, 2024

> I think the problem might indeed be visualization only

The status of the inspection here still seems to be "visualization only". Could you explain what you found out so far, so that I can better understand the problem?

@richardschu
Contributor Author

> Could you explain what you found out so far, so that I can better understand the problem?

In this application, the problem is visualization only, if we ignore that the final solution vector has zeroes at the hanging nodes. If we want to compute an error in the postprocessor from a solution with hanging nodes, though, we run into trouble.

The problem is that the system is solved correctly, but the postprocessor gets a vector sol that does not have the hanging node constraints enforced. See the Poisson driver:

// solve linear system of equations
timer.restart();
iterations = pde_operator->solve(sol, rhs, 0.0 /* time */);
solve_time += timer.wall_time();
// postprocessing of results
timer.restart();
postprocessor->do_postprocessing(sol);

I would rather not construct the hanging node constraints via the DoFHandler again, but enforce them at the end of solve(), where we already enforce inhomogeneous BCs for CG discretizations:

https://github.com/exadg/exadg/blob/0075db2f58ea7837436db76207a0a0cfd413b9a0/include/exadg/poisson/spatial_discretization/operator.cpp#L497C1-L502C4

where we have the AffineConstraints object at hand. Here, I did not see any change in iteration counts, so I think this really is visualization only. The point where these constraints are enforced is somewhat arbitrary, but I would not put it in the postprocessor, simply because it does not have the constraints and probably should not have them.

@nfehn
Member

nfehn commented Mar 12, 2024

In my opinion, adding these lines where the solver is called is also inconsistent from a software-design perspective: once such lines appear around the solver call, it becomes arbitrary who has to care about these constraints, i.e. who is responsible for imposing them. We do not want an implementation where every "module"/"class" imposes the constraints for "safety reasons"; I argue that we need a clear single-responsibility design. Right now, I would prefer not having these lines of code at the "solver call" level. On the solver level, our implementation in ExaDG should, in my opinion, be in line with / driven by dealii::MatrixFree. This might imply that we need special routines, or that we deviate from the dealii::MatrixFree procedure elsewhere. If I remember correctly, we had similar issues with zeroing ghost values, which were dealt with differently for operator evaluation and for postprocessing.
