Let hp::Refinement::choose_p_over_h()
communicate future FE indices and refinement flags on all types of parallel Triangulation.
#16759
base: master
Conversation
We should also exchange all h-refinement indicators with the function introduced in #16330 for …
A test is still missing. I will provide one once I find some time to construct a minimal example. I propose we wait with merging until we have a test.
This patch now makes the whole hp-refinement apparatus more robust by providing refinement flags and future FE indices on ghost cells. The hp-decision algorithm then works regardless of how sibling cells are partitioned.
My understanding is that p4est ensures that sibling cells are owned by the same process if they are active. Clearly, that cannot hold if the cells are not active: for example, if you start with a single square and refine multiple times, the four children of the original square cannot all be owned by the same process. At the same time, I have to admit that I'm not confident in my knowledge of p4est.
include/deal.II/distributed/tria.h
```cpp
/**
 * Communicate refinement flags on ghost cells from the owner of the cell.
 *
 * This is necessary to get consistent refinement, as mesh smoothing would
 * otherwise undo some of the requested coarsening/refinement.
 */
template <int dim, int spacedim>
void
exchange_refinement_flags(
  dealii::parallel::distributed::Triangulation<dim, spacedim> &tria)
{
  auto pack =
```
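The snippet above is truncated at the `pack` callback. As a self-contained illustration of the pack/unpack exchange pattern it follows, here is a plain-C++ mock (not deal.II's API; `Cell`, `pack_owned_flags`, and `unpack_ghost_flags` are hypothetical names): owners publish the flags of the cells they own, and every rank's ghost copies then adopt the owner's value.

```cpp
#include <cassert>
#include <map>
#include <vector>

// Hypothetical mock cell: an id, the rank that owns it, and a
// refinement flag that may be stale on ghost copies.
struct Cell
{
  int  id;
  int  owner;
  bool refine_flag;
};

// "pack": every rank publishes the flags of the cells it owns.
void
pack_owned_flags(const std::vector<Cell> &local_view,
                 const int                my_rank,
                 std::map<int, bool>     &published)
{
  for (const Cell &c : local_view)
    if (c.owner == my_rank)
      published[c.id] = c.refine_flag;
}

// "unpack": ghost copies (cells owned elsewhere) adopt the owner's flag.
void
unpack_ghost_flags(std::vector<Cell>         &local_view,
                   const int                  my_rank,
                   const std::map<int, bool> &published)
{
  for (Cell &c : local_view)
    if (c.owner != my_rank)
      c.refine_flag = published.at(c.id);
}
```

In the real patch the "published" data travels via MPI between ranks; the mock replaces that with a shared map purely to show the direction of the data flow.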
This is an expensive function. It doesn't have to live in the .h file. Just forward-declare it here and implement it in the .cc file.
I moved it to the .cc file. Let me know if you want it placed somewhere else.
In addition to the assertion that triggered in …, I will investigate this further by creating additional tests.
The assertion isn't going to work if you're on an active locally owned cell on level 0. Even then, I don't think this is right. I believe that p4est only keeps cells together if all children are active. As a counter-example, consider a unit square refined once, with one of the four children refined once more. Your assertion would only succeed if the three active level-1 cells are owned by the same process, and likewise the four level-2 cells. I believe that the latter is true, but the former may not be.
What's the status here?
…on all types of parallel Triangulation.
… subdomains in any parallel Triangulation.
Use it in choose_p_over_h().
force-pushed from c15ede0 to 6e3e42c
I would still like to come up with a minimal test that triggered the above assertion before this patch. With your comments, I have a good starting point for creating such a scenario.
force-pushed from 42fdeca to e8c04c8
OK, let me know once you have the test!
In #11980, I added the following assertion to the code. It was meant for parallel shared Triangulations, as an error occurred with that type of Triangulation. I didn't notice it on other parallel Triangulation types at the time, assuming that sibling cells always belong to the same subdomain for parallel distributed Triangulations.
dealii/source/hp/refinement.cc
Lines 752 to 759 in b80d0be
In my recent hp-related experiments, this assertion triggers when using parallel distributed Triangulation objects. So was my assumption wrong, or did that behavior change recently?
EDIT: A bit of context: I used checkpointing to run the particular scenario that triggered the assertion. I have noticed for some time that the mesh is partitioned slightly differently after recovering from a checkpoint. After loading a mesh with the same number of processes, it is possible that siblings no longer belong to the same subdomain, even though they did when the checkpoint was created. But I need to do more research on this hypothesis.
Anyway, this PR changes the behavior to communicate future FE indices on ghost cells for any type of parallel Triangulation.
I do not have a test yet, as I do not have a minimal example ready. I can try to come up with something if you think we need one.
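To make the p-over-h decision concrete, here is a hedged sketch of the idea for one family of sibling cells (a plain-C++ mock, not deal.II's implementation; `Sibling` and `choose_p_over_h` are illustrative names): h-coarsening requires all siblings to agree, and if any sibling has a future FE index assigned, p-adaptation wins and the coarsen flags are cleared. With refinement flags and future FE indices available on ghost cells, each rank can evaluate this even when siblings live on different subdomains.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sibling record: an h-coarsen flag and whether a future
// FE index (p-adaptation request) has been assigned.
struct Sibling
{
  bool coarsen_flag;
  bool future_fe_set;
};

// Returns true if the family is h-coarsened. If not all siblings are
// flagged, or any sibling requests p-adaptation, prefer p: clear every
// coarsen flag and return false.
bool
choose_p_over_h(std::vector<Sibling> &family)
{
  bool all_coarsen = true;
  bool any_p       = false;
  for (const Sibling &s : family)
    {
      all_coarsen = all_coarsen && s.coarsen_flag;
      any_p       = any_p || s.future_fe_set;
    }

  if (!all_coarsen || any_p)
    {
      for (Sibling &s : family)
        s.coarsen_flag = false;
      return false;
    }
  return true;
}
```

The point of the patch is precisely that the inputs to this decision (flags and future FE indices of all siblings) are now communicated to ghost cells, so the loop above has valid data regardless of partitioning.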