
Allow boundary nodes to move in the Laplace Mesh Smoother #3399

Draft: dschwen wants to merge 7 commits into devel from smooth_3385

Conversation

@dschwen (Member) commented Sep 13, 2022

This is work in progress, but I want to check if I'm already breaking stuff before continuing.

Closes #3385

@dschwen marked this pull request as draft September 13, 2022 22:37
@dschwen changed the title from "WIP: Allow boundary nodes to move in the Laplace Mesh Smoother" to "Allow boundary nodes to move in the Laplace Mesh Smoother" Sep 13, 2022
@dschwen force-pushed the smooth_3385 branch 3 times, most recently from 56bc7a3 to 1e1c25e September 14, 2022 16:43
@moosebuild

Job Test MOOSE mac on 1e1c25e : invalidated by @roystgnr

Kicking the mac CI; intermittent Civet problem there.

@moosebuild commented Sep 14, 2022

Job Coverage on 412c777 wanted to post the following:

Coverage

|        | f29b2d (Total) | #3399 412c77 (Total) | +/-    | New    |
|--------|----------------|----------------------|--------|--------|
| Rate   | 56.26%         | 56.30%               | +0.03% | 98.55% |
| Hits   | 44952          | 45013                | +61    | 68     |
| Misses | 34945          | 34944                | -1     | 1      |

Diff coverage report

Full coverage report

This comment will be updated on new commits.

@dschwen marked this pull request as ready for review September 15, 2022 04:20
@dschwen (Member, Author) commented Sep 15, 2022

Needs some more testing.

@dschwen force-pushed the smooth_3385 branch 9 times, most recently from 3a0c628 to 2711276 September 18, 2022 02:42
@dschwen (Member, Author) commented Sep 18, 2022

Hm, there is a small diff on one node in parallel. That happens for both movable and non-movable boundary nodes. It seems to me that the Laplace smoother wasn't tested before either (except maybe in MOOSE). I'm trying to debug this.

Is there a way to run a single unit test? The libmesh tests take quite long.

@dschwen (Member, Author) commented Sep 18, 2022

In other infuriating news: the new unit test isn't even run when I build and make check locally, yet it is run on Civet. That makes debugging pretty tedious.

@jwpeterson (Member)

> Is there a way to run a single unit test?

You can run the unit_tests-opt executable directly and pass --re Foo on the command line to run only tests with names matching a regular expression.

@roystgnr (Member)

> You can run the unit_tests-opt executable directly and pass --re Foo on the command line

I kind of like LIBMESH_OPTIONS='--re Foo' make check -C tests/, personally. Runs everything in your $METHODS that way, and remembers to rebuild unit_tests-* first in case you forgot.

> The new unit test isn't even run when I build and make check locally.

Probably you've got a build without CppUnit? At configure time, libMesh looks for cppunit-config in your $PATH or for a CppUnit already in your compiler paths; if you have CppUnit installed without either of those, you need to pass --with-cppunit-include=/foo --with-cppunit-lib=/bar to libMesh configure to specify where to find it.

@dschwen (Member, Author) commented Sep 19, 2022

The unit test failed in parallel until I skipped partitioning. In MOOSE the same test (calling the smoother through the MOOSE API) never failed in parallel. I cannot see the difference though. MOOSE does not skip partitioning. Is there a communication step I'm missing?

@dschwen (Member, Author) commented Sep 20, 2022

The MOOSE module failure is unrelated.

@roystgnr (Member)

Does it fail repeatedly in parallel in the unit test, or intermittently? Does it fail in every parallel CI recipe, or just some subset (e.g. DistributedMesh) of them?

@moosebuild

Job Test MOOSE mac on cf06771 : invalidated by @roystgnr

Stochastic Tools failed stochastically; let's see if we can get the whole board green.

@roystgnr (Member) commented Sep 20, 2022

Looks like all your unit test invocations are hardcoding DistributedMesh? You should add a ReplicatedMesh invocation of each and see if those work (and keep the extra invocations, for better test coverage in the future); if so then I'd guess that's why you can't replicate the problem through MOOSE.

I'd be very surprised if the mesh smoother classes worked on a truly distributed mesh; that's all older code, and I don't recall anyone ever updating it to work distributed. We might need to throw a MeshSerializer in there for the near term.
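For reference, a minimal sketch of the MeshSerializer idea, assuming it is dropped into the top of the smoother's smooth() routine and that the smoother keeps a `_mesh` reference as in the existing MeshSmoother base class; the placement is an assumption for illustration, not this PR's actual change:

```cpp
#include "libmesh/mesh_smoother_laplace.h"
#include "libmesh/mesh_serializer.h"

using namespace libMesh;

void LaplaceMeshSmoother::smooth(unsigned int n_iterations)
{
  // RAII guard: gathers a temporarily serialized copy of a (possibly
  // distributed) mesh on construction; the destructor re-distributes it.
  MeshSerializer serialize(_mesh);

  // ... the existing n_iterations of Laplacian averaging would run
  // here, now safely on a serialized mesh ...
}
```

Because the guard is scope-based, the mesh returns to its previous (distributed) state as soon as smooth() exits, so callers need no changes.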

@dschwen (Member, Author) commented Sep 20, 2022

> Looks like all your unit test invocations are hardcoding DistributedMesh? You should add a ReplicatedMesh invocation of each and see if those work (and keep the extra invocations, for better test coverage in the future);

Yeah, same deal with ReplicatedMesh; that's what I had in there at first. I can easily double those tests up to check both mesh types. That's why it's templated. I'm assuming that without partitioning both mesh types should behave the same?
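A sketch of how doubling up the invocations might look, assuming a plain CppUnit fixture; the names and organization here are hypothetical, not necessarily how the PR's test files are laid out:

```cpp
#include <cppunit/extensions/HelperMacros.h>

#include "libmesh/replicated_mesh.h"
#include "libmesh/distributed_mesh.h"

// Shared, templated test body so both mesh types exercise the
// exact same smoothing scenario (details elided).
template <typename MeshType>
void run_laplace_smoother_test()
{
  // ... build a MeshType, run the smoother, assert on node positions ...
}

class LaplaceSmootherTest : public CppUnit::TestFixture
{
public:
  CPPUNIT_TEST_SUITE(LaplaceSmootherTest);
  CPPUNIT_TEST(testReplicated);
  CPPUNIT_TEST(testDistributed);
  CPPUNIT_TEST_SUITE_END();

  void testReplicated()  { run_laplace_smoother_test<libMesh::ReplicatedMesh>(); }
  void testDistributed() { run_laplace_smoother_test<libMesh::DistributedMesh>(); }
};

CPPUNIT_TEST_SUITE_REGISTRATION(LaplaceSmootherTest);
```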

@dschwen (Member, Author) commented Sep 20, 2022

> Does it fail repeatedly in parallel in the unit test, or intermittently? Does it fail in every parallel CI recipe, or just some subset (e.g. DistributedMesh) of them?

All parallel unit tests fail consistently. The Stochastic Tools test failure is also, ironically, deterministic. There was a PR to fix that failure (not sure if it has percolated through the branches yet).

Update: it seems like it has.

@roystgnr (Member)

It's possible to see differences even without partitioning, depending on how the meshes are generated. For any mesh generation code that adds new unpartitioned objects but doesn't impose an explicit numbering, the implicitly assigned numbering ends up being sequential on ReplicatedMesh (because every rank knows what every other rank is doing, so that's safe) but takes strides of length (Nproc+1) on DistributedMesh (because even if my rank is only adding unpartitioned-hence-serialized objects, I have to leave some space available for my neighbors to also add new local objects that I can't see, if they wish). I haven't looked in detail at your code or at the code you're modifying, but mistakenly assuming that the maximum elem id == the number of elements is a common bug that tends to get tripped in DistributedMesh runs more easily than in ReplicatedMesh runs, for that reason.

But if it's not the replicated/distributed difference making the distinction, then I don't know what's going on that could have MOOSE runs working and unit tests failing.
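A minimal illustration of the id-contiguity pitfall described above, written against the public MeshBase interface (the interface calls are real libMesh API; the loop and the do_something stand-in are illustrative, not code from this PR):

```cpp
#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"

using namespace libMesh;

void do_something(const Elem &) {} // stand-in for per-element work

void visit_all_elems(const MeshBase & mesh)
{
  // Buggy pattern: assumes element ids are contiguous, i.e. that the
  // largest id is n_elem() - 1.  True for implicitly numbered elements
  // on ReplicatedMesh, but DistributedMesh leaves gaps in the id space.
  //
  //   for (dof_id_type e = 0; e < mesh.n_elem(); ++e)
  //     do_something(*mesh.elem_ptr(e)); // may request a nonexistent id

  // Safer pattern: loop over the full id range and skip ids with no
  // element on this processor (gaps, or remote elements if distributed).
  for (dof_id_type e = 0; e != mesh.max_elem_id(); ++e)
    if (const Elem * elem = mesh.query_elem_ptr(e))
      do_something(*elem);
}
```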

@dschwen (Member, Author) commented Sep 28, 2022

Is anything else needed here?

@jwpeterson (Member) left a comment

Any pictures of the 3D test cases?

I also think it would be good if you could add some documentation about the checks you are doing in the has_zero_curvature() function.

Two review threads on src/mesh/mesh_tools.C (outdated, resolved)
@dschwen (Member, Author) commented Sep 28, 2022

> Any pictures of the 3D test cases?

Those are harder to visualize, but check out the Exodus gold files.

@dschwen (Member, Author) commented Sep 28, 2022

Oh dear, hang on... I think something went horribly wrong...

@dschwen marked this pull request as draft September 28, 2022 15:17
@jwpeterson (Member)

> Oh dear, hang on... I think something went horribly wrong...

Can you elaborate? I guess you will let us know when this is ready again by dropping the Draft status on the PR...

@jwpeterson (Member)

I made some screenshots of the 3D case. Since the input mesh is a cube with all flat sides, I would not expect a smoother that allows boundary node movement to be "allowed" to move any boundary nodes on the edges/corners of the cube; only those that live on the faces of the boundary should move, and then only within the planes defined by those faces. I guess the problem is that the "curvature" here is determined to be zero at the edge/corner nodes, so they are allowed to move?

So I would say this is not really working in the 3D case yet... what are your thoughts?

3D input mesh: [screenshot smooth3d_in]
3D output mesh: [screenshot smooth3d_out]
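A minimal sketch of the kind of geometric check this expectation implies, with hypothetical names and self-contained types; this is not the PR's actual has_zero_curvature() logic. The idea: gather the unit normals of the boundary sides touching a node; if they are all parallel, the node sits on a flat face and may slide within it, otherwise (an edge or corner node) it stays fixed.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { double x, y, z; };

double dot(const Vec3 & a, const Vec3 & b)
{
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Classify a boundary node: movable only if every adjacent boundary
// side has (up to sign and tolerance) the same unit normal, i.e. the
// node lies on a flat face.  Edge and corner nodes of a cube see two
// or three distinct normal directions and are kept fixed.
bool node_may_slide(const std::vector<Vec3> & unit_side_normals,
                    double tol = 1e-10)
{
  for (std::size_t i = 1; i < unit_side_normals.size(); ++i)
    if (std::abs(dot(unit_side_normals[0], unit_side_normals[i])) < 1.0 - tol)
      return false; // surface bends here: edge or corner node
  return true;
}
```

Under this check, a node passing node_may_slide would still need its update projected onto the common face plane, so it slides within the face rather than off it.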

@roystgnr (Member) commented Oct 3, 2022

> Is anything else needed here?

Sorry to vanish on you last week. My one concern is the failures on partitioned meshes. Running on a distributed mesh is kind of a special use case, but running on a partitioned mesh is a requirement for most users, no? And if we don't know why it's failing there, then can we even be confident that it's just a bug with the new feature, not a regression of previous smoother behavior too?

@dschwen (Member, Author) commented Oct 6, 2022

The "oh dear" was my reaction to visualizing the 3D result, which is obviously broken, but worked intermediately. Then I changed something and mindlessly regolded :-(. I was on leave last week and will get back to this.

@dschwen (Member, Author) commented Nov 15, 2022

Hm, I think I'd better scale back the scope of this PR to 2D only, if that is acceptable. The 3D case is a bit more complicated than I anticipated. To do it right, there has to be an additional condition: nodes sliding along edges when all connected elements are "flat".

Successfully merging this pull request may close these issues.

Permit mesh smoothers to move boundary/interface nodes