Print global number of cells and dofs #1865
Conversation
Review checklist
This checklist is meant to assist creators of PRs (to let them know what reviewers will typically look for) and reviewers (to guide them in a structured review process). Items do not need to be checked explicitly for a PR to be eligible for merging.
Purpose and scope
Code quality
Documentation
Testing
Performance
Verification
Created with ❤️ by the Trixi.jl community.
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files
@@ Coverage Diff @@
## main #1865 +/- ##
==========================================
- Coverage 96.30% 96.12% -0.19%
==========================================
Files 440 440
Lines 35793 35800 +7
==========================================
- Hits 34470 34410 -60
- Misses 1323 1390 +67
Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
ncells is used elsewhere and has to be the local number.
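For context, a minimal sketch of the distinction drawn here (hypothetical names; `ncells` and `mpi_comm` are the Trixi.jl helpers, while the global counter shown below is purely illustrative and not necessarily the name this PR uses):

```julia
using MPI

# Hypothetical sketch: ncells stays rank-local because other code relies on it;
# a separate helper sums the local counts over all MPI ranks.
ncells_local(mesh)  = ncells(mesh)                                # unchanged, rank-local
ncells_global(mesh) = MPI.Allreduce(ncells(mesh), +, mpi_comm())  # summed over ranks
```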
Thanks! In #1616, you mentioned that the analysis callback also prints only the local information. Does this PR fix this as well?

```julia
"""
    ndofsglobal(mesh, solver, cache)

Return the global number of degrees of freedom associated with each scalar variable.
Defaults to ndofs when there is no special implementation for parallel computations.
"""
@inline function ndofsglobal(mesh, solver, cache)
```
Suggested change:

```diff
 """
     ndofsglobal(mesh, solver, cache)

 Return the global number of degrees of freedom associated with each scalar variable.
-Defaults to ndofs when there is no special implementation for parallel computations.
+Defaults to [`ndofs`](@ref) when there is no special implementation for
+MPI-parallel computations.
 """
 @inline function ndofsglobal(mesh, solver, cache)
```
But I'm not sure whether we should rather turn this into a comment to avoid making it part of our official API. From my point of view, the methods accepting a semidiscretization are fine, but this one is more like an implementation detail.
I am not sure about it. My concern was that with `ndofsglobal(semi::AbstractSemidiscretization)` introduced above, `ndofsglobal(mesh, solver, cache)` might get called with some mesh, solver, cache combinations for which it does not exist. If I see it correctly, there is only `@inline function ndofsglobal(mesh, dg::DG, cache)` in dg.jl.
What's your opinion, @sloede?
I'd consider it an implementation detail as well.
However, does it really need a fallback? Or, alternatively, could we check for `mpi_isparallel() == 0` and otherwise error, such that a user cannot accidentally use that with a non-parallelized mesh?
> I'd consider it an implementation detail as well.

Ok! However I am afraid I do not understand what this implies.

> However, does it really need a fallback?

I added `ndofsglobal(semi)` to semidiscretization_hyperbolic.jl, which calls `ndofsglobal(mesh, solver, cache)`. So when `solver` is not `DG`, there should be a MethodError, shouldn't there?
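A toy illustration of this dispatch concern (hypothetical stand-in types, not Trixi.jl code):

```julia
# Only a DG-like specialization exists; any other solver type hits a MethodError
# when the generic entry point forwards to it.
struct ToyDG end
struct ToyFV end

ndofsglobal_toy(mesh, solver::ToyDG, cache) = 42
entry_point(mesh, solver, cache) = ndofsglobal_toy(mesh, solver, cache)

entry_point(nothing, ToyDG(), nothing)    # works, returns 42
# entry_point(nothing, ToyFV(), nothing)  # MethodError: no matching method
```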
Ah, the docstring! Thanks!

The problem is that I added `ndofsglobal` to `function Base.show` with the idea that the global number of dofs always gets printed, with `ndofs` as fallback in case there is no `ndofsglobal`.

An alternative could be to introduce a separate function just for printing, say `print_ndofs`, which would then check for `mpi_isparallel()` or have a default fallback.
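A rough sketch of the two alternatives mentioned above (hypothetical; `print_ndofs` is only the name floated in this comment, and `summary_line` is the printing helper that appears in the Base.show diffs further down):

```julia
# Alternative 1: generic fallback, so Base.show can always call ndofsglobal
@inline ndofsglobal(mesh, solver, cache) = ndofs(mesh, solver, cache)

# Alternative 2: keep ndofsglobal for MPI-aware meshes only and route the
# startup-screen printing through a dedicated helper instead
function print_ndofs(io, semi)
    n = mpi_isparallel() ? ndofsglobal(semi) : ndofs(semi)
    summary_line(io, "total #DOFs per field", n)
end
```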
Is this resolved after b883b3a?
It resolves the API thing:

Our current policy is that if something has a docstring, we consider it part of our API, and then changing its API or its behavior would be a breaking change. Thus we don't add docstrings to purely internal functions.

Otherwise this PR got stuck on the question of how to print the global number of dofs on Trixi's startup screen for a mesh which does not have a parallel implementation and thus nothing like `ndofsglobal`.
I'm a little confused and can't wrap my head around all the implications. Instead of writing back and forth, can we maybe discuss this at the next meeting (if that's not too late for you @benegee)?
Would it be a safe way out of this if we were to implement a fallback that checks `mpi_isparallel()` and errors if it is true? That is, someone who implements a mesh that runs in parallel but does not implement this function gets the error thrown in their face?
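A sketch of what such a fallback could look like (illustrative only, not the code merged in this PR; the error message is made up):

```julia
# Works as a plain ndofs fallback in serial runs, but errors instead of silently
# reporting a rank-local count when running with MPI without a specialized method.
@inline function ndofsglobal(mesh, solver, cache)
    if mpi_isparallel()
        error("ndofsglobal is not implemented for ", typeof(mesh),
              " in MPI-parallel runs")
    end
    return ndofs(mesh, solver, cache)
end
```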
Co-authored-by: Hendrik Ranocha <ranocha@users.noreply.github.com>
What about other mesh types like the […]?
The […]
@benegee Please note that you should also adapt the AMR output, probably in these three functions:
- Trixi.jl/src/callbacks_step/analysis.jl, line 494 (at 8a9fc7b)
- Trixi.jl/src/callbacks_step/analysis.jl, line 520 (at 8a9fc7b)
- Trixi.jl/src/callbacks_step/analysis.jl, line 554 (at 8a9fc7b)

Otherwise we get a global element count but only rank-0 information on AMR, which is bound to cause confusion IMHO.
True! I realized this in the meantime as well, but have not finished the MPI syncing of element counts.
Optimally, you'll use an implementation that only requires a single additional […]
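One way to read the "single additional" operation: collect the per-level element counts in one vector and combine them with a single collective call (a hypothetical sketch using MPI.jl directly, not the PR's actual code; `element_levels` stands for whatever per-element AMR level information the mesh provides):

```julia
using MPI

# Count elements per AMR level locally, then sum all levels at once with a
# single Allreduce instead of one reduction per level.
function global_element_counts(element_levels::AbstractVector{<:Integer},
                               max_level::Integer, comm::MPI.Comm)
    local_counts = zeros(Int, max_level + 1)   # levels assumed to start at 0
    for level in element_levels
        local_counts[level + 1] += 1
    end
    return MPI.Allreduce(local_counts, +, comm)
end
```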
```diff
@@ -105,7 +106,7 @@ function Base.show(io::IO, ::MIME"text/plain", mesh::P4estMesh)
     else
         setup = [
             "#trees" => ntrees(mesh),
```
Is the number of trees local or global information? If local, please change it to global.
```diff
@@ -91,7 +92,7 @@ function Base.show(io::IO, ::MIME"text/plain", mesh::T8codeMesh)
     else
         setup = [
             "#trees" => ntrees(mesh),
```
Same here
```diff
@@ -310,7 +310,7 @@ function (analysis_callback::AnalysisCallback)(u_ode, du_ode, integrator, semi)
     mpi_println(" " * " " *
                 " " *
                 " PID: " * @sprintf("%10.8e s", performance_index))
-    mpi_println(" #DOFs per field:" * @sprintf("% 14d", ndofs(semi)) *
+    mpi_println(" #DOFs per field:" * @sprintf("% 14d", ndofsglobal(semi)) *
```
Please check (if not yet done) that all other values that are printed by the analysis callback are also for the global problem and not rank-local, otherwise people (especially me 😅) will get confused.

```julia
"""
    ndofsglobal(semi::AbstractSemidiscretization)

Return the global number of degrees of freedom associated with each scalar variable.
```
Suggested change:

```diff
-Return the global number of degrees of freedom associated with each scalar variable.
+Return the global number of degrees of freedom associated with each scalar variable across all MPI ranks.
```

Just to clarify what is meant by "global".
```julia
@inline function ndofsglobal(semi::SemidiscretizationCoupled)
    sum(ndofsglobal, semi.semis)
end
```
Suggested change:

```diff
+"""
+    ndofsglobal(semi::SemidiscretizationCoupled)
+
+Return the global number of degrees of freedom associated with each scalar variable across all MPI ranks,
+and summed up over all coupled systems.
+This is the same as [`ndofs`](@ref) for simulations running in serial or
+parallelized via threads. It will in general be different for simulations
+running in parallel with MPI.
+"""
 @inline function ndofsglobal(semi::SemidiscretizationCoupled)
     sum(ndofsglobal, semi.semis)
 end
```
```diff
@@ -314,7 +314,7 @@ function Base.show(io::IO, ::MIME"text/plain", semi::SemidiscretizationHyperbolic)

         summary_line(io, "source terms", semi.source_terms)
         summary_line(io, "solver", semi.solver |> typeof |> nameof)
-        summary_line(io, "total #DOFs per field", ndofs(semi))
+        summary_line(io, "total #DOFs per field", ndofsglobal(semi))
```
Please also make sure that there are no rank-local quantities printed here.
Resolves #1616