On Lumi (same architecture as Frontier) I am observing a significant performance regression (2x slower) in AthenaPK/Parthenon when linking against Ascent (without even calling any of its functions).
Looking at some profiling data, it seems that compute kernels are not affected and that the regression stems from waiting for communication between ranks to finish (so it is independent of any startup cost, IO, ...).
Has anyone observed something similar?
Does anything special happen to MPI, or to the runtime configuration, when Ascent is linked in?
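For anyone trying to reproduce this, here is a minimal sketch of the kind of test that could isolate the effect (the benchmark is hypothetical, not from AthenaPK): a bare `MPI_Allreduce` loop, built once linked against Ascent and once without, so that any timing difference can only come from the link step itself.

```cpp
// allreduce_bench.cpp -- hypothetical microbenchmark; build once with and
// once without the Ascent link line (no Ascent calls) and compare timings.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  const int iters = 1000;
  std::vector<double> buf(1 << 20, 1.0);  // 8 MiB payload per rank
  std::vector<double> out(buf.size());

  MPI_Barrier(MPI_COMM_WORLD);
  const double t0 = MPI_Wtime();
  for (int i = 0; i < iters; ++i) {
    MPI_Allreduce(buf.data(), out.data(), static_cast<int>(buf.size()),
                  MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
  }
  MPI_Barrier(MPI_COMM_WORLD);
  const double t1 = MPI_Wtime();

  if (rank == 0) {
    std::printf("mean allreduce time: %.6f s\n", (t1 - t0) / iters);
  }
  MPI_Finalize();
  return 0;
}
```

If the plain benchmark shows the same 2x slowdown, that would point at the link/runtime environment rather than anything in Parthenon's communication pattern.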
Wow, that is a very unfortunate bug. I'm sorry you're experiencing that.
The only thing I've experienced that comes close is a case where linking Ascent with an AMR code resulted in a detrimental memory leak. It turned out that tailoring Ascent's build more closely to the AMR code's build specification resolved the problem, but I don't have many details.
@cyrush can chime in next week, when he's back, if he recalls anything like this happening before.
My software environment on Lumi looks like:
And here are the libraries being linked for a build without Ascent:
and with Ascent:
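Since the two link lines differ, one thing worth double-checking is whether both binaries actually resolve to the same MPI at runtime. A small sketch using only standard MPI-3 calls, nothing Ascent-specific:

```cpp
// mpi_ident.cpp -- print which MPI library the binary actually resolved to;
// compare the output of the Ascent and non-Ascent builds.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  char version[MPI_MAX_LIBRARY_VERSION_STRING];
  int len = 0;
  MPI_Get_library_version(version, &len);  // MPI-3: implementation ID string

  int major = 0, minor = 0;
  MPI_Get_version(&major, &minor);  // version of the MPI standard

  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  if (rank == 0) {
    std::printf("MPI standard %d.%d\n%s\n", major, minor, version);
  }
  MPI_Finalize();
  return 0;
}
```

Comparing `ldd` output of the two binaries can likewise show whether the dynamic loader picks up different MPI or network libraries once Ascent is on the link line.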