Implement kspace_modify collective yes in KOKKOS package #4143
base: develop
Conversation
@hagertnl this wasn't actually being used in …
I tested this on 40 MPI ranks on CPU and on 4 V100 GPUs and it seems to give the correct numerics, nice work @hagertnl. Really curious what your scaling analysis will show, especially at large scale.
I figured that out when my performance numbers were turning out to be exactly the same, hah! I still need to tie in the …

Preliminary: at 512 nodes (4096 ranks), I see about a 13% speedup, with correct answers. I'd like to do a quick round of optimization; I think the setup is doing double the work it needs to for the collectives, creating excessive overhead for the collective approach. Not surprisingly, small scale saw no benefit, and the one case I ran showed some performance regression. This is why I wanted to do a round of optimization: it seems like there may be excessive overhead.
Here are some preliminary numbers from Frontier, 8 ranks per node (MPI timings from rank 1):
Some of these jobs hit network timeouts, and these timings aren't averaged, so these numbers may change slightly run-to-run. These runs are also without the patch from #4133, so attempting to draw conclusions across job sizes is tricky. The conclusions I draw from this currently are:
@stanmoore1 currently, the collective code for kokkos and non-kokkos PPPM is fairly apples-to-apples. From your end, would you prefer any of the following with respect to the scope of work in this PR? (1) finish verifying the correctness of the implementation we have now, then merge, and optimize both kokkos and non-kokkos PPPM collectives in a later PR; (2) optimize just kokkos PPPM now (diverging the Kokkos implementation of remap from non-Kokkos); (3) optimize both kokkos and non-kokkos PPPM collectives now; or something else?
@hagertnl it is odd that the 512 node case is slower than 4096. Do these results include the patch with better load balancing by not using full xy planes, like we saw previously?
This is mostly up to you and how much time you have to work on this. If you have more time it would be great to optimize more, otherwise we can merge now as-is.
This does not include the patch for better load balancing. Is that something you'd like me to put in this PR? I thought it might've been in another PR already.
I think the load balancing issue is confounding the …
Sounds good. I'll apply the patch in this PR and try out some optimization.
…t passing. Beginning optimization of collective
…p memory allocations
This is close to what I hope is the final version. I am currently running scaling studies on Frontier & Summit to test. At small scale, pt2pt is 10%+ faster than collective, but that's to be expected. I'm very interested in what happens at 512+ nodes.

Note -- I replaced a ~100-line section of code that looked for what other ranks needed to communicate for commringlist with a single Allreduce. I assume that if the user wants to use Alltoallv, then the Allreduce should also be performant on the platform, and it saves running a nested for-loop that gets very expensive at scale. I did some testing of this against the old 100-line section of code and it is performance-neutral for 1-64 nodes. I did not have a chance to run at 512+, but unless the Allreduce is prohibitively expensive, I don't see a reason the local commringlist building would be faster or preferred.
I agree with this.
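The Allreduce-based discovery described above can be illustrated with a minimal pure-Python sketch. This is not the PR's actual code: there is no real MPI here, the array layout (`mark[r][p] == 1` meaning rank `r` exchanges grid data with rank `p`) is an assumption for illustration, and the elementwise max stands in for `MPI_Allreduce(..., MPI_MAX, ...)`. The point is only that one reduction replaces the nested partner-search loop.

```python
def allreduce_max(vectors):
    """Elementwise max across all ranks' contributions,
    mimicking MPI_Allreduce with MPI_MAX on an int array."""
    nprocs = len(vectors[0])
    return [max(v[p] for v in vectors) for p in range(nprocs)]

def build_commringlist(mark):
    """Each rank contributes the partners it knows about; a single
    reduction yields the full set of participating ranks, replacing
    the O(nprocs^2) nested search."""
    reduced = allreduce_max(mark)
    return [p for p, flag in enumerate(reduced) if flag]

# Hypothetical example: 4 ranks, of which ranks 0-2 exchange grid
# data among themselves and rank 3 owns no overlapping data.
mark = [
    [1, 1, 1, 0],  # rank 0 exchanges with ranks 0-2
    [1, 1, 1, 0],  # rank 1 likewise
    [1, 1, 1, 0],  # rank 2 likewise
    [0, 0, 0, 0],  # rank 3 participates in no exchanges
]
print(build_commringlist(mark))  # → [0, 1, 2]
```

Each rank ends up with the same ring membership after one collective, which is what makes the nested local search unnecessary.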
In profiling (to validate message sizes), I noticed that ranks send themselves a significant amount of data. For example, in one MPI_Alltoallv call, about 746 MB were sent in total, and half of that -- 373 MB -- was sent by the current process to itself.
I am thinking it is worth checking the performance impact of implementing …
Interesting, yes that would be good to check.
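The comment above is truncated, so the exact optimization being proposed isn't visible; one plausible reading is handling the self-block with a local copy instead of routing it through the Alltoallv (e.g. by zeroing the self entry of the send counts). A small Python sketch of the arithmetic, with made-up per-rank counts shaped like the 746 MB / 373 MB profile quoted earlier:

```python
# Hypothetical sketch, not the PR's code: estimate what fraction of an
# Alltoallv's volume a rank sends to itself, and how much would cross
# the network if the self-block were handled with a local copy.

def self_send_fraction(sendcounts, me):
    """sendcounts[p] = bytes this rank sends to rank p."""
    total = sum(sendcounts)
    return sendcounts[me] / total if total else 0.0

MB = 1024 * 1024
# Invented counts for rank 0 on 3 ranks, mirroring the profiled shape:
# 746 MB total, 373 MB of it addressed back to rank 0 itself.
sendcounts = [373 * MB, 187 * MB, 186 * MB]
frac = self_send_fraction(sendcounts, me=0)
print(f"{frac:.0%} of the Alltoallv volume is a self-send")  # → 50%

# With sendcounts[me] zeroed and the self-block memcpy'd locally,
# only the off-rank counts actually have to move over the network.
network_bytes = sum(c for p, c in enumerate(sendcounts) if p != 0)
print(network_bytes // MB, "MB would cross the network")  # → 373 MB
```

Whether the MPI library already short-circuits self-sends is implementation-dependent, which is presumably why measuring it is worthwhile.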
Summary
Implements the `kspace_modify collective yes` option in the KOKKOS package with KSPACE enabled.
Related Issue(s)
closes #4140
Author(s)
Nick Hagerty, Oak Ridge National Laboratory
Licensing
By submitting this pull request, I agree that my contribution will be included in LAMMPS and redistributed under either the GNU General Public License version 2 (GPL v2) or the GNU Lesser General Public License version 2.1 (LGPL v2.1).
Backward Compatibility
No change in backward compatibility.
Implementation Notes
A simple water box with PPPM enabled was run on 1 and 8 ranks on a single node, and on 512, 4096, and 32K ranks for scaling analysis.
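For reference, a minimal input fragment exercising this code path might look like the following; the surrounding water-box setup is assumed, not taken from the PR.

```
kspace_style    pppm 1.0e-4
kspace_modify   collective yes   # use collective MPI (Alltoallv) instead of point-to-point in the FFT remap
```

With the KOKKOS package, such an input would typically be launched with the Kokkos suffix enabled, e.g. `lmp -k on g 1 -sf kk -in in.water` on one GPU.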
Post Submission Checklist
Further Information, Files, and Links