Implement kspace_modify collective yes in KOKKOS package #4143
base: develop
Conversation
@hagertnl this wasn't actually being used in
I tested this on 40 MPI ranks on CPU and on 4 V100 GPUs and it seems to give the correct numerics, nice work @hagertnl. Really curious what your scaling analysis will show, especially at large scale.
I figured that out when my performance numbers were turning out to be exactly the same, hah! I still need to tie in the

Preliminary results: at 512 nodes (4096 ranks), I see about a 13% speedup, with correct answers. I'd like to do a quick round of optimization; I think the setup is doing double the work it needs to for the collectives, creating excessive overhead for the collective approach. Not surprisingly, small scale saw no benefit, and the case I ran showed some performance regression. That is why I want to do a round of optimization: there seems to be excessive overhead.
Here are some preliminary numbers from Frontier, 8 ranks per node (MPI timings from rank 1):
Some of these jobs hit network timeouts, and these timings aren't averaged, so the numbers may change slightly run-to-run. These runs are also without the patch from #4133, so attempting to draw conclusions across job sizes is tricky. The conclusions I currently draw from this are:
@stanmoore1 currently, the collective code for kokkos and non-kokkos PPPM are fairly apples-to-apples. From your end, which would you prefer with respect to the scope of work in this PR? (1) finish verifying the correctness of the implementation we have now, then merge, and optimize both kokkos and non-kokkos PPPM collectives in a later PR; (2) optimize just kokkos PPPM now (diverging the Kokkos implementation of remap from non-Kokkos); (3) optimize kokkos & non-kokkos PPPM collectives now; or something else?
@hagertnl it is odd that the 512 node case is slower than 4096. Do these results include the patch with better load balancing by not using full xy planes, like we saw previously?
This is mostly up to you and how much time you have to work on this. If you have more time it would be great to optimize more, otherwise we can merge now as-is.
This does not include the patch for better load balancing. Is that something you'd like me to put in this PR? I thought it might've been in another PR already.
I think the load balancing issue is confounding the
Sounds good. I'll apply the patch in this PR and try out some optimization. |
…t passing. Beginning optimization of collective
…p memory allocations
This is close to what I hope is the final version. I am currently running scaling studies on Frontier & Summit to test. At small scale, pt2pt is 10%+ faster than collective, but that's to be expected. I'm very interested in what happens at 512+ nodes. Note: I replaced a ~100-line section of code that looked for what other ranks needed to communicate for commringlist with a single Allreduce. I assume that if the user wants to use Alltoallv, then Allreduce should be performant on the platform as well, and it saves running a nested for-loop that gets very expensive at scale. I tested this against the old 100-line section of code and it is performance-neutral for 1-64 nodes. I did not have a chance to run at 512+ nodes, but unless the Allreduce is prohibitively expensive, I don't see a reason the local commringlist building would be faster or preferred.
I agree with this. |
In profiling (to validate message sizes), I noticed that ranks send themselves a significant amount of data. For example, in the Alltoallv call below, about 746 MB were sent in total by the MPI_Alltoallv, and half of that -- 373 MB -- was sent to the current process:
I am thinking it is worth checking the performance impact of implementing
Interesting, yes that would be good to check. |
Just a status update, currently I'm debugging an issue on Infiniband networks, where the NIC appears to be reporting that it's out of physical memory while doing the Alltoallv. Once I have that worked out, I'll do a quick scaling study with small & large system sizes on both Summit and Frontier. For separating communication to self out from MPI, I observed a small performance improvement at small node counts (still slower than pt2pt), but no observable improvement at large scale on Frontier. I was trying to test this on Summit when I hit the Infiniband error. I'll test both implementations on Frontier & Summit when I do the scaling study, and we can figure out which implementation we want based on the data. |
@hagertnl very interesting, thanks for working on this. |
Performance data from Frontier (Slingshot 11, Dragonfly topology) and Summit (Infiniband, Fat-tree topology) for small & large systems. Summit's "large" problem is 8x smaller per-node than Frontier's, due to HBM limits. The small problems are 8x smaller per-node than large for each system. In all cases, I ran 3 jobs and took the average of timesteps/sec, excluding obvious outliers. Regarding the issues at 4096 nodes I saw on Summit, it appears to be due to this: https://www.open-mpi.org/faq/?category=openfabrics#ib-locked-pages. Basically, we're just overrunning the resources available. There are some parameters we can tune to get it to work, but I don't know them off the top of my head. Smaller systems were able to run with collectives without a problem. |
Just pushed the source for the collective+self optimization. I still need to clean up the memory deallocations. The remap destructor is not organized like the allocations, and I think I'm probably allocating more arrays than necessary, or I can re-use some of the existing pt2pt variables.
@hagertnl looks like the collective option can give significant speedup on Frontier, which is great.
@stanmoore1 it's actually the opposite (the metric reported was timesteps/sec) -- it's quite a slowdown on Frontier at scale. But it is quite an improvement on full-system Summit (>2x faster for small systems). Frontier has a much higher message injection rate into the fabric due to its 4 NICs per node (only 1 NIC per node on Summit), so I'm not very surprised by these results. Also, cray-mpich is still being tuned for AMD GPU support, so some lesser-used collectives may not be fully optimized. Note too that Summit's pt2pt results use Infiniband network settings tuned for a ~50% speedup over the defaults, so it's impressive that collectives still beat that. All that to say, I think the benefit of these optimizations depends more heavily on node design than I originally thought. If the NIC's injection rate is not limiting the MPI_Sends, then it's entirely possible that collectives won't be optimal. One interesting thing to try could be implementing MPI_Isends instead of Sends, using the full send buffer as we do with collectives so that each rank has an independent portion of the buffer, but that's an addition for later, I think.
Ah got it, interesting... |
…os.h for collective fields
I believe this code is ready to go then -- I checked whether the docs need updates, and it doesn't look like they do. What I currently have pushed is the implementation where the current processor's data is copied locally rather than sent through MPI. This was faster or comparable in all cases above, and is especially beneficial at small scale.
@hagertnl thanks for all your effort on this. How beneficial do you think it would be to port the changes to Remap to non-Kokkos? (Doesn't need to be done by you or in this PR, just curious...)
That's a good question... The main performance change I could see is in that nested for-loop that locally builds the commringlist, which I replaced with MPI_Allreduce, and it's hard to say what will be faster: an Allreduce over a lot of ranks, or a double-nested for-loop with a lot of iterations in each. I think as the number of ranks goes up, the Allreduce should be faster. And since most processes send a lot of data to themselves, pulling that communication out of the Alltoallv into a local pack/unpack could be beneficial at all scales. It would also contribute to code maintainability, I think. There are a few spots where I felt the logic was tricky to follow; I think the Kokkos implementation is more readable than the non-Kokkos version was. Mainly just getting larger conditional blocks for

Given that there's an existing implementation, it would be neat to see if there's any performance difference from the changes!
Summary
Implements the kspace_modify collective yes option in the KOKKOS package with KSPACE enabled.

Related Issue(s)
closes #4140
closes #4133
Author(s)
Nick Hagerty, Oak Ridge National Laboratory
Licensing
By submitting this pull request, I agree, that my contribution will be included in LAMMPS and redistributed under either the GNU General Public License version 2 (GPL v2) or the GNU Lesser General Public License version 2.1 (LGPL v2.1).
Backward Compatibility
No change in backward compatibility.
Implementation Notes
A simple water box with PPPM enabled was run on 1 and 8 ranks on a single node, and on 512, 4096, and 32K ranks for scaling analysis.
Post Submission Checklist
Further Information, Files, and Links