Using the mpich build 'mpich/20231026/icc-all-pmix-gpu' on Sunspot, with Darshan and VTune for performance analysis, I am seeing what appears to be very bad performance in the messaging layer for the ROMIO collective buffering aggregation. I am using the HDF5 h5bench exerciser benchmark, which uses collective MPI-IO for the backend. This is on just 1 node, so intra-node communication only. Looking at Darshan, for example with 2 ranks I see:
Time is in seconds. The total MPI-IO time is 0.79 sec, and within that the POSIX (Lustre I/O) time is only 0.27 sec to write and 0.10 sec to read (when doing read-modify-write), so the delta is most likely the messaging layer. With 16 ranks it gets much worse:
So at 16 ranks the ratio of messaging-layer time within MPI-IO is much higher. HDF5 is using collective MPI-IO aggregation, so there is a POSIX section with all the times for the actual Lustre filesystem interaction, and then an MPIIO section whose times include all the messaging plus the POSIX time; taking the delta between them roughly gives the messaging time for the aggregation. With VTune I can see that almost all the time for MPI-IO writing (MPI_File_write_at_all) is in OFI. So for 1 node and 16 ranks the question is: out of 37.61 seconds in MPIIO, only 2.6 seconds are spent writing to Lustre, leaving over 35 seconds doing what I presume is MPI communication for the aggregation. To reproduce on Sunspot running against Lustre (gila):
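The delta arithmetic above can be sketched as a quick sanity check; the numbers are the ones quoted from the Darshan reports (everything else here is just illustrative):

```python
# Rough estimate of messaging-layer overhead in ROMIO collective buffering.
# The MPIIO time includes messaging plus the POSIX (Lustre) time, so the
# delta between the two is approximately the aggregation messaging cost.
def messaging_overhead(mpiio_time, posix_time):
    """Return (messaging seconds, fraction of total MPIIO time)."""
    delta = mpiio_time - posix_time
    return delta, delta / mpiio_time

# 2 ranks: 0.79 s MPIIO vs 0.27 s write + 0.10 s read in POSIX
msg2, frac2 = messaging_overhead(0.79, 0.27 + 0.10)

# 16 ranks: 37.61 s MPIIO vs ~2.6 s of POSIX writes
msg16, frac16 = messaging_overhead(37.61, 2.6)

print(f"2 ranks:  {msg2:.2f} s messaging ({frac2:.0%} of MPIIO time)")
print(f"16 ranks: {msg16:.2f} s messaging ({frac16:.0%} of MPIIO time)")
```

This makes the scaling problem concrete: the messaging share of MPI-IO time grows from roughly half at 2 ranks to over 90% at 16 ranks.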
Start interactive job on 1 node:
qsub -lwalltime=60:00 -lselect=1 -A Aurora_deployment -q workq -I
Then to get the darshan text file run this: