Performance issue in ROMIO collective buffer aggregation for Parallel HDF5 on sunspot #6984

Open
pkcoff opened this issue Apr 18, 2024 · 0 comments

pkcoff commented Apr 18, 2024

Using the mpich build 'mpich/20231026/icc-all-pmix-gpu' on sunspot, with darshan and vtune for performance analysis, I am seeing what appears to be very poor performance in the messaging layer for the ROMIO collective buffering aggregation. I am using the HDF5 h5bench exerciser benchmark, which uses collective MPI-IO for its backend. This is just on 1 node, so the communication is all intra-node. Looking at darshan, for example with 2 ranks I see:

POSIX   -1      14985684057340396765    POSIX_F_READ_TIME       0.101752        /lus/gila/projects/Aurora_deployment/pkcoff/run/h5bench/rundir8991806/hdf5TestFile-844165987    /lus/gila       lustre
POSIX   -1      14985684057340396765    POSIX_F_WRITE_TIME      0.272666        /lus/gila/projects/Aurora_deployment/pkcoff/run/h5bench/rundir8991806/hdf5TestFile-844165987    /lus/gila       lustre
MPI-IO  -1      14985684057340396765    MPIIO_F_WRITE_TIME      0.797941        /lus/gila/projects/Aurora_deployment/pkcoff/run/h5bench/rundir8991806/hdf5TestFile-844165987    /lus/gila       lustre

Times are in seconds. The total MPI-IO write time is 0.79 sec; within that, the POSIX (Lustre I/O) time is only 0.27 sec for writes plus 0.10 sec for reads (from read-modify-write), so the delta is most likely the messaging layer. With 16 ranks it gets much worse:

POSIX   -1      4672546656109652293     POSIX_F_READ_TIME       0.774221        /lus/gila/projects/Aurora_deployment/pkcoff/run/h5bench/rundir8991811/hdf5TestFile-1491544850   /lus/gila       lustre
POSIX   -1      4672546656109652293     POSIX_F_WRITE_TIME      1.827263        /lus/gila/projects/Aurora_deployment/pkcoff/run/h5bench/rundir8991811/hdf5TestFile-1491544850   /lus/gila       lustre
MPI-IO  -1      4672546656109652293     MPIIO_F_WRITE_TIME      37.605015       /lus/gila/projects/Aurora_deployment/pkcoff/run/h5bench/rundir8991811/hdf5TestFile-1491544850   /lus/gila       lustre
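For reference, once darshan-parser has produced darshan.txt (see the last step below), this delta can be estimated directly from the counters; a minimal awk sketch, assuming the whitespace-separated darshan-parser record layout shown above (counter value in the fifth field) and a single shared HDF5 file in the log:

awk '/POSIX_F_READ_TIME|POSIX_F_WRITE_TIME/ { posix += $5 }
     /MPIIO_F_WRITE_TIME/ { mpiio += $5 }
     END { printf "POSIX %.2fs  MPI-IO %.2fs  delta (messaging) %.2fs\n", posix, mpiio, mpiio - posix }' darshan.txt

For this 16-rank run that works out to roughly 37.61 - (1.83 + 0.77) ≈ 35 seconds spent outside the POSIX read/write calls.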

So for 16 ranks the messaging-layer share of the MPI-IO time is a lot higher. HDF5 is using collective MPI-IO aggregation, so the POSIX section has all the times for the actual Lustre filesystem interaction, while the MPIIO section's times include all the messaging plus the POSIX time; taking the delta between them roughly gives the messaging time for aggregation. With Vtune I can see that almost all the time for MPI-IO writing (MPI_File_write_at_all) is in ofi. So for 1 node, 16 ranks: out of 37.61 seconds of MPIIO time, only about 2.6 seconds are spent doing POSIX I/O to Lustre, leaving over 35 seconds doing what I presume is MPI communication for the aggregation. To reproduce on sunspot running against lustre (gila):

Start an interactive job on 1 node: qsub -lwalltime=60:00 -lselect=1 -A Aurora_deployment -q workq -I

cd /lus/gila/projects/Aurora_deployment/pkcoff/tarurundir
module unload mpich/icc-all-pmix-gpu/52.2
module use /soft/preview-modulefiles/24.086.0
module load mpich/20231026/icc-all-pmix-gpu
export ROMIO_HINTS=/lus/gila/projects/Aurora_deployment/pkcoff/tarurundir/romio_hints
export MPIR_CVAR_ENABLE_GPU=1
export MPIR_CVAR_BCAST_POSIX_INTRA_ALGORITHM=mpir
export MPIR_CVAR_ALLREDUCE_POSIX_INTRA_ALGORITHM=mpir
export MPIR_CVAR_BARRIER_POSIX_INTRA_ALGORITHM=mpir
export MPIR_CVAR_REDUCE_POSIX_INTRA_ALGORITHM=mpir
unset MPIR_CVAR_CH4_COLL_SELECTION_TUNING_JSON_FILE
unset MPIR_CVAR_COLL_SELECTION_TUNING_JSON_FILE
unset MPIR_CVAR_CH4_POSIX_COLL_SELECTION_TUNING_JSON_FILE
export LD_LIBRARY_PATH=/lus/gila/projects/Aurora_deployment/pkcoff/tarurundir:/soft/datascience/aurora_nre_models_frameworks-2024.0/lib/
export FI_PROVIDER=cxi
export FI_CXI_DEFAULT_CQ_SIZE=131072
export FI_CXI_CQ_FILL_PERCENT=20
export FI_MR_CACHE_MONITOR=disabled
export FI_CXI_OVFLOW_BUF_SIZE=8388608
export DARSHAN_LOGFILE=/lus/gila/projects/Aurora_deployment/pkcoff/tarurundir/darshan.log
LD_PRELOAD=/lus/gila/projects/Aurora_deployment/pkcoff/tarurundir/libdarshan.so:/lus/gila/projects/Aurora_deployment/pkcoff/tarurundir/libhdf5.so:/lus/gila/projects/Aurora_deployment/pkcoff/tarurundir/libpnetcdf.so mpiexec -np 16 -ppn 16 --cpu-bind=verbose,list:4:56:5:57:6:58:7:59:8:60:9:61:10:62:11:63 --no-vni -envall -genvall /soft/tools/mpi_wrapper_utils/gpu_tile_compact.sh ./hdf5Exerciser --numdims 3 --minels 128 128 128 --nsizes 1 --bufmult 2 2 2 --metacoll --addattr --usechunked --maxcheck 100000 --fileblocks 128 128 128 --filestrides 128 128 128 --memstride 128 --memblock 128
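The romio_hints file referenced by ROMIO_HINTS above lives under my run directory and is not reproduced here; ROMIO reads such a file as one "hint_name value" pair per line. For anyone reproducing without access to that path, a stand-in hints file could be created like this (the specific hints and values are an assumption for illustration, not the contents of my file):

# one ROMIO hint per line: "hint_name value" (values below are illustrative)
cat > romio_hints <<'EOF'
romio_cb_write enable
romio_cb_read enable
cb_buffer_size 16777216
EOF
# point ROMIO at the stand-in hints file
export ROMIO_HINTS=$PWD/romio_hints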

Then, to get the darshan text file, run:

./darshan-parser darshan.log > darshan.txt
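
The counters quoted above can then be pulled out of the text dump with a plain grep (counter names as they appear in the darshan-parser output):

grep -E 'POSIX_F_(READ|WRITE)_TIME|MPIIO_F_WRITE_TIME' darshan.txt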