mca_pml_ob1_recv_frag_callback_match occasional segfault #12495
Comments
Rebuilt slurm and openmpi to use PMIx 4.2.9 and dropped the PMIX_MCA_gds=hash setting. Ran ~3000 hello_world jobs in this environment without seeing any core dumps.
Just to be clear: you originally said you ran ~3000 jobs with this setup and saw no core dumps. I fail to see a connection between PMIx and ob1/recv being caught in a segfault; we don't have anything to do with the MPI message exchange.
My apologies - blasted github had me logged into a different account when I wrote the above note. Sigh.
No worries - thanks for taking a look at this for me. Yep, the new testing using an srun launch with a PMIx 4.2.9-based slurm/openmpi did not see any core dumps in ~3000 runs. I'll stick with this new setup for now since things seem happier. If you can think of any env variables I can set to provide more debug information, please let me know and I can give them a try and report back what I find.
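For anyone landing here later: the usual starting point is to raise the verbosity of the frameworks named in this report. Which parameters actually matter for this crash is my assumption, but the OMPI_MCA_<framework>_base_verbose environment-variable convention itself is standard Open MPI practice. A sketch:

```shell
# Hedged sketch: bump verbosity on the frameworks implicated in this report.
# The choice of frameworks is an assumption; the naming convention is standard.
export OMPI_MCA_pml_base_verbose=100    # pml/ob1, where the segfault was observed
export OMPI_MCA_btl_base_verbose=100    # byte transfer layers under ob1
export PMIX_MCA_gds_base_verbose=100    # PMIx gds, the framework behind the gds=hash workaround
# then launch as before, e.g.:  srun ./hello_mpi
echo "pml verbosity: $OMPI_MCA_pml_base_verbose"
```

The output is chatty at level 100, so it is best captured per-job and only for a reproducer-sized run.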
Gave this some thought. It still feels to me like there is something else in your environment causing the problem (the PMIx change being just a canary, or flat out a red herring), but absent more info, I have no idea how to pursue it.
One last note to add here before closing this one out and turning my focus to the Slurm/SchedMD side of the house. Two interesting things:
Turns out that I had disabled cgroups in my testing area earlier and forgotten about it. My comments above about PMIx impacting this issue should be ignored. Much more likely it was the change in my slurm configuration in my test environment that changed the launch behavior.
You may already know this, but be aware that SchedMD changed the …
@bhendersonPlano If this issue is not in OMPI but rather in Slurm or PMIx, can you please file it with the corresponding community and close it here?
I've started a thread on the slurm-users mailing list - hopefully someone will chime in there. I'll close this one out as it does not appear to be an Open MPI issue.
Background information
What version of Open MPI are you using? (e.g., v4.1.6, v5.0.1, git branch name and hash, etc.)
5.0.3
Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)
Self-compiled against hwloc 2.10.0, PMIx 5.0.2, and Slurm 23.11.05.
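For reproducibility, a build matching those versions would be configured roughly as follows. The install prefixes are placeholders of mine, not taken from this report; --with-hwloc, --with-pmix, and --with-slurm are the standard Open MPI configure options for pointing at external components.

```shell
# Hypothetical build sketch matching the versions listed above.
# Prefixes are placeholders, not the reporter's actual paths.
./configure --prefix=/opt/openmpi-5.0.3 \
    --with-hwloc=/opt/hwloc-2.10.0 \
    --with-pmix=/opt/pmix-5.0.2 \
    --with-slurm
make -j && make install
```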
Please describe the system on which you are running
Details of the problem
We are running 8-node jobs with 8 ranks per node and seeing an occasional segmentation fault during MPI_Init. When it happens, it affects some number of ranks on a single node: sometimes just one rank aborts, but we've seen as many as 6, all on the same node. We are using srun for launch with the environment variable PMIX_MCA_gds=hash as a workaround for another issue.
Stacktrace shows:
The system core file size limit is set to unlimited, but I didn't find any core files lying around.
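A common reason for "unlimited core size but no core files" is that on systemd-based distros the kernel pipes cores to systemd-coredump rather than writing ./core in the working directory. A quick check (assuming a Linux host):

```shell
# If core_pattern starts with "|", cores are piped to a helper such as
# systemd-coredump and never appear as ./core files on disk.
cat /proc/sys/kernel/core_pattern
ulimit -c   # must be "unlimited" inside the job's environment, not just the login shell
# On systemd hosts, piped cores can be listed with: coredumpctl list
```

Note that the limit has to be in effect in the environment Slurm gives the job step, which is not necessarily the same as the submitting shell.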
I tried some experiments this afternoon and ran 1000 back to back hello_mpi jobs using srun launch - 6 of them hit this issue. I then ran over 3000 salloc + mpirun hello_mpi jobs and didn't see the issue.
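For anyone trying to reproduce, the srun side of that experiment presumably looked something like the following. The node and rank counts come from this report; the loop script itself is my sketch, and hello_mpi stands in for the actual test binary.

```shell
# Sketch of the failing-case loop: 8 nodes x 8 ranks per node, srun launch,
# with the gds=hash workaround in place (as described in the report).
export PMIX_MCA_gds=hash
for i in $(seq 1 1000); do
    srun -N 8 --ntasks-per-node=8 ./hello_mpi || echo "run $i failed" >> failures.log
done
```

The mpirun comparison runs would replace the srun line with an mpirun launch inside an salloc allocation.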
Thoughts on next steps in debugging this issue? Maybe I should consider dropping back to PMIx 4.2.9 and see how that goes?