-
I'm trying to get mpi4py installed on a Linux HPC where the python comes from conda, but the MPI I want to use does not. Unfortunately, this leads to the need to install mpi4py with pip, which I gather from previous issues isn't entirely well behaved when python comes from conda (but at least the
I get (among other lines)
instead of the path that's consistent with my
Is there any obvious way to figure out why it's linking to the wrong MPI library? If I run anyway I get a missing symbol at runtime, which isn't particularly surprising if it built with the system openmpi but is running with the conda one, or vice versa.
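For what it's worth, here is a minimal check of what the built extension actually links against (just a sketch; nothing system-specific assumed beyond a working mpi4py import):
# Locate the compiled extension module that mpi4py actually loads
python -c "from mpi4py import MPI; print(MPI.__file__)"
# See which libmpi the dynamic linker resolves for it
ldd "$(python -c 'from mpi4py import MPI; print(MPI.__file__)')" | grep -i mpi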
-
And I checked that the desired MPI's path is ahead of conda's in
Could the issue be that the conda path is explicitly added by the
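A quick way to double-check the ordering, as a sketch:
# Which compiler wrapper a pip build would pick up
which mpicc
# Inspect the search order; the desired MPI's bin should come before conda's
echo "$PATH" | tr ':' '\n' | grep -n -E 'mpi|conda'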
-
I also tried to modify the rpath with I guess that at the
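In case it helps, one way to inspect and rewrite the rpath after the fact is patchelf (assuming it is available on the system; the lib path below is a placeholder):
# Find the built extension, then inspect and rewrite its rpath
EXT=$(python -c "from mpi4py import MPI; print(MPI.__file__)")
patchelf --print-rpath "$EXT"
patchelf --set-rpath /path/to/mpi_root/lib "$EXT"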
-
Sorry, I am not sure what you're trying to achieve. Let me ask this: Have you tried the "external MPI" trick that conda-forge designed to support HPC clusters?
https://conda-forge.org/docs/user/tipsandtricks.html#using-external-message-passing-interface-mpi-libraries
Basically, if you are using MPICH/Open MPI (or their ABI-compatible variants), you can have an empty MPI package with real mpi4py & co installed from conda-forge. Then, you just need to ensure your own MPI implementation is supplied and locatable by the dynamic linker. No need to rebuild anything.
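A minimal sketch of what that looks like, assuming an MPICH-compatible MPI (x.y.z is the placeholder from those docs, and the lib path is illustrative):
# Install the external (stub) MPI package plus mpi4py from conda-forge
conda install -c conda-forge mpi4py "mpich=x.y.z=external_*"
# Make sure your own MPI's libraries are visible to the dynamic linker
export LD_LIBRARY_PATH=/path/to/mpi_root/lib:$LD_LIBRARY_PATH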
-
I'm trying to get an mpi4py that uses the centrally installed MPI instead of the one that's installed as part of conda. It sounds like the trick you linked to should do exactly what I want, if I can get it to work (though I worry, because the conda environment already installed at the HPC has "real" conda MPI packages installed). However, it's not working. I get
Are those "x.y.z" placeholders, or are they meant to be used literally, like I tried?
-
x.y.z should not be taken literally; it should be the version you need. For example, replace it with 3.* to get MPICH 3 (
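Concretely, something along these lines (standard conda MatchSpec syntax):
# Any MPICH 3.x build of the external stub package
conda install -c conda-forge "mpich=3.*=external_*"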
-
@leofang Maybe we should convert this issue to a discussion under the Q&A category?
-
Is there any way to get conda to ignore the system-wide "real" mpich installation? I activated an environment and installed the "external" mpich version, as the hint suggested, but when I pip install mpi4py it still resolves the
Or is there no way to do this with the centrally installed conda, because that one already includes a real mpich, and I have to ignore it and install my own entire conda/python ecosystem (unless I can convince the sysadmins to replace the real mpich with the fake one)?
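One thing that might help, if the environment is writable, is pinning mpich to the external build so the solver never swaps the real one back in. This is only a sketch of conda's pin-file mechanism, not something I have verified in this setup:
# Pin specs live in conda-meta/pinned inside the environment
echo "mpich=*=external_*" >> "$CONDA_PREFIX/conda-meta/pinned"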
-
The MPI I want to use is not provided through conda, but the centrally installed conda does have a "real" mpich package installed. I've set aside for now my earlier attempt to install mpi4py using pip, and I tried to follow your suggestion for mpi4py via conda. However, now I'm getting a conda conflict. I ran your
followed by a few tens of lines of conflicts, listing many packages, and ending with
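For what it's worth, a fresh personal environment would sidestep conflicts with the centrally installed packages entirely; a sketch (the env name is arbitrary):
# Build a clean env where the external stub is the only mpich present
conda create -n extmpi -c conda-forge python mpi4py "mpich=3.*=external_*"
conda activate extmpi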
-
OK, after doing
The install of mpi4py still fails with (I've dropped about a couple hundred copies of the same output line):
-
@bernstei What happens if you just remove mpi4py and any MPI package from your environment, and then you just
EDIT: Also, what version of MPICH do you have on your HPC system?
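Something like the following, with the compiler wrapper path adjusted to your system (just a sketch of the sequence I mean; MPICC is the build-time override mpi4py's install docs describe):
# Drop any conda MPI bits first so nothing stale gets linked against
conda remove mpi4py mpich
# Rebuild mpi4py from source against the desired compiler wrapper
env MPICC=/path/to/mpi_root/bin/mpicc python -m pip install --no-cache-dir mpi4py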
-
It is; it is totally expected, as it is the SONAME MPICH uses for its libraries. It would be a very bad idea to change it. There is a trivial workaround, though. I cannot assert it would work under complicated scenarios involving multiple packages, but you can try it at your own risk. The workaround is as follows (you can use a different folder than your $HOME, I just use it for convenience):
LIBMPI_SO=/path/to/mpi_root/lib/libmpi.so
mkdir -p $HOME/lib
ln -s $LIBMPI_SO $HOME/lib/libmpi.so.12
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/lib
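To verify the workaround took effect, something like this should report your MPI rather than conda's (a sketch; the grep pattern is just illustrative):
# The loader should now resolve libmpi.so.12 to the symlinked library
ldd "$(python -c 'from mpi4py import MPI; print(MPI.__file__)')" | grep libmpi
# And the runtime should identify itself as the desired implementation
python -c "from mpi4py import MPI; print(MPI.Get_library_version())"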
-
This is a standard HPE (formerly SGI, I guess, since I believe mpt was inherited from SGI as per https://downloads.linux.hpe.com/SDR/project/mpi/) installation in this case, so while it may or may not be a good idea, I don't think it's a random person making a personal decision.