Installation FAQ

Installation

From sources

Installation instructions for Linux and macOS are available online for the following releases:

From packages

The software is available via package managers on:

In a Docker container

Dockerfiles with fully functional build environments can be found for multiple Linux operating systems in espressomd/docker. There is one branch per ESPResSo minor release. The Docker images can be downloaded from GitHub (espressomd/packages) or DockerHub (espressomd).
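For example, the image used in the walkthrough below can be pulled directly from the GitHub container registry (the tag is the one pinned in the next code block):

docker pull ghcr.io/espressomd/docker/ubuntu:f7f8ef2c0ca93c67aa16b9f91785492fb04ecc1b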

Here is a general procedure to set up a Jupyter-ready ESPResSo installation in a Docker container that spawns a local server accessible from outside the container. If the installation commands or the Docker image get out of date, simply update this bash script based on the tutorials-samples-maxset job in the top-level .gitlab-ci.yml file. If you have a compatible NVIDIA GPU, you can make it visible by adding --runtime=nvidia or --gpus device=0 to the Docker command and setting the environment variable with_cuda=true to build ESPResSo with GPU support (see the example at the end of this section).

# start the container and expose the notebook port
docker run --user espresso -p 8888:8888 -it ghcr.io/espressomd/docker/ubuntu:f7f8ef2c0ca93c67aa16b9f91785492fb04ecc1b bash
# inside the container: install the Jupyter extensions used by the tutorials
pip3 install --user jupyter_contrib_nbextensions
jupyter contrib nbextension install --user
jupyter nbextension enable rubberband/main
jupyter nbextension enable exercise2/main
# fetch the sources and build ESPResSo with the CI helper script
git clone --depth=1 --recursive -b python https://github.com/espressomd/espresso.git
cd espresso
export CC=gcc-8 CXX=g++-8 myconfig=maxset with_cuda=false make_check_unit_tests=false make_check_python=false
export build_procs=$(nproc) check_procs=$(nproc)
bash maintainer/CI/build_cmake.sh
# convert the tutorials to notebooks and start the server
cd build
make tutorials
cd doc/tutorials
../../ipypresso notebook --ip 0.0.0.0 --no-browser

The output of the ipypresso command will show the localhost URL and the token, usually in this form:

  To access the notebook, copy and paste one of these URLs:
      http://127.0.0.1:8888/?token=d47146dbd0ad986250730bb091c991f5dadc3b6ec4c75e59
   or localhost:8888/?token=d47146dbd0ad986250730bb091c991f5dadc3b6ec4c75e59

Paste one of these URLs into your browser to connect to the local server. If port 8888 is already in use, you can map another host port with e.g. -p 8891:8888 in the Docker command; in that case the output of ipypresso will still show 8888 in the URL (the port inside the container) and you will have to replace it with 8891.
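Putting the GPU and port remarks together, here is a sketch of a modified first command (GPU index 0 and host port 8891 are illustrative):

docker run --user espresso --gpus device=0 -p 8891:8888 -it ghcr.io/espressomd/docker/ubuntu:f7f8ef2c0ca93c67aa16b9f91785492fb04ecc1b bash
export with_cuda=true   # replaces with_cuda=false in the export line above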

In a Gitpod environment

Users with a registered account on Gitpod can create a workspace suitable for building ESPResSo. The workspace automatically pulls the dependencies and builds the source code in the default configuration, which usually takes about 15 minutes. Here is the direct link:

https://gitpod.io/#https://github.com/espressomd/espresso

Once the build has completed, use the terminal to run simulation scripts:

$ ./pypresso ../samples/p3m.py

or start a JupyterLab session:

$ cd ${GITPOD_REPO_ROOT}/build/doc/tutorials
$ ../../ipypresso lab --NotebookApp.allow_origin="$(gp url 8888)" \
    --port=8888 --no-browser

In the bottom-right corner of the IDE, a notification box will appear to indicate that port 8888 is now active. Click the orange "Make public" button to open that port, then Ctrl+click one of the URLs in the terminal output to open JupyterLab in a pop-up window. Some web browsers may block the pop-up and ask you to confirm opening a URL that starts with https://8888-espressomd-espresso-; if that is the case, click "Allow" to open JupyterLab in a new browser tab. For more information, please refer to the relevant section in the user guide.

In a Binder instance

The Binder platform offers free spot instances limited to 1 core and 2 GB of RAM. No user account is needed. Here is the direct link:

https://mybinder.org/v2/gh/jngrad/espresso-binder/HEAD

The workspace automatically pulls a Docker image containing a pre-built version of ESPResSo 4.2.0 in its default configuration, and then installs its Python dependencies with pip. This usually takes less than 4 minutes. In the main Launcher window, you can open a terminal to run simulation scripts:

$ python samples/p3m.py

Note how python is used instead of pypresso. To run the tutorials, navigate to the tutorials/exercises folder and open one of the notebooks. The solutions are hidden but can be revealed by clicking the "Show solutions" buttons. The other folder, tutorials/solutions, contains the same notebooks with the solutions already copied into the code cells. For more details, please refer to the July 14 2022 mailing list thread.

Troubleshooting

CMake issues

CMake version is too old

You can install a more recent version of CMake with:

python3 -m pip install --user 'cmake>=3.29'
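On Linux, pip installs the cmake executable to ~/.local/bin, which must precede the system directories in your path for the new version to be picked up (a quick check; the path assumes a default pip user scheme):

export PATH="${HOME}/.local/bin:${PATH}"
cmake --version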

If you need ccmake (the curses interface to CMake), you will need to compile CMake from source, for example as sketched below.
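A minimal sketch, assuming CMake 3.29.0 and a user-local prefix (adjust both as needed); the curses dialog additionally requires the ncurses development headers:

wget https://github.com/Kitware/CMake/releases/download/v3.29.0/cmake-3.29.0.tar.gz
tar xzf cmake-3.29.0.tar.gz
cd cmake-3.29.0
./bootstrap --prefix="${HOME}/bin/cmake" -- -DBUILD_CursesDialog=ON
make -j$(nproc)
make install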

CMake cannot find ScaFaCoS

Follow the installation instructions in build-and-install-scafacos.sh. The --enable-portable-binary flag disables several architecture-specific optimizations.

This will install ScaFaCoS system-wide. To install it in a custom directory, add --prefix=${HOME}/bin/scafacos to the ./configure command, remove the ldconfig command, and add the following to your ~/.bashrc file after the installation:

export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}${HOME}/bin/scafacos/lib"
export PKG_CONFIG_PATH="${PKG_CONFIG_PATH:+$PKG_CONFIG_PATH:}${HOME}/bin/scafacos/lib/pkgconfig"
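For reference, a sketch of the custom-prefix variant (the remaining configure flags from build-and-install-scafacos.sh are omitted here and assumed unchanged):

./configure --enable-portable-binary --prefix="${HOME}/bin/scafacos"
make -j$(nproc)
make install   # no ldconfig needed for a user-local prefix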

Compilation issues

Compiler error with GCC 11

ESPResSo 4.1.3 and 4.1.4 are missing standard library includes that GCC 11 no longer provides transitively. Apply the patch for 4.1.3 (rpms/espresso@77dad47a) or 4.1.4 (rpms/espresso@e7dd6efa).

Compiler error with boost::optional

Error message:

/usr/include/boost/serialization/optional.hpp:98:8: error: ‘version’ is not a class template
   98 | struct version<boost::optional<T> > {
or
/usr/include/boost/serialization/optional.hpp:98:8: error: explicit specialization of undeclared template struct ‘version’
/usr/include/boost/serialization/version.hpp:36:8: error: redefinition of ‘version’

ESPResSo 4.1.3 and 4.1.4 are affected by a bug in Boost 1.74.0. Apply the patch in #3864 (comment).

Cannot import io when building the Cython code

Error message: Fatal Python error: Py_Initialize: can't initialize sys standard streams (#3149).

This is due to a faulty $PYTHONPATH environment variable. This typically happens when writing the following statement in ~/.bashrc:

export PYTHONPATH=$PYTHONPATH:$HOME/bin/python-modules

If $PYTHONPATH was initially undefined or equal to the empty string, it is now equal to :/home/user/bin/python-modules. The colon symbol is used to separate folders. A leading colon means an empty string is part of the $PYTHONPATH, which is interpreted as the current working directory. The same happens with a trailing colon.

Here is a one-liner that appends the folder without introducing a leading colon when $PYTHONPATH is undefined or empty:

export PYTHONPATH="${PYTHONPATH:+$PYTHONPATH:}${HOME}/bin/python-modules"

This works in Bash, Dash, Z shell, and other POSIX-compatible shells.
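To verify that the expansion behaves as intended, run it twice from a clean state (paths are illustrative):

unset PYTHONPATH
export PYTHONPATH="${PYTHONPATH:+$PYTHONPATH:}${HOME}/bin/python-modules"
echo "$PYTHONPATH"   # ${HOME}/bin/python-modules, no leading colon
export PYTHONPATH="${PYTHONPATH:+$PYTHONPATH:}${HOME}/other-modules"
echo "$PYTHONPATH"   # both folders, colon-separated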

Runtime issues

Import errors due to missing symbols

The following error message may appear when converting Cython modules to Python modules:

Traceback (most recent call last):
  File "simulation_script.py", line 6, in <module>
    import espressomd.reaction_ensemble
ImportError: /home/user/espresso/build/src/python/espressomd/reaction_ensemble.so: undefined symbol:
_ZN15ReactionMethods17ReactionAlgorithm12add_reactionEdRKSt6vectorIiSaIiEES5_S5_S5_

When Cython source files (.pyx files) are deleted and replaced with Python files, the build folder will contain a mix of Python modules (.py files) and Cython shared objects (.so files) with the same base name but different extensions, e.g. reaction_ensemble.py and reaction_ensemble.so. When importing espressomd.reaction_ensemble, the Python interpreter may resolve the outdated .so file instead of the .py file, in which case it will attempt to load symbols that no longer exist in the ESPResSo core. To remedy this issue, delete the outdated .so file.

Likewise, when Cython include files (.pxd files) are deleted, all existing .so files in the build folder will attempt to load symbols that may no longer exist in the ESPResSo core during import, because .pxd files are included in all .pyx files. This can lead to confusing error messages about missing CUDA shared objects or about missing symbols for a feature X in an unrelated module Y. Since CMake doesn't explicitly track .pxd files, it cannot detect their deletion and therefore will not flag the existing .so files as out-of-date. To remedy this issue, whenever a .pxd file is deleted, clear all generated Cython data with rm src/python/espressomd/*.cpp and re-build ESPResSo, as sketched below.
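A sketch of the cleanup covering both situations, run from the build folder (a full rebuild of the affected modules follows):

# remove stale shared objects that would shadow new .py modules
find src/python/espressomd -name '*.so' -delete
# after a .pxd deletion, also clear the generated Cython sources
rm -f src/python/espressomd/*.cpp
make -j$(nproc)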

Import errors due to Open MPI processor affinity bug

The following error message may appear when importing espressomd in an interactive Python environment:

--------------------------------------------------------------------------
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  Setting processor affinity failed failed
  --> Returned value Error (-1) instead of ORTE_SUCCESS
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[coyote10:741448] Local abort before MPI_INIT completed completed successfully,
but am not able to aggregate error messages, and not able to guarantee that all
other processes were killed!

Open MPI versions 4.x have a regression that prevents running an MPI program in singleton mode, i.e. without mpiexec or mpirun, on certain NUMA architectures when the MCA binding policy is set to "numa". This bug is known to affect AMD Ryzen and AMD EPYC processors. The issue can be fixed by changing the MCA binding policy, for example to l3cache or none, using the following environment variables:

export OMPI_MCA_hwloc_base_binding_policy="l3cache"
export OMPI_MCA_rmaps_base_mapping_policy="l3cache"

Unit tests runtime error about libcuda.so

The following error message can appear if your Open MPI library was built with CUDA support but your hardware doesn't have a CUDA-capable GPU:

--------------------------------------------------------------------------
The library attempted to open the following supporting CUDA libraries,
but each of them failed.  CUDA-aware support is disabled.
libcuda.so.1: cannot open shared object file: No such file or directory
libcuda.dylib: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.so.1: cannot open shared object file: No such file or directory
/usr/lib64/libcuda.dylib: cannot open shared object file: No such file or directory
If you are not interested in CUDA-aware support, then run with
--mca opal_warn_on_missing_libcuda 0 to suppress this message.  If you are interested
in CUDA-aware support, then try setting LD_LIBRARY_PATH to the location
of libcuda.so.1 to get passed this issue.
--------------------------------------------------------------------------

If this message is generated by a Boost unit test, make sure that the test was properly linked against Boost::mpi and MPI::MPI_CXX. Otherwise, you can remedy the issue with export OMPI_MCA_opal_cuda_support=0 or by reconfiguring the ESPResSo tests with cmake . -D MPIEXEC_PREFLAGS="--mca;opal_warn_on_missing_libcuda;0", as shown below.
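For convenience, the two workarounds as copy-paste-ready commands (taken from the paragraph above):

# disable Open MPI CUDA support at runtime:
export OMPI_MCA_opal_cuda_support=0
# or silence the warning when reconfiguring the tests:
cmake . -D MPIEXEC_PREFLAGS="--mca;opal_warn_on_missing_libcuda;0"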