
Reading backend while it is being written sometimes throws an error #389

Thalos12 opened this issue Jun 10, 2021 · 8 comments

@Thalos12

Thalos12 commented Jun 10, 2021

General information:

  • emcee version: 3.0.2
  • platform: Ubuntu 18.04
  • installation method (pip/conda/source/other?): conda

Problem description:

Expected behavior:

The backend (HDF5 file) can be read with no errors while the chain is running and the backend is being written.

Actual behavior:

The process writing to the backend sometimes raises an error when another process is trying to read the HDF5 file.
The error, copied from the shell, is the following:

Traceback (most recent call last):
  File "/home/mazzi/miniconda3/envs/pylegal/lib/python3.9/site-packages/h5py/_hl/files.py", line 202, in make_fid
    fid = h5f.open(name, h5f.ACC_RDWR, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 96, in h5py.h5f.open
OSError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable')

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/mazzi/Documenti/DOTTORATO/Progetti/sfhchain/code/mcmc.py", line 741, in <module>
    mcmc(settings)
  File "/home/mazzi/Documenti/DOTTORATO/Progetti/sfhchain/code/mcmc.py", line 296, in mcmc
    for sample in sampler.sample(pos[region_idx, :, :], iterations=STEPS, skip_initial_state_check=True, progress=False):
  File "/home/mazzi/miniconda3/envs/pylegal/lib/python3.9/site-packages/emcee/ensemble.py", line 351, in sample
    self.backend.save_step(state, accepted)
  File "/home/mazzi/miniconda3/envs/pylegal/lib/python3.9/site-packages/emcee/backends/hdf.py", line 206, in save_step
    with self.open("a") as f:
  File "/home/mazzi/miniconda3/envs/pylegal/lib/python3.9/site-packages/emcee/backends/hdf.py", line 67, in open
    f = h5py.File(self.filename, mode)
  File "/home/mazzi/miniconda3/envs/pylegal/lib/python3.9/site-packages/h5py/_hl/files.py", line 424, in __init__
    fid = make_fid(name, mode, userblock_size,
  File "/home/mazzi/miniconda3/envs/pylegal/lib/python3.9/site-packages/h5py/_hl/files.py", line 204, in make_fid
    fid = h5f.create(name, h5f.ACC_EXCL, fapl=fapl, fcpl=fcpl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 116, in h5py.h5f.create
OSError: Unable to create file (unable to open file: name = 'results/DEBUG/chain-allstars_0000.hdf5', errno = 17, error message = 'File exists', flags = 15, o_flags = c2)

What have you tried so far?:

I tried setting read_only=True when instantiating the HDFBackend in the script that tries to read the backend, but the problem was not solved.

Minimal example:

Run a chain with writer.py and, while it is running, read the backend repeatedly with reader.py. After a few tries the error should appear.

  • writer.py
import time

import emcee
import numpy as np


def lnprob(x):
    # Flat log-probability; the sleep slows the chain down so there is time to read the backend.
    time.sleep(0.01)
    return 0.0


nwalkers = 100
nsteps = 10000

backend = emcee.backends.HDFBackend('backend.h5')
backend.reset(nwalkers, 1)

sampler = emcee.EnsembleSampler(nwalkers, 1, lnprob, backend=backend)

pos0 = np.ones(nwalkers) + (np.random.random(nwalkers) - 0.5) * 2e-3
print(pos0.shape)
sampler.run_mcmc(pos0[:, None], nsteps, progress=True)
  • reader.py
import emcee

backend = emcee.backends.HDFBackend('backend.h5',read_only=True)
chain = backend.get_chain()

Edit for the sake of completeness: while the example above does not use multiprocessing, my actual code does, and I see the error both with and without multiprocessing.

@axiezai

axiezai commented Jun 10, 2021

Hi,

I am running into the same error when launching a parallel sampling script with mpiexec using schwimmbad's MultiPool, or when I run mpirun python sampling_script.py with multiprocessing's pool.

I am assuming this is because each worker is trying to access the same file and the workers are conflicting with each other? This error does NOT appear when I simply call python sampling_script.py in the terminal; it only happens when I use mpiexec or mpirun.

I see that the original issue does not use any parallelization, so I'm not sure if I'm helping or should create a new issue.

My python code is simply:

# Lots of set up like load in data and define likelihood function

if __name__ == '__main__':
    file_name = '../data/sub-{}_mcmc_fit.h5'.format(sub_id)
    backend = emcee.backends.HDFBackend(file_name)
    backend.reset(nwalkers, ndim)

    # initialize parallel processing samplers:
    with MultiPool() as pool:
        sampler = emcee.EnsembleSampler(
            nwalkers,
            ndim,
            log_probability,
            pool=pool,
            backend=backend
        )
        sampler.run_mcmc(pos, nsteps, progress=True);

EDIT:
I just realized my error is related to #310 (comment), which could be a solution for saving backends while using mpiexec.

@dfm
Owner

dfm commented Jun 11, 2021

@Thalos12: Thanks for the detailed code! I'm not sure if I have too much to suggest here because this isn't really a supported use case for this backend and it looks like it's a deeper h5py issue rather than something specific to the emcee implementation, but I could be wrong. I'm happy to leave this open if someone wants to try to build support for this workflow.

@axiezai: I think that your issue is not related. Instead, it looks like you've forgotten to include:

if not pool.is_master():
    pool.wait()
    sys.exit(0)

Which is required for use of the MPIPool (see docs here and here).

@Thalos12
Author

Hi @dfm, I understand that it is not a supported use case, but it would be useful to me because I have long-running chains and I would like to check from time to time how they are performing. After reading a bit about how HDF5 works, I found that it has a Single Writer Multiple Reader (SWMR) mode (https://docs.h5py.org/en/stable/swmr.html) that might be the solution I am looking for.
There are, however, a few caveats:

  • the SWMR mode requires at least HDF5 1.10
  • the SWMR mode has to be activated in two different ways depending on whether the file is read or written

If you are not against it, I would like to try to add support for this; I think it might be helpful.
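
For reference, here is a rough sketch of the two activation paths using plain h5py (illustrative only, not the emcee backend; the dataset name and shapes are made up):

import h5py
import numpy as np

# Writer: create the file with a recent file format, then switch on SWMR.
f = h5py.File("backend.h5", "w", libver="latest")
chain = f.create_dataset("chain", shape=(0, 100, 1), maxshape=(None, 100, 1), dtype="f8")
f.swmr_mode = True  # readers may now open the file while it stays open here

for step in range(10):
    chain.resize(chain.shape[0] + 1, axis=0)
    chain[-1] = np.random.randn(100, 1)
    chain.flush()  # make the new step visible to readers

f.close()

# Reader (normally a separate process): must open the file *after* the writer enabled SWMR.
g = h5py.File("backend.h5", "r", libver="latest", swmr=True)
data = g["chain"]
data.refresh()  # pick up anything appended since the file was opened
print(data.shape)
g.close()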

@axiezai

axiezai commented Jun 11, 2021

@dfm thank you for pointing this out, I totally missed it... I edited my code accordingly; it turns out the workers were just not waiting for the master process, and I also had to define the backend and a few other things inside the pool. The following code now works:

import sys

import emcee
import numpy as np
from schwimmbad import MPIPool  # assuming MPIPool comes from schwimmbad, as in the emcee docs

# data loading, `parameters`, `sub_id`, and the likelihood module `nmm` are defined above this point

if __name__ == '__main__':
    # initialize parallel processing samplers:
    with MPIPool() as pool:
        if not pool.is_master():
            pool.wait()
            sys.exit(0)

        # mcmc setup
        pos = parameters + 1e-2*np.random.randn(28,7)
        nwalkers, ndim = pos.shape
        nsteps = 50000

        # backend:
        file_name = '../data/sub-{}_mcmc_fit.h5'.format(sub_id)
        backend = emcee.backends.HDFBackend(file_name)
        backend.reset(nwalkers, ndim)

        sampler = emcee.EnsembleSampler(nwalkers, ndim, nmm.log_probability, pool=pool, backend=backend)
        sampler.run_mcmc(pos, nsteps, progress=True);

Just documenting this in case other new users run into the same problem with MPIPool.

@dfm
Owner

dfm commented Jun 11, 2021

@Thalos12: great! I'd be happy to review such a PR!

@Thalos12
Author

Hi @dfm, I played a bit with the SWMR mode and below is what I found.

To use SWMR the following has to happen:

  1. the writer opens (or creates) the file, switches it to SWMR mode, and keeps it open for writing;
  2. the reader opens the file (mode='r') and reads the data while the HDF5 file is being written.

What's important is that the reader must open the file after the writer.
In emcee the file is opened and closed at each step of the chain to save the position (HDFBackend.save_step). Therefore, it may happen that a reader opens the file just before the writer tries to append the latest step, crashing the writer. SWMR cannot prevent this, as it only avoids crashes when the file is read while it is being written.

Nonetheless, I might have found another solution in this thread. In short, the environment variable HDF5_USE_FILE_LOCKING can be set to "FALSE" for the reader only, to deactivate HDF5 file locking. This way the writer still holds the lock, but the reader does not need to acquire it, and if the file is being read before it is written to, the writer does not crash. However, SWMR still has to be activated to handle the case where the file is read while it is being written. Note: the user would have to set the flag in the script that reads the file.
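
For example, the reader script from the minimal example above could look something like this (a sketch; the safest approach is to set the variable before h5py is first imported, so the HDF5 library is guaranteed to see it):

import os

# Disable HDF5 file locking for this reader only, before h5py is imported.
os.environ["HDF5_USE_FILE_LOCKING"] = "FALSE"

import emcee  # imports h5py internally, so the variable must already be set

backend = emcee.backends.HDFBackend("backend.h5", read_only=True)
chain = backend.get_chain()
print(chain.shape)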

Unfortunately, even with HDF5 file locking disabled the reader can still crash occasionally, but the writer survives, which in my opinion is a reasonable trade-off.

I could make a PR in a few days if you are still interested in this feature.

@dfm
Owner

dfm commented Jul 12, 2021

@Thalos12: Thanks for looking into this in such detail! Yes - I would be very happy to have this implemented. Can you also include a tutorial in the PR? I'm happy to fill out the details and help with formatting if you can at least get the example implemented. Thanks again!

@Thalos12
Author

Thank you! I will add a Jupyter notebook with the example, and I will gladly take you up on the offer to help format it properly. I should be able to submit it in the next few days.
