
error: _multiprocessing.SemLock( FileNotFoundError: [Errno 2] No such file or directory #33

Open
TYTaO opened this issue Jun 25, 2022 · 1 comment

Comments


TYTaO commented Jun 25, 2022

I successfully ran the PyTorch example. But when I use the DataLoader `prefetch_factor` option in the PyTorch example, like this:

train_loader = torch.utils.data.DataLoader(
        datasets.MNIST('data', train=True, download=True,
                       transform=transforms.Compose([
                           transforms.ToTensor(),
                           transforms.Normalize((0.1307,), (0.3081,))
                       ])),
        batch_size=BATCH_SIZE, shuffle=True, num_workers=2, prefetch_factor=2)

I get this error:

Traceback (most recent call last):
  File "//./pytorchexample.py", line 112, in <module>
    train(model, DEVICE, train_loader, optimizer, epoch)
  File "//./pytorchexample.py", line 49, in train
    for batch_idx, (data, target) in enumerate(train_loader):
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 368, in __iter__
    return self._get_iterator()
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 314, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/dataloader.py", line 900, in __init__
    self._worker_result_queue = multiprocessing_context.Queue()  # type: ignore[var-annotated]
  File "/usr/lib/python3.9/multiprocessing/context.py", line 103, in Queue
    return Queue(maxsize, ctx=self.get_context())
  File "/usr/lib/python3.9/multiprocessing/queues.py", line 43, in __init__
    self._rlock = ctx.Lock()
  File "/usr/lib/python3.9/multiprocessing/context.py", line 68, in Lock
    return Lock(ctx=self.get_context())
  File "/usr/lib/python3.9/multiprocessing/synchronize.py", line 162, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
  File "/usr/lib/python3.9/multiprocessing/synchronize.py", line 57, in __init__
    sl = self._semlock = _multiprocessing.SemLock(
FileNotFoundError: [Errno 2] No such file or directory
dimakuv (Contributor) commented Jun 27, 2022

Duplicate of gramineproject/graphene#2689

Unfortunately, this is a known issue -- Gramine doesn't support Python's multiprocessing package. This is because Gramine currently doesn't support Sys-V semaphores, which the multiprocessing package requires.
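Until semaphore support lands in Gramine, one possible workaround is to keep the DataLoader single-process (`num_workers=0`), since it only creates multiprocessing queues and locks when worker processes are requested. A minimal sketch of a runtime check, where the helper name `multiprocessing_available` is my own (hypothetical) naming; note that `prefetch_factor` may only be passed when `num_workers > 0`:

```python
import multiprocessing


def multiprocessing_available() -> bool:
    """Return True if multiprocessing primitives work in this runtime.

    Under Gramine/SGX, creating a multiprocessing Lock raises
    FileNotFoundError because the underlying POSIX semaphore
    (backed by shared memory) cannot be created.
    """
    try:
        multiprocessing.Lock()
        return True
    except (FileNotFoundError, OSError):
        return False


# Use worker processes only when the runtime supports them.
num_workers = 2 if multiprocessing_available() else 0

# Build DataLoader kwargs accordingly; prefetch_factor is only
# valid together with num_workers > 0.
loader_kwargs = {"batch_size": 64, "shuffle": True, "num_workers": num_workers}
if num_workers > 0:
    loader_kwargs["prefetch_factor"] = 2
```

On a regular Linux host this keeps the original multi-worker configuration; inside Gramine it silently falls back to loading data in the main process, which is slower but avoids the `SemLock` failure.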

UPDATE (24 March 2023): Python's multiprocessing package actually uses POSIX semaphores (and thus shared memory), not Sys-V semaphores. See:

That's unfortunate, because implementing POSIX semaphores in Gramine/SGX would require allowing untrusted shared memory (/dev/shm), which will probably never happen...


2 participants