
eventloop blocked on queue.get on subprocess death #98

Open
neilfulwiler opened this issue Jun 4, 2021 · 1 comment
@neilfulwiler

Description

I think what I'm seeing is that the event loop gets blocked on queue.get() when the subprocess dies while sending data through the result queue. The repro below shows the process blocking the event loop (stuck in connection._recv) and failing to make any more progress. It seems that Queue.get in multiprocessing.connection performs several recv() calls, and if the subprocess dies in the middle of a send to the pipe, the reader can hang. Normally this would be guarded against by an EOF on the receive end of the pipe, but we don't (and can't) close the writer end of the pipe in the parent process, because we need to pass it into new subprocesses.

I see that aiomultiprocess attempts to detect dead processes and restart them; is that detection just best effort, i.e. is this hang expected behavior?

minimal repro:

import asyncio
import os
from aiomultiprocess import pool


async def f():
    # Return a payload large enough that sending the result spans
    # multiple writes to the underlying pipe.
    return ["absc" * 1000000]


async def g():
    # Let the other tasks start sending results, then SIGKILL the
    # worker process, potentially mid-send.
    await asyncio.sleep(0.05)
    os.kill(os.getpid(), 9)


async def hello():
    # Heartbeat task: it stops printing once the event loop blocks.
    while True:
        print('still alive')
        await asyncio.sleep(1)


if __name__ == '__main__':
    async def main():
        asyncio.create_task(hello())
        async with pool.Pool(processes=1) as p:
            await asyncio.gather(
                p.apply(f, ()),
                p.apply(f, ()),
                p.apply(f, ()),
                p.apply(f, ()),
                p.apply(f, ()),
                p.apply(f, ()),
                p.apply(g, ()),
            )


    asyncio.run(main())
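As context (not part of the repro), a stopgap sketch along these lines keeps the event loop itself responsive: poll the queue with a timeout from an executor thread and check worker liveness between polls. `get_or_fail` is a hypothetical helper, not aiomultiprocess API:

```python
import asyncio
import queue as queue_mod


async def get_or_fail(q, proc=None, poll=0.1):
    # Hypothetical helper: poll q.get() with a timeout in an executor
    # thread instead of blocking the event loop on an indefinite get.
    # Between polls, fail fast if the producing process has died.
    loop = asyncio.get_running_loop()
    while True:
        try:
            # q.get(block=True, timeout=poll) runs in a thread, so other
            # tasks on the loop keep making progress while we wait.
            return await loop.run_in_executor(None, q.get, True, poll)
        except queue_mod.Empty:
            if proc is not None and not proc.is_alive():
                raise RuntimeError("worker died before sending a result")
```

This doesn't unstick a get() that has already started reading a half-written message (that recv still hangs, just in the executor thread), but clean worker deaths surface as an exception instead of a silently frozen loop.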

Details

  • OS:
  • Python version: 3.8.9
  • aiomultiprocess version: 0.8.0
  • Can you repro on master? yes
  • Can you repro in a clean virtualenv? yes
@hardik1997

I'm also facing a similar issue. I think it can be solved by using Manager().Queue instead of multiprocessing.Queue, as noted in the official multiprocessing docs.
Refs: https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Process.terminate
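To illustrate the suggestion (a standalone sketch, not aiomultiprocess code): a Manager().Queue proxies puts and gets through a separate server process over its own connection, so a worker dying cannot leave a half-written message on the pipe the parent reads from:

```python
import multiprocessing as mp


def worker(q):
    # Send a payload large enough to span multiple pipe writes,
    # like f() in the repro above.
    q.put(["absc" * 1000000])


def demo():
    with mp.Manager() as manager:
        # Proxied through the manager's server process, not a raw pipe
        # shared directly with the worker.
        q = manager.Queue()
        p = mp.Process(target=worker, args=(q,))
        p.start()
        p.join()
        # The parent talks to the manager, not to the dead worker's pipe,
        # so this get() completes (or times out) cleanly.
        return len(q.get(timeout=5)[0])


if __name__ == "__main__":
    print(demo())  # 4000000
```

The trade-off is an extra round trip through the manager process for every put/get, so throughput is lower than with a raw multiprocessing.Queue.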
