
RuntimeError: Acquire on closed pool when trying to use control inspect #4410

Closed
2 tasks done
jheld opened this issue Nov 28, 2017 · 3 comments

Comments

@jheld
Contributor

jheld commented Nov 28, 2017

Checklist

  • I have included the output of celery -A proj report in the issue.
    (if you are not able to do this, then at least specify the Celery
    version affected).

software -> celery:4.0.2 (latentcall) kombu:4.1.0 py:2.7.13 or (py:2.7.12)
billiard:3.5.0.3 redis:2.10.5
platform -> system:Darwin arch:64bit imp:CPython . (though usually: system:Linux arch:64bit, ELF)
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://localhost:6380/

BROKER_TRANSPORT_OPTIONS: {
'fanout_patterns': True, 'fanout_prefix': True}
CELERY_TASK_COMPRESSION: 'gzip'
CELERY_TIMEZONE: 'UTC'
CELERY_RESULT_SERIALIZER: 'json'
CELERY_BROKER_URL: u'redis://localhost:6380//'
CELERY_TASK_SERIALIZER: 'json'
CELERY_RESULT_EXPIRES: 60
CELERY_ACCEPT_CONTENT: ['application/json']
TIME_ZONE: 'UTC'
CELERY_MESSAGE_COMPRESSION: 'gzip'
CELERY_TASK_ALWAYS_EAGER: False
CELERY_RESULT_BACKEND: u'redis://localhost:6380/'

  • I have verified that the issue exists against the master branch of Celery.

Also occurs on celery 4.1.0.

Steps to reproduce

Try to use the control module. In my case, I'm calling active_queues(); a minimal sketch follows.
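
A minimal reproduction sketch (hedged: the app import path and the queue filtering are illustrative assumptions; only the inspect().active_queues() call is taken from the traceback below):

    import six

    from proj.celery import app as celery_app  # hypothetical app module


    def workers_on_queue(queue_name):
        # Intermittently raises RuntimeError('Acquire on closed pool');
        # see the traceback under "Actual behavior".
        active = celery_app.control.inspect().active_queues() or {}
        return [
            worker for worker, queues in six.viewitems(active)
            if any(q['name'] == queue_name for q in queues)
        ]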

Expected behavior

I expect that, so long as the system is in a good state, I should be able to get this info from the control module. I don't understand why the pool is sometimes closed and other times not.

Actual behavior

This might be the same issue as in #1839

The call raises a RuntimeError, so I am unable to query Celery for the data I need.

File "/.../tasks.py", line 80, in workers_on_queue
    for k, v in six.viewitems(celery_app.control.inspect().active_queues()):
  File "/.../lib/python2.7/site-packages/celery/app/control.py", line 116, in active_queues
    return self._request('active_queues')
  File "/.../lib/python2.7/site-packages/celery/app/control.py", line 81, in _request
    timeout=self.timeout, reply=True,
  File "/.../lib/python2.7/site-packages/celery/app/control.py", line 436, in broadcast
    limit, callback, channel=channel,
  File "/.../lib/python2.7/site-packages/kombu/pidbox.py", line 315, in _broadcast
    serializer=serializer)
  File "/.../lib/python2.7/site-packages/kombu/pidbox.py", line 285, in _publish
    with self.producer_or_acquire(producer, chan) as producer:
  File "/usr/local/Cellar/python/2.7.13_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 17, in __enter__
    return self.gen.next()
  File "/.../lib/python2.7/site-packages/kombu/pidbox.py", line 247, in producer_or_acquire
    with self.producer_pool.acquire() as producer:
  File "/.../lib/python2.7/site-packages/kombu/resource.py", line 74, in acquire
    raise RuntimeError('Acquire on closed pool')

This only happens when we're using the control module, and only intermittently; sometimes it works fine.

This code path was even wrapped in a retry loop, and it still failed on every attempt.

@matburt

matburt commented Nov 28, 2017

I've been seeing this all over the place after upgrading from celery 3.1 to 4. Pretty much anywhere I need to call into the celery app controller, a workaround like this is needed:

ansible/awx@9ee77a9
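
A sketch of the general shape of such a workaround (an assumption on my part, not necessarily what the linked commit does): retry the control call and force Celery to rebuild its cached broker pools when the producer pool has been closed. Note that _pool and _producer_pool are private Celery 4.x internals, not a public API:

    from celery import current_app as celery_app


    def inspect_with_retry(retries=3):
        for attempt in range(retries):
            try:
                return celery_app.control.inspect().active_queues()
            except RuntimeError as exc:
                if 'closed pool' not in str(exc) or attempt == retries - 1:
                    raise
                # Hacky: clear private cached state so the connection and
                # producer pools are lazily rebuilt on the next access.
                celery_app._pool = None
                celery_app.amqp._producer_pool = None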

@auvipy
Member

auvipy commented Dec 19, 2017

Please send a PR if you have a proposed solution. I'm not sure whether it is fixed in master, so you could also try the latest changes from master.

@auvipy auvipy added this to the v4.2 milestone Dec 19, 2017
@auvipy auvipy modified the milestones: v4.2, v5.0.0 Jan 13, 2018
@auvipy auvipy modified the milestones: v5.0.0, v4.3 Jul 3, 2018
@auvipy auvipy modified the milestones: v4.3, v5.0.0 Nov 17, 2018
@auvipy auvipy removed this from the v5.0.0 milestone Jun 26, 2019
@auvipy
Copy link
Member

auvipy commented Jun 26, 2019

Ping if it still exists in 4.4+.

@auvipy auvipy closed this as completed Jun 26, 2019
humitos added a commit to readthedocs/readthedocs.org that referenced this issue Dec 28, 2020
We are receiving "Acquire on closed pool" error randomly after running instances
for more than ~1 day and calling `self.app.control.cancel_consumer` from our
task that kills VM instances.

This seems to be a known problem in Celery 3.x versions, but some users reported
that it's still present in <4.4.

- celery/celery#4410 (comment)
- https://stackoverflow.com/questions/36789805/celery-kombu-fails-after-self-connections-acquire
- celery/celery#1839

Celery released 5.x in September, so I'm upgrading to it directly as a test. If
everything keeps working together, we can leave it. Otherwise, we can go back to
the latest 4.4.x release: 4.4.7.
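
For context, a sketch of the kind of control call the commit message describes (the app import, task name, and arguments are illustrative assumptions, not the readthedocs code):

    from myproject.celery import app  # hypothetical app module


    @app.task(bind=True)
    def shutdown_instance(self, queue_name, worker_hostname):
        # The broadcast goes through the app's shared producer pool, so it
        # can hit "Acquire on closed pool" on long-running instances.
        self.app.control.cancel_consumer(
            queue_name, destination=[worker_hostname], reply=True,
        )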
humitos added a commit to readthedocs/readthedocs.org that referenced this issue Feb 18, 2021