RuntimeError: Acquire on closed pool when trying to use control inspect #4410
I've been seeing this all over the place after upgrading to Celery 4 from 3.1; pretty much anywhere I need to call into the Celery app controller, this workaround is needed:
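The poster's actual snippet did not survive the copy, so here is a hedged sketch of the kind of wrapper being described. The function name is mine; the sketch only assumes Celery 4.x, where the app exposes the `connection_or_acquire()` context manager and `app.control.inspect()` accepts a `connection=` keyword:

```python
# Hypothetical sketch (not the original comment's snippet): acquire a broker
# connection explicitly and hand it to the inspect call, instead of letting
# the call pull from the app's shared pool, which may already be closed.

def active_queues_with_fresh_connection(app):
    """Query active queues over an explicitly acquired broker connection."""
    # `connection_or_acquire()` and the `connection=` keyword on inspect
    # are documented in Celery 4.x; verify against your installed version.
    with app.connection_or_acquire() as conn:
        return app.control.inspect(connection=conn).active_queues()
```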
Please send a PR if you have any proposed solution. Not sure if it is fixed in master, so you could also try the latest master changes.
Ping if it still exists in 4.4+.
humitos added a commit to readthedocs/readthedocs.org that referenced this issue on Dec 28, 2020:
We are receiving the "Acquire on closed pool" error randomly after running instances for more than ~1 day and calling `self.app.control.cancel_consumer` from our task that kills VM instances. This seems to be a known problem in 3.x Celery versions, but some users reported that it's still present in <4.4.
- celery/celery#4410 (comment)
- https://stackoverflow.com/questions/36789805/celery-kombu-fails-after-self-connections-acquire
- celery/celery#1839
Celery released 5.x in September, so I'm upgrading to it directly as a test. If everything keeps working together, we can leave it. Otherwise, we can go back to the latest 4.4.x release: 4.4.7.
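Besides upgrading, the `cancel_consumer` call itself can be made more defensive. A hedged sketch (names and retry policy are illustrative, not from the commit), assuming Celery 4.x where `cancel_consumer` forwards extra keyword arguments, including `connection=`, to the underlying broadcast:

```python
import time

# Hypothetical mitigation sketch: retry the cancel_consumer broadcast a few
# times, acquiring a fresh broker connection on each attempt so a pool that
# was closed behind our back does not poison every retry.

def cancel_consumer_with_retry(app, queue, attempts=3, delay=1.0):
    last_error = None
    for _ in range(attempts):
        try:
            with app.connection_or_acquire() as conn:
                # Passing the connection explicitly sidesteps the shared
                # pool that raises "Acquire on closed pool".
                app.control.cancel_consumer(queue, connection=conn)
                return True
        except RuntimeError as exc:  # e.g. "Acquire on closed pool"
            last_error = exc
            time.sleep(delay)
    raise last_error
```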
humitos added a commit to readthedocs/readthedocs.org that referenced this issue on Feb 18, 2021 (same commit message as above).
Checklist
- I have included the output of `celery -A proj report` in the issue (if you are not able to do this, then at least specify the Celery version affected).
```
software -> celery:4.0.2 (latentcall) kombu:4.1.0 py:2.7.13 (or py:2.7.12)
            billiard:3.5.0.3 redis:2.10.5
platform -> system:Darwin arch:64bit imp:CPython
            (though usually: system:Linux arch:64bit, ELF)
loader   -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://localhost:6380/

BROKER_TRANSPORT_OPTIONS: {
    'fanout_patterns': True, 'fanout_prefix': True}
CELERY_TASK_COMPRESSION: 'gzip'
CELERY_TIMEZONE: 'UTC'
CELERY_RESULT_SERIALIZER: 'json'
CELERY_BROKER_URL: u'redis://localhost:6380//'
CELERY_TASK_SERIALIZER: 'json'
CELERY_RESULT_EXPIRES: 60
CELERY_ACCEPT_CONTENT: ['application/json']
TIME_ZONE: 'UTC'
CELERY_MESSAGE_COMPRESSION: 'gzip'
CELERY_TASK_ALWAYS_EAGER: False
CELERY_RESULT_BACKEND: u'redis://localhost:6380/'
```
- I have verified that the issue exists against the `master` branch of Celery. It also occurs on Celery 4.1.0.
Steps to reproduce
Try to use the `control` module. In my case, I'm calling `active_queues()`.

Expected behavior
I expect that, so long as the system is in a good state, I should be able to get the info from within the `control` module. I don't understand exactly why sometimes the pool is closed and other times it's not.

Actual behavior
This might be the same issue as #1839.
The code raises a runtime error, so I am unable to query the data I need from Celery. This only happens when we're using the `control` module; sometimes it works okay. This code path was even in a retry loop, so in the end it still failed to execute.
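The observation that a retry loop still failed is consistent with the pool staying closed between attempts: retrying the same call is useless unless each attempt also re-acquires the underlying resource. A minimal stdlib-only illustration of that distinction (no Celery involved, all names hypothetical):

```python
class ClosedPool:
    """Toy stand-in for a connection pool that has been closed."""
    closed = True

    def acquire(self):
        if self.closed:
            raise RuntimeError("Acquire on closed pool")
        return "connection"


def retry_same_pool(pool, attempts=3):
    # Mirrors the failing pattern: the pool object never changes, so every
    # attempt hits the same closed pool and the loop can never succeed.
    for _ in range(attempts):
        try:
            return pool.acquire()
        except RuntimeError:
            continue
    raise RuntimeError("still closed after retries")


def retry_fresh_pool(make_pool, attempts=3):
    # Re-creating the pool on each attempt is what actually gives the
    # retry loop a chance to succeed.
    for _ in range(attempts):
        pool = make_pool()
        try:
            return pool.acquire()
        except RuntimeError:
            continue
    raise RuntimeError("still closed after retries")
```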