Using Celery in a critical production environment #8669
Hi everyone! We are currently evaluating Celery to see whether we can use it as our task queue framework. We are in a situation where it is critical that tasks are both performed and their results delivered. One of the main points I have been unable to answer from the documentation is how to proceed when the Celery app/client is restarted and the references to any outstanding tasks are lost. How can you retrieve a result once the Celery app has been restarted?
What I have researched myself:
So far, none of the options I have researched has worked. How would we go about picking up all the existing tasks with a fresh Celery app/client instance? Or is the paradigm to purge/kill everything and start all tasks fresh on a reboot of the app? Thanks in advance!
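A common pattern for surviving restarts (a sketch, not from this thread) is to persist each dispatched task's id outside the Celery process, so a restarted client can rebuild its result handles. The file name and the id value below are illustrative placeholders; in practice you would call something like `remember(res.id)` right after `my_task.delay(...)`.

```python
import json
import pathlib

# Hypothetical storage location for outstanding task ids; any durable
# store (file, database row, etc.) works equally well.
ids_file = pathlib.Path('outstanding_task_ids.json')

def remember(task_id: str) -> None:
    """Append a task id to durable storage so it survives a process restart."""
    ids = json.loads(ids_file.read_text()) if ids_file.exists() else []
    ids.append(task_id)
    ids_file.write_text(json.dumps(ids))

def outstanding() -> list[str]:
    """Load the ids a restarted client should re-attach to."""
    return json.loads(ids_file.read_text()) if ids_file.exists() else []

# 'example-task-id' stands in for a real AsyncResult.id from .delay()/.apply_async()
remember('example-task-id')
print(outstanding())
```

After a restart, iterating over `outstanding()` and rebuilding an `AsyncResult` per id (with a persistent result backend, see the answer below) lets you poll or collect those results again.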
Replies: 1 comment
Answered my own question!
You can use
AsyncResult('<taskid>')
but only when you use a persistent result backend, such as the database backend. The docs state this for the RPC backend here: https://docs.celeryq.dev/en/latest/userguide/tasks.html#rpc-result-backend-rabbitmq-qpid
"The RPC result backend (rpc://) is special as it doesn't actually store the states, but rather sends them as messages. This is an important difference as it means that a result can only be retrieved once, and only by the client that initiated the task. Two different processes can't wait for the same result."