Using a RabbitMQ broker and a Redis backend, I run into issues when attaching an errback to a chord that is itself chained with another task.
Here is a code sample:
In this sample I force the 4th header task to raise an exception, causing a chord dependency failure. But the errback is never called, and result tracking breaks. It seems that during the teardown of the last header task in the group, the Redis backend's on_chord_part_return does not find the errback, because it is linked to the task inside the chain rather than to the chain itself. The fallback that marks the current task's status fails as well, apparently because the chain has no task_id? (This part I don't fully understand yet.) Here is the stacktrace:
[2024-01-24 16:48:06,414: INFO/MainProcess] Task random_value[159ce940-6699-481b-abb1-27a896820d25] received
[2024-01-24 16:48:06,420: INFO/ForkPoolWorker-1] Task random_value[159ce940-6699-481b-abb1-27a896820d25] succeeded in 0.00460624904371798s: 39
[2024-01-24 16:48:06,426: INFO/MainProcess] Task random_value[573a2f19-4fbe-410d-a75a-dca42f88f65f] received
[2024-01-24 16:48:06,434: INFO/ForkPoolWorker-1] Task random_value[573a2f19-4fbe-410d-a75a-dca42f88f65f] succeeded in 0.006238267058506608s: 93
[2024-01-24 16:48:06,440: INFO/MainProcess] Task random_value[7ac10fac-6091-4fd1-af43-6a246da2223f] received
[2024-01-24 16:48:06,451: INFO/ForkPoolWorker-1] Task random_value[7ac10fac-6091-4fd1-af43-6a246da2223f] raised expected: CustomException('generic')
[2024-01-24 16:48:06,458: INFO/MainProcess] Task random_value[d2cb7b70-db75-4509-86e2-bf4f9a33c93b] received
[2024-01-24 16:48:06,466: INFO/ForkPoolWorker-1] Task random_value[d2cb7b70-db75-4509-86e2-bf4f9a33c93b] succeeded in 0.002595628146082163s: 19
[2024-01-24 16:48:06,475: INFO/MainProcess] Task random_value[db259bd9-d16c-427b-a506-090319645f05] received
[2024-01-24 16:48:06,486: ERROR/ForkPoolWorker-1] Chord '27a36507-3719-404f-bac1-219a2857e4a0' raised: ChordError("Dependency 7ac10fac-6091-4fd1-af43-6a246da2223f raised CustomException('generic')")
Traceback (most recent call last):
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 528, in on_chord_part_return
resl = [unpack(tup, decode) for tup in resl]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 528, in <listcomp>
resl = [unpack(tup, decode) for tup in resl]
^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 434, in _unpack_chord_result
raise ChordError(f'Dependency {tid} raised {retval!r}')
celery.exceptions.ChordError: Dependency 7ac10fac-6091-4fd1-af43-6a246da2223f raised CustomException('generic')
[2024-01-24 16:48:06,501: WARNING/ForkPoolWorker-1] /**/.venv/lib/python3.11/site-packages/celery/app/trace.py:686: RuntimeWarning: Exception raised outside body: ValueError('task_id must not be empty. Got None instead.'):
Traceback (most recent call last):
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 528, in on_chord_part_return
resl = [unpack(tup, decode) for tup in resl]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 528, in <listcomp>
resl = [unpack(tup, decode) for tup in resl]
^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 434, in _unpack_chord_result
raise ChordError(f'Dependency {tid} raised {retval!r}')
celery.exceptions.ChordError: Dependency 7ac10fac-6091-4fd1-af43-6a246da2223f raised CustomException('generic')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/**/.venv/lib/python3.11/site-packages/celery/app/trace.py", line 544, in trace_task
task.backend.mark_as_done(
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 159, in mark_as_done
self.on_chord_part_return(request, state, result)
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 547, in on_chord_part_return
return self.chord_error_from_stack(callback, exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 304, in chord_error_from_stack
return backend.fail_from_current_stack(callback.id, exc=exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 311, in fail_from_current_stack
self.mark_as_failure(task_id, exc, exception_info.traceback)
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 167, in mark_as_failure
self.store_result(task_id, exc, state,
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 526, in store_result
self._store_result(task_id, result, state, traceback,
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 973, in _store_result
current_meta = self._get_task_meta_for(task_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 995, in _get_task_meta_for
meta = self.get(self.get_key_for_task(task_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 869, in get_key_for_task
raise ValueError(f'task_id must not be empty. Got {task_id} instead.')
ValueError: task_id must not be empty. Got None instead.
warn(RuntimeWarning(
[2024-01-24 16:48:06,507: ERROR/ForkPoolWorker-1] Task random_value[db259bd9-d16c-427b-a506-090319645f05] raised unexpected: ValueError('task_id must not be empty. Got None instead.')
Traceback (most recent call last):
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 528, in on_chord_part_return
resl = [unpack(tup, decode) for tup in resl]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 528, in <listcomp>
resl = [unpack(tup, decode) for tup in resl]
^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 434, in _unpack_chord_result
raise ChordError(f'Dependency {tid} raised {retval!r}')
celery.exceptions.ChordError: Dependency 7ac10fac-6091-4fd1-af43-6a246da2223f raised CustomException('generic')
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/**/.venv/lib/python3.11/site-packages/celery/app/trace.py", line 544, in trace_task
task.backend.mark_as_done(
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 159, in mark_as_done
self.on_chord_part_return(request, state, result)
File "/**/.venv/lib/python3.11/site-packages/celery/backends/redis.py", line 547, in on_chord_part_return
return self.chord_error_from_stack(callback, exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 304, in chord_error_from_stack
return backend.fail_from_current_stack(callback.id, exc=exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 311, in fail_from_current_stack
self.mark_as_failure(task_id, exc, exception_info.traceback)
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 167, in mark_as_failure
self.store_result(task_id, exc, state,
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 526, in store_result
self._store_result(task_id, result, state, traceback,
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 973, in _store_result
current_meta = self._get_task_meta_for(task_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 995, in _get_task_meta_for
meta = self.get(self.get_key_for_task(task_id))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/**/.venv/lib/python3.11/site-packages/celery/backends/base.py", line 869, in get_key_for_task
raise ValueError(f'task_id must not be empty. Got {task_id} instead.')
ValueError: task_id must not be empty. Got None instead.
I don't know whether this canvas is unsupported somehow, but I would like to be able to handle the group-specific errors in an errback task, after which the chain should obviously stop since one of its components failed. Maybe this could be solved by attaching the errback to the chain instead, but that is not really the same canvas: an errback dedicated to the group/chord may not be able to handle errors occurring in the later tasks of the chain.
As a side note, in my real use case I revoke the other tasks in the chord header to avoid continuing execution when one of them fails. There the behaviour is even worse: when the last revoked header task tries to run its errback and mark its status, I hit the same error and the task is re-applied (because of acks_late).
I should also mention that a plain chord (without the enclosing chain) works as expected: the errback is called and the task status is reported correctly.
Environment:
Celery: 5.3.6
broker: rabbitmq
backend: redis
Any ideas or help on this?
Thank you, and sorry for the wall of text.