Checklist

- I have verified that the issue exists against the `main` branch of Celery.
- I have read the relevant section in the contribution guide on reporting bugs.
- I have checked the issues list for similar or identical bug reports.
- I have checked the pull requests list for existing proposed fixes.
- I have checked the commit log to find out if the bug was already fixed in the main branch.
- I have included all related issues and possible duplicate issues in this issue (If there are none, check this box anyway).
Mandatory Debugging Information

- I have included the output of `celery -A proj report` in the issue (if you are not able to do this, then at least specify the Celery version affected).
- I have verified that the issue exists against the `main` branch of Celery.
- I have included the contents of `pip freeze` in the issue.
- I have included all the versions of all the external dependencies required to reproduce this bug.
Optional Debugging Information

- I have tried reproducing the issue on more than one Python version and/or implementation.
- I have tried reproducing the issue on more than one message broker and/or result backend.
- I have tried reproducing the issue on more than one version of the message broker and/or result backend.
- I have tried reproducing the issue with autoscaling, retries, ETA/Countdown & rate limits disabled.
- I have tried reproducing the issue after downgrading and/or upgrading Celery and its dependencies.
Related Issues and Possible Duplicates
Related Issues
Possible Duplicates
Environment & Settings
Celery version: 5.3.6
celery report
Output:

Steps to Reproduce
Required Dependencies
Python Packages
pip freeze
Output:

Other Dependencies
N/A
Minimally Reproducible Test Case
Run `python tasks.py` first to enqueue some tasks, then run `celery -A prio:app worker --concurrency 1` to consume them.

Expected Behavior
Tasks are enqueued in the Redis key matching the tasks' priority.
Actual Behavior
When spawning a couple of tasks (see Minimally Reproducible Test Case) with a default priority of 50 and one task with a higher priority of 75, all tasks are enqueued into the highest-priority key (in this case `celery`, which I assume is an alias for `celery:0`) instead of `celery:50` and `celery:75`, respectively.

You can see this behavior using `redis-cli MONITOR` (task body truncated):

The worker polls tasks using the correct Redis command:
But since all tasks ended up in the same key, all queued tasks are effectively worked on in the order they were enqueued, NOT by their priority.
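For reference, the key naming I expected can be sketched as a small helper. This is my reading of the documented `priority_steps` behavior, not kombu's actual implementation; `queue_for_priority` is a hypothetical function written only to illustrate the expected mapping:

```python
import bisect

def queue_for_priority(queue, priority, steps=(0, 3, 6, 9), sep=":"):
    """Hypothetical sketch of the expected mapping: clamp the requested
    priority to the configured steps (rounding up to the next step) and
    suffix the queue name with that step; step 0 keeps the bare queue name.
    """
    # Clamp the priority into the range covered by the configured steps.
    priority = max(min(priority, steps[-1]), steps[0])
    # Round up to the nearest configured step.
    step = steps[bisect.bisect_left(steps, priority)]
    return f"{queue}{sep}{step}" if step else queue

print(queue_for_priority("celery", 75, steps=(1, 25, 50, 75, 100)))  # celery:75
print(queue_for_priority("celery", 50, steps=(1, 25, 50, 75, 100)))  # celery:50
print(queue_for_priority("celery", 0))  # celery
```

Under this mapping, the 50- and 75-priority tasks from the test case should land in distinct keys; the observed behavior collapses them all into one.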
I also tested this using `'priority_steps': [1, 25, 50, 75, 100]` because I was wary of seeing the Redis key `celery` instead of the expected `celery:0`, but it yields the exact same results. All tasks are pushed into `celery:1`, just as they were pushed into `celery` before.