add_periodic_task function does not trigger the task #3589
Comments
I had the same problem :(
Your example works well for me. NOTE: your signal handler needs to accept **kwargs; failing to do so will be an error in the future. Using your example:

```python
# file: tasks.py
from celery import Celery

celery = Celery('tasks', broker='pyamqp://guest@localhost//')

@celery.task
def add(x, y):
    return x + y

@celery.on_after_configure.connect
def add_periodic(**kwargs):
    celery.add_periodic_task(10.0, add.s(2, 3), name='add every 10')
```

I start the beat service as follows:

celery -A tasks -l info beat
Hi, I have the same issue, but I try to start celery programmatically in a thread; maybe that is the cause. This is my thread:

```python
from __future__ import absolute_import, unicode_literals

import threading

from celery import current_app
from celery.bin import worker

app = current_app._get_current_object()


class CeleryThread(threading.Thread):
    def __init__(self):
        super(CeleryThread, self).__init__()
        self.app = app
        self.worker = worker.worker(app=self.app)
        self.options = {
            'broker': 'amqp://guest:guest@localhost:5672//',
            'loglevel': 'INFO',
            'traceback': True,
        }
        app.add_periodic_task(5.0, test.s('hello'), name='add every 10')

    def run(self):
        self.worker.run(**self.options)


@app.task
def test(args1):
    print(args1)
```

And the main.py to launch this:

```python
celery_thread = CeleryThread()
# used to kill the thread when the main program stops
# celery_thread.daemon = True
celery_thread.start()
```

Do I forget an option? I can see you have a "scheduler" set in your output @ask. Thanks in advance for any help.
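For what it's worth, the snippet above only runs a worker thread and never starts a beat scheduler, and add_periodic_task is called before the app is configured. A minimal sketch (not from the original comment) of how the registration part could be restructured, assuming a separate `celery beat` process is started alongside this program; the task and schedule names here are illustrative:

```python
from celery import current_app

app = current_app._get_current_object()


@app.task
def test(arg):
    print(arg)


def register_periodic_tasks(sender, **kwargs):
    # Called once the app has been configured. Note that a worker alone will
    # not fire this schedule; a separate `celery beat` process must be running.
    sender.add_periodic_task(5.0, test.s('hello'), name='say hello every 5s')


# weak=False keeps a strong reference to the handler so it cannot be
# garbage-collected before the signal fires (see the later comments below).
app.on_after_configure.connect(register_periodic_tasks, weak=False)
```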
The same config as @liuallen1981 and the same issue. Has anyone figured out what's happening? For now I have to configure the schedule statically instead of using add_periodic_task.
+1 Also having this issue.
+1 Also having this issue.
+1 Also having this issue.
+1 Also having this issue. Went on and tested with @liuallen1981's code and got the same result as with my own code. Celery: 4.0.2
To run periodic tasks, you also have to start the scheduler when starting a worker, using the -B option (e.g. celery -A tasks worker -B).
When using Celery in Django applications, where tasks are autodiscovered from apps, you need to use the on_after_finalize signal instead of on_after_configure.
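A sketch of what that can look like in a Django-style celery.py, assuming the usual config_from_object/autodiscover_tasks setup; the project and task names are purely illustrative:

```python
# proj/celery.py (illustrative layout)
from celery import Celery

app = Celery('proj')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()


@app.on_after_finalize.connect
def setup_periodic_tasks(sender, **kwargs):
    # on_after_finalize fires after task modules have been loaded, so
    # autodiscovered tasks can safely be referenced here.
    from myapp.tasks import my_task  # hypothetical autodiscovered task

    sender.add_periodic_task(30.0, my_task.s(), name='run my_task every 30s')
```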
-B is not for production use and simply starts the beat scheduler, which at least in my case is already running.
+1 having the same issue with Celery (4.0.2)
Same issue here....
You just started a beat service; you should also start a worker to execute the tasks.
+1
same issue here
Same issue here. I tried to print something inside the callback; it seems the callback hasn't been called, but RabbitMQ is working (it works fine when I trigger a task from code):

```python
@celery.on_after_configure.connect
def setup_periodic_tasks(**kwargs):
    print('after connect')
```
I use Celery config ...
I stepped through the library and found that my signal listener was being created/attached after the on_after_configure signal had already been sent. I reasoned that Django's AppConfig.ready() would probably execute after Celery configuration, and it is working well for me so far. NOTE: I am not sure what Celery configuration actually entails and whether it is possible that ready() could fire before Celery is configured... however, I expect it would at least throw a runtime error. Sample code from my Django AppConfig is sketched after this comment.
Note you also need to point INSTALLED_APPS (or default_app_config) at that AppConfig subclass.
A good approach or fix would probably be to write a new decorator that 1) checks if Celery is already configured and, if so, executes immediately, and 2) if Celery is not yet configured, attaches the listener to the on_after_configure signal. As it stands, the docs are problematic, since so many of us ran into this issue. CCing @rustanacexd @viennadd just so you can try this fix if you still need to dynamically schedule tasks?
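A minimal sketch of the AppConfig.ready() approach described above; the module and task names are illustrative, not the commenter's actual code:

```python
# myapp/apps.py -- hypothetical Django app config illustrating the idea above
from django.apps import AppConfig


class MyAppConfig(AppConfig):
    name = 'myapp'

    def ready(self):
        # Imported here so Django app loading has finished; by this point the
        # Celery app has (in this setup) already been configured, so
        # add_periodic_task takes effect immediately.
        from myproject.celery import app   # hypothetical Celery app module
        from myapp.tasks import my_task    # hypothetical task

        app.add_periodic_task(60.0, my_task.s(), name='run my_task every minute')
```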
Putting my two cents out there: I got bit by this and ended up having to reorder some of my tasks. We have about 8 scheduled tasks that are supposed to fire; however, I noticed that whether a task actually fired depended on where it was registered relative to the others, so reordering them changed which ones ran.
We're also using a few other scheduling features. Maybe this kind of behavior is mentioned in the documentation, or I'm going down a different rabbit hole, though this behavior doesn't make much sense. For reference, we're using Redis instead of RMQ and Celery 4.1.0.
I was able to make this work. Check my answer here: |
@prasanna-balaraman That does seem to work, thank you for the suggestion!
Same issue for me; I will test the other solution: https://stackoverflow.com/a/41119054/6149867
Closing. If it still appears and anyone has code or docs suggestions, please feel free to send a PR referencing this issue.
It took me a while to realize that if there is any exception raised in setup_periodic_tasks, it will get silently suppressed. The function is called here: https://github.com/celery/celery/blob/master/celery/app/base.py#L950 If anything goes wrong, the exception is only saved in the responses; there is no re-raise and no log. So my suggestion is to keep setup_periodic_tasks as simple as possible.
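One way to make such failures visible is defensive logging inside your own handler. This is just a sketch, not anything built into Celery; the broker URL and task are illustrative:

```python
import logging

from celery import Celery

logger = logging.getLogger(__name__)
app = Celery('tasks', broker='pyamqp://guest@localhost//')


@app.task
def add(x, y):
    return x + y


@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    try:
        sender.add_periodic_task(10.0, add.s(2, 3), name='add every 10')
    except Exception:
        # The signal dispatcher only stores the exception in its list of
        # responses, so log it explicitly or the failure disappears silently.
        logger.exception('setup_periodic_tasks failed')
        raise
```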
@chiang831 do you have any suggestions to improve it? If so, please send a PR or open a discussion on the celery-users mailing list.
Defining them in an on_after_finalize handler:

```python
@app.on_after_finalize.connect
def app_ready(**kwargs):
    """
    Called once after app has been finalized.
    """
    sender = kwargs.get('sender')

    # periodic tasks
    speed = 15
    sender.add_periodic_task(speed, update_leases.s(),
                             name='update leases every {} seconds'.format(speed))
```
Just ran into this and none of the previous solutions worked for me. The exact scenarios that cause this are confusing and rely on the behavior of ref-counting/GC and the exact lifetimes of your decorated functions. Signal.connect by default only holds a weak reference to the signal handler. This makes sense for other use cases of the Signal object (a short-lived object that wires signals shouldn't be held alive by its signal handlers), but is very surprising in this case. My specific use case was a decorator to make it easy to add new periodic tasks:

```python
def call_every_5_min(task):
    @app.on_after_configure.connect
    def register_task(sender, **_):
        sender.add_periodic_task(collect_every_m * 60, task.signature())
```

```python
@call_every_5_min
@task
def my_celery_task(_):
    pass
```

The fix is to explicitly ask for a strong reference:

```python
def call_every_5_min(task):
    def register_task(sender, **_):
        sender.add_periodic_task(collect_every_m * 60, task.signature())
    app.on_after_configure.connect(register_task, weak=False)
```

The example in the docs only works if your decorated function is at module or class scope, in which case the module or class continues to hold a strong reference to the function. Otherwise the only strong reference will die at the end of the scope it's defined in. I recommend changing the docs to pass weak=False.
My process of ...
This works for me instead.
If you're trying to solve this issue, reboot your Docker engine first; it may be a signals system bug.
Should we close this issue as not a bug?
@auvipy not sure. Looks like it's a Celery bug.
It is a bug we must fix.
+1 Tried all the above solutions for celery beat v5.1.0; nothing worked.
Came across this when I was debugging an issue with one of my apps. From a brief read of the comments here and over at #5045, it seems like the issues are a mix of the following:
- signal handlers being connected after the one-shot signal has already been sent,
- handlers being garbage-collected because connect holds only a weak reference by default,
- exceptions raised inside the handlers being silently swallowed,
- confusion about needing both a beat scheduler and a worker running.
Is there anything I'm missing? It seems like the work to do is either a few minor changes to the docs, or a more significant change to the signals to turn one-time signals into something more like event flags.
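For what it's worth, the "event flag" idea can be sketched in a few lines: a latched signal remembers that it has fired and replays the send to late subscribers. This is purely illustrative pseudocode for the proposal, not anything that exists in Celery today:

```python
class LatchedSignal:
    """A one-shot signal that also fires for handlers connected after send()."""

    def __init__(self):
        self._fired = False
        self._kwargs = None
        self._handlers = []

    def connect(self, handler):
        if self._fired:
            # Late subscriber: the event already happened, so run it now.
            handler(**self._kwargs)
        else:
            self._handlers.append(handler)  # strong reference, no weakref surprises
        return handler

    def send(self, **kwargs):
        self._fired = True
        self._kwargs = kwargs
        for handler in self._handlers:
            handler(**kwargs)
```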
I gave up on trying to make the signal-based setup work. Eventually I tried the static beat_schedule configuration instead, and that works for me. One weird thing to note though, perhaps it could help someone understand what's going on: I was using on_after_finalize (see scheduler.py below), not on_after_configure.

My original non-working setup, just showing the stuff related to celery:

requirements.txt

```
celery[redis]==5.1.1
```

Dockerfile

```dockerfile
FROM python:3.9.5-slim-buster
ENV PYTHONDONTWRITEBYTECODE=1 PYTHONUNBUFFERED=1
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
```

docker-compose.yml

```yaml
version: "3.9"
services:
  redis:
    image: redis:6.2.4-alpine
    volumes:
      - /tmp/redis:/data
  celery:
    build: .
    command: celery -A project.celery.app worker -l info --uid=nobody --gid=nogroup
    depends_on:
      - redis
  celery-beat:
    build: .
    command: celery -A project.celery.app beat -l info --uid=nobody --gid=nogroup -s /tmp/celerybeat-schedule
    volumes:
      - /tmp:/tmp
    depends_on:
      - celery
```

project/celery/app.py

```python
from celery import Celery

app = Celery(
    "tasks",
    broker=REDIS_CONN_STRING,
    backend=REDIS_CONN_STRING,
    include=[
        "project.celery.tasks.some_tasks",
        "project.celery.tasks.some_other_tasks",
        "project.celery.tasks.more_tasks",
    ],
)
```

project/celery/scheduler.py

```python
from .app import app
from .tasks.some_tasks import a_task

@app.on_after_finalize.connect
def setup_periodic_tasks(sender, **kwargs):
    sender.add_periodic_task(5.0, a_task.s(), name="a task")
```

project/celery/tasks/some_tasks.py

```python
import logging

from ..app import app

@app.task
def a_task():
    logging.info("hello")
```

My current working setup, just showing the changes:

project/celery/app.py

```python
from celery import Celery

app = Celery(
    "tasks",
    broker=REDIS_CONN_STRING,
    backend=REDIS_CONN_STRING,
    include=[
        "project.celery.tasks.some_tasks",
        "project.celery.tasks.some_other_tasks",
        "project.celery.tasks.more_tasks",
    ],
)

app.conf.beat_schedule = {
    "a task": {
        "task": "project.celery.tasks.some_tasks.a_task",
        "schedule": 5.0,
    },
}
```

project/celery/scheduler.py: deleted
Thank you @ggregoire. I had an issue where the ...
I had the problem with ... I defined ... It turned out that ... Hope this helps somebody.
I had: ...
For me personally, the fix was to move ...
In case it helps someone.
Also having this issue in Celery 5.2.7. I fixed it (#7652).
Closing in favor of this.
Checklist
- I have included the output of celery -A proj report in the issue (if you are not able to do this, then at least specify the Celery version affected).
- I have verified that the issue exists against the master branch of Celery.

Steps to reproduce
tasks.py
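The tasks.py snippet itself did not come through here; based on the first comment in this thread, which quotes the example back, it was essentially the following (a reconstruction, not the reporter's verbatim file):

```python
# tasks.py
from celery import Celery

celery = Celery('tasks', broker='pyamqp://guest@localhost//')


@celery.task
def add(x, y):
    return x + y


@celery.on_after_configure.connect
def add_periodic(**kwargs):
    celery.add_periodic_task(10.0, add.s(2, 3), name='add every 10')
```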
step 1: rabbitmq is up
rabbitmq 1186 1 0 Nov12 ? 00:00:00 /bin/sh /usr/sbin/rabbitmq-server
step 2: execute tasks.py
python tasks.py
step 3: start the beat service
celery -A tasks -l info beat
celery beat v4.0.0 (latentcall) is starting.
LocalTime -> 2016-11-12 17:37:58
Configuration ->
. broker -> amqp://guest:**@localhost:5672//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> celerybeat-schedule
. logfile -> [stderr]@%INFO
. maxinterval -> 5.00 minutes (300s)
[2016-11-12 17:37:58,912: INFO/MainProcess] beat: Starting...
Expected behavior
I expect the scheduler to trigger the add() function every ten seconds.
Actual behavior
The add() function doesn't get triggered.
I don't see any exception in the terminal. Am I missing anything?