
Celery Beat Periodic Task Duplicated Incessantly #4041

Closed

Jarch09 opened this issue May 20, 2017 · 37 comments

@Jarch09 commented May 20, 2017

Celery 4.0.0
Python 3.5.2
(rest of report output below)

I recently pushed celery beat to production (I had been using celery without beat, with no issues, for several months). It had one task to run several times daily, but it duplicated the task many times per second, gradually overwhelming the capacity of my small ElastiCache instance and resulting in an OOM error.

I was searching for similar errors online and found the following:
#943 (comment)

While not exactly the same issue, changing the timezone settings back to the defaults seemed to solve it for now. To be clear, previously I had:
enable_utc = False
timezone = 'America/New_York'

I changed these to:
enable_utc = True

and it seemed to solve the problem (for now).
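For context, a sketch of where those settings might live (the app name and broker URL here are placeholders, not from the report; since the report uses new-style lowercase setting names, an app.conf-style configuration is assumed):

from celery import Celery

app = Celery('proj', broker='redis://localhost:6379/0')  # placeholder broker URL
app.conf.update(
    enable_utc=True,                 # was: enable_utc = False
    # timezone='America/New_York',   # removed so the default (UTC) applies again
)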

** celery -A [proj] report:

software -> celery:4.0.0 (latentcall) kombu:4.0.0 py:3.5.2
billiard:3.5.0.2 redis:2.10.5
platform -> system:Linux arch:64bit, ELF imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:redis://[elasticache]

result_persistent: False
enable_utc: False
result_serializer: 'json'
include: [tasks -- redacted]
result_backend: 'redis://[elasticache]
task_create_missing_queues: True
task_acks_late: False
timezone: 'America/New_York'
broker_url: '[elasticache]'
task_serializer: 'json'
crontab: <class 'celery.schedules.crontab'>
broker_transport_options: {
'visibility_timeout': 600}
task_time_limit: 200
task_always_eager: False
task_queues:
(<unbound Queue default -> <unbound Exchange default(direct)> -> default>,
<unbound Queue render -> <unbound Exchange media(direct)> -> media.render>,
<unbound Queue emails -> <unbound Exchange media(direct)> -> media.emails>,
<unbound Queue other -> <unbound Exchange media(direct)> -> media.other>)
beat_schedule: {
'lookup-emails-to-send': { 'args': [],
'schedule': <crontab: 0 9,12,15,18,21 * * * (m/h/d/dM/MY)>,
'task': 'send_emails'}}
accept_content: ['json']
task_default_exchange_type: 'direct'
task_routes:
('worker.routes.AppRouter',)
worker_prefetch_multiplier: 1
BROKER_URL: [elasticache]

** Worker error message after OOM:
Traceback (most recent call last):
File "python3.5/site-packages/celery/beat.py", line 299, in apply_async
**entry.options)
File "python3.5/site-packages/celery/app/task.py", line 536, in apply_async
**options
File "python3.5/site-packages/celery/app/base.py", line 717, in send_task
amqp.send_task_message(P, name, message, **options)
File "python3.5/site-packages/celery/app/amqp.py", line 554, in send_task_message
**properties
File "python3.5/site-packages/kombu/messaging.py", line 178, in publish
exchange_name, declare,
File "python3.5/site-packages/kombu/connection.py", line 527, in _ensured
errback and errback(exc, 0)
File "python3.5/contextlib.py", line 77, in exit
self.gen.throw(type, value, traceback)
File "python3.5/site-packages/kombu/connection.py", line 419, in _reraise_as_library_errors
sys.exc_info()[2])
File "python3.5/site-packages/vine/five.py", line 175, in reraise
raise value.with_traceback(tb)
File "python3.5/site-packages/kombu/connection.py", line 414, in _reraise_as_library_errors
yield
File "python3.5/site-packages/kombu/connection.py", line 494, in _ensured
return fun(*args, **kwargs)
File "python3.5/site-packages/kombu/messaging.py", line 200, in _publish
mandatory=mandatory, immediate=immediate,
File "python3.5/site-packages/kombu/transport/virtual/base.py", line 608, in basic_publish
return self._put(routing_key, message, **kwargs)
File "python3.5/site-packages/kombu/transport/redis.py", line 766, in _put
client.lpush(self._q_for_pri(queue, pri), dumps(message))
File "python3.5/site-packages/redis/client.py", line 1227, in lpush
return self.execute_command('LPUSH', name, *values)
File "python3.5/site-packages/redis/client.py", line 573, in execute_command
return self.parse_response(connection, command_name, **options)
File "python3.5/site-packages/redis/client.py", line 585, in parse_response
response = connection.read_response()
File "python3.5/site-packages/redis/connection.py", line 582, in read_response
raise response
kombu.exceptions.OperationalError: OOM command not allowed when used memory > 'maxmemory'.

** Example log showing rapidly duplicated beat tasks:
[2017-05-20 09:00:00,096: INFO/PoolWorker-1] Task send_emails[1d24250f-c254-4fd8-8ffc-cfb17b98a392] succeeded in 0.01704882364720106s: True
[2017-05-20 09:00:00,098: INFO/MainProcess] Received task: send_emails[e968cc65-0924-4d31-b2d7-9a3e3f5aefda]
[2017-05-20 09:00:00,115: INFO/PoolWorker-1] Task send_emails[e968cc65-0823-4d31-b2d7-9a3e3f5aefda] succeeded in 0.015948095358908176s: True
[2017-05-20 09:00:00,117: INFO/MainProcess] Received task: send_emails[2fb9f7ac-5f6a-42e1-813e-25ea4023dc81]
[2017-05-20 09:00:00,136: INFO/PoolWorker-1] Task send_emails[2fb9f7ac-5f6a-42e1-813e-25ea4023dc81] succeeded in 0.018534105271100998s: True
[2017-05-20 09:00:00,140: INFO/MainProcess] Received task: send_emails[59493c0b-1858-497f-a0dc-a4c8e4ba3a63]
[2017-05-20 09:00:00,161: INFO/PoolWorker-1] Task send_emails[59493c0b-1858-497f-a0dc-a4c8e4ba3a63] succeeded in 0.020539570599794388s: True
[2017-05-20 09:00:00,164: INFO/MainProcess] Received task: send_emails[3aac612f-75dd-4530-9b55-1e288bec8db4]
[2017-05-20 09:00:00,205: INFO/PoolWorker-1] Task send_emails[3aac612f-75dd-4530-9b55-1e288bec8db4] succeeded in 0.04063933063298464s: True
[2017-05-20 09:00:00,208: INFO/MainProcess] Received task: send_emails[7edf60cb-d1c4-4ae5-af10-03414da672fa]

@georgepsarakis (Contributor)

This may be the same timezone issue that was fixed with this Pull Request. Could you try with the current master branch?

@marco-silva0000 commented Jul 28, 2017

I have this exact bug on the latest master version (as of today). I have made a fork of celery to create an example project with django celery beat and to replicate this issue: marco-silva0000@4c99c06

I am not sure what the core reason for this problem is, but from what I've tested so far, the is_due calculation has a correctly calculated last_run_at, yet self.remaining_estimate(last_run_at) returns a negative value because of timezone differences (in my case -1 day, 23:45:24.982145). This repeats forever and will launch tasks forever. That is on the latest pip version; on the latest master version there is still a calculation bug, as it schedules a task configured for 10m at 1h+10m because of timezone differences.
I have this problem when setting the Django timezone to Lisbon and the celery timezone to Lisbon as well. If I set the celery timezone to UTC, my problem is fixed.
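A simplified model of the behavior described above (a sketch, not celery's actual scheduler code): once remaining_estimate() comes back negative because of the timezone offset, every is_due() check reports the entry as due, so beat keeps re-sending the task on every tick.

from datetime import timedelta

def is_due(remaining: timedelta, max_interval: float = 300.0):
    # Simplified rule: a non-positive remaining estimate means "due now".
    if remaining.total_seconds() <= 0:
        return True, max_interval
    return False, remaining.total_seconds()

# The negative estimate reported above keeps the entry "due" on every check:
print(is_due(timedelta(days=-1, hours=23, minutes=45, seconds=24)))  # (True, 300.0)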

@pztrick commented Aug 8, 2017

I also encountered this bug. It is caused by re-offsetting an already-offset, timezone-aware datetime, and is fixed in celery/django-celery-beat@2312ab5.

Cherry-picking celery/django-celery-beat@2312ab5 onto master fixes it.

@marco-silva0000 commented Aug 8, 2017 via email

@gabriellima commented Jan 23, 2018

I'm having this issue, using celery 4.1.0.

Kombu 4.1.0
billiard 3.5.0.3
vine 1.1.4

If I leave the settings as below, many tasks are generated each second.

What's more, periodic tasks that come due are also sent many times per second.

The only solution for now is to set CELERY_ENABLE_UTC to True and update the periodic tasks' crontab entries to UTC hours, minutes, etc.

CELERY_TIMEZONE = 'America/Sao_Paulo'
CELERY_ENABLE_UTC = False
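A sketch of the workaround described above, using the same old-style setting names: keep UTC enabled and express the crontab in UTC rather than America/Sao_Paulo local time (the exact UTC hour depends on whether Brazilian summer time is in effect, so the converted hour below is only illustrative):

from celery.schedules import crontab

CELERY_ENABLE_UTC = True
# CELERY_TIMEZONE left unset so the default (UTC) applies

# e.g. the intended 10:25 local run re-expressed in UTC for the @periodic_task below
run_every_utc = crontab(minute='25', hour='13')  # illustrative offset only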

I tried the master branch (v4.2.0), but the following error occurred:

  File "....\site-packages\celery\app\task.py", line 532, in apply_async
    shadow = shadow or self.shadow_name(args, kwargs, options)
celery.beat.SchedulingError: Couldn't apply scheduled task task_test: shadow_name() missing 1 required positional argument: 'options'

Returning to version 4.1.0, this is the scenario:

Example:

@periodic_task(
    run_every=(crontab(minute='25', hour='10')),
    name="task_test",
    ignore_result=True,
    bind=True)
def some_task(self):
    # do something
    return True

Output from celery beat:

[2018-01-23 10:23:28,199: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,199: DEBUG/MainProcess] task_test sent. id->6a7b0c4c-7065-4698-8bd9-525ba0526e63
[2018-01-23 10:23:28,215: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,215: DEBUG/MainProcess] task_test sent. id->b125990a-50dc-4fdf-ac85-2fef793773d0
[2018-01-23 10:23:28,215: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,215: DEBUG/MainProcess] task_test sent. id->feed68da-b37d-41e3-980f-f3f6747ba51f
[2018-01-23 10:23:28,215: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,231: DEBUG/MainProcess] task_test sent. id->b18dfe49-f0a4-4bac-b26e-3d0a33c09c61
[2018-01-23 10:23:28,231: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,231: DEBUG/MainProcess] task_test sent. id->6a34b613-5666-4519-9318-901818d00418
[2018-01-23 10:23:28,231: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,231: DEBUG/MainProcess] task_test sent. id->88e64f75-47c3-42b3-ab9e-f225680d9c30
[2018-01-23 10:23:28,231: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,246: DEBUG/MainProcess] task_test sent. id->70a06e9a-2efa-494f-b4a0-25e7ad6b26fa
[2018-01-23 10:23:28,246: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,246: DEBUG/MainProcess] task_test sent. id->f1a4549b-74aa-4853-a58c-967d4d90be0a
[2018-01-23 10:23:28,246: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,246: DEBUG/MainProcess] task_test sent. id->a0438aad-9d87-418e-ae6e-35cfc66534a9
[2018-01-23 10:23:28,262: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)
[2018-01-23 10:23:28,262: DEBUG/MainProcess] task_test sent. id->c58c7436-466f-49a1-aef0-94f337c71292
[2018-01-23 10:23:28,262: INFO/MainProcess] Scheduler: Sending due task task_test (task_test)

@auvipy (Member) commented Jan 23, 2018

try master branch

@auvipy auvipy added this to the v4.2 milestone Jan 23, 2018
@gabriellima commented Jan 23, 2018

Hi @auvipy. Thanks for the reply; I've already tried it and got the following error:

 File "....\site-packages\celery\app\task.py", line 532, in apply_async
    shadow = shadow or self.shadow_name(args, kwargs, options)
celery.beat.SchedulingError: Couldn't apply scheduled task task_test: shadow_name() missing 1 required positional argument: 'options'

Installed today using pip install https://github.com/celery/celery/zipball/master#egg=celery

Blaming that line shows that a while ago it used to be:

preopts = self._get_exec_options()
options = dict(preopts, **options) if options else preopts
#..........
      shadow=shadow or self.shadow_name(args, kwargs, options),
# ........

Would it be solved just by moving line 532 after line 535?


@georgepsarakis (Contributor)

This seems to be caused by https://github.com/celery/celery/pull/4381/files. Very weird though, since the signature of shadow_name does have 3 arguments.

@moltman commented Jan 25, 2018

Confirming: reproducing this in celery 4.1.0, but inconsistently and not for all scheduled tasks.

A task was listed as due multiple times per second and was therefore launched about 32000 times before it was caught. The actual schedule is set for daily at midnight UTC, and the problem started occurring at 16:20 UTC. Several other schedules have not exhibited this behavior yet, and upon restart we have not observed it again (although it's only been 24 hours).

We do not set either flag:
CELERY_TIMEZONE
CELERY_ENABLE_UTC

  • Is it confirmed that setting those flags will work around it?
  • Is there a fix in the master branch (gleaning no from the above)?

python 2.7.12
celery 4.1.0
billiard 3.5.0.3
vine 1.1.4

@mikegrima commented Feb 1, 2018

Hello! We recently swapped out APScheduler with Celery (v4.1.0) in Security Monkey, and we are seeing duplicate tasks being scheduled as well.

We use celery beat to add tasks that many worker instances will then execute. Exactly one celery beat instance is running at any time.

Our celeryconfig.py defines enable_utc = True. We define a function that celery beat executes with on_after_configure.connect, and this function is where we schedule the tasks. It first calls the Celery app's control.purge() function to clear out all existing tasks before adding in the new ones.
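A minimal sketch of that setup (the task name, account name, and schedule are illustrative, not Security Monkey's actual code):

from celery import Celery
from celery.schedules import crontab

app = Celery('monkey', broker='redis://localhost:6379/0')  # placeholder broker URL
app.conf.enable_utc = True

@app.task
def task_audit(account_name, index):
    # Placeholder task body; the real auditing work would go here.
    return (account_name, index)

@app.on_after_configure.connect
def setup_periodic_tasks(sender, **kwargs):
    app.control.purge()  # drop any task messages already sitting in the broker
    sender.add_periodic_task(
        crontab(hour=10, day_of_week='mon-fri'),
        task_audit.s('some-account', 'some-index'),
        name='audit some-account',
    )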

Based on my code, it's not clear to me how tasks are getting duplicated. Also, when reviewing our ElastiCache memory usage, whenever we re-run the celery beat instance the available memory clears up, which indicates to me that the purging is working.

Does Celery have the concept of coalescing? Effectively, there should be no more than 1 of the same task scheduled and running at any given time.

I'm still new to using Celery, so it's entirely possible that I made some mistakes, but would greatly appreciate the help.

Running python 2.7.14, celery 4.1.0, and celery[redis] 4.1.0 as well.

@mikegrima

Update: it looks like we fixed our issue by specifying timezone = "UTC" in our celeryconfig.py file.
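For reference, the change as it would look in celeryconfig.py (a sketch; enable_utc was already True in our configuration):

# celeryconfig.py
enable_utc = True
timezone = 'UTC'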

@auvipy auvipy closed this as completed Feb 6, 2018
@mikegrima commented Feb 6, 2018

Turns out I spoke too soon. Things were looking better, but we have again noticed more duplicates.

It seems to run well for a while before adding in duplicates.

@zpritcha have you noticed anything else?

@mikegrima commented Feb 6, 2018

@thedrow: This graph shows our ElastiCache usage (with 4.1.0 -- just updated the scheduler with the latest master, so we'll see how that goes 🤞).

Around the 12 hour mark is where things get a little crazy. We also noticed an increase over time in Redis usage so there might be some leakage.

[screenshot: ElastiCache memory usage over time]

I'm more than happy to help debug, so please let me know if you want more fancy graphs.

EDIT: It looks like this corresponds with crontab schedules:

sender.add_periodic_task(
    crontab(hour=10, day_of_week="mon-fri"), task_audit.s(account.name, monitor.watcher.index))

https://github.com/Netflix/security_monkey/blob/develop/security_monkey/task_scheduler/beat.py#L59-L60

@mikegrima commented Feb 6, 2018

@thedrow To avoid waiting -- I set the clock to run in the next 15 minutes or so. Going to monitor it now and see if we see a huge spike (with latest master).

@zpritcha commented Feb 6, 2018

I'm seeing the same issue - the tasks that are supposed to be scheduled for 10AM UTC via crontab are re-scheduled 56 times, leading to an abundance of tasks in the queue. The AWS autoscaling group I have configured to handle the task queue handled the non-crontab tasks 12-13 times (running once per hour or so) until the 10AM scheduled crontab executed, which caused those duplicate tasks to be added to the queue.

The screenshot below is from the auto scaling group showing it growing in size then shrinking when the queue size drops -- the constant spike is when the queue shot up to over 100k tasks.

[screenshot: auto scaling group size over time]

@mikegrima commented Feb 6, 2018

Latest Celery (4.2.0) is still broken:
[screenshot: ElastiCache management console memory graph]

As for scale, that's a 2x increase in bytes used for the cache.

@auvipy auvipy reopened this Feb 7, 2018
@thedrow (Member) commented Feb 8, 2018

OK so it seems that there are two separate issues here. One with master that we have to resolve before the release and another one which we can resolve later.
I think that we should add a check to see if remaining_estimate() returns a negative number and error if that is the case. It's a far better situation than scheduling tons of duplicated tasks.
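A rough sketch of the kind of guard being proposed (a hypothetical helper, not the actual patch): fail loudly on a negative estimate instead of treating the entry as due on every tick.

from datetime import timedelta

def checked_remaining(rem_delta: timedelta) -> timedelta:
    # Refuse to treat a negative remaining estimate as "due now".
    if rem_delta.total_seconds() < 0:
        raise RuntimeError(
            'remaining_estimate() returned a negative value (%r); '
            'check the timezone / enable_utc settings' % rem_delta)
    return rem_delta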

@mikegrima

@thedrow Pardon my ignorance of Celery. What impact will that have on the application making use of Celery? Will this raise exceptions or otherwise disrupt the beat scheduler and workers?

@thedrow (Member) commented Feb 10, 2018

@mikegrima If we uncovered a bug in our scheduler I'd rather error than produce the behavior the OP is describing. The beat scheduler will be disrupted but the workers will continue to work.

@mikegrima commented Feb 11, 2018

To provide an update: removing the cron part eliminates the very large increase, but we are still seeing our Redis cache expand in size over time. This happens regardless of the number of workers:

[screenshot: Redis memory usage growing over time]

We're only using Redis as a message broker. We aren't storing results.

@thedrow (Member) commented Feb 11, 2018

@mikegrima Then that's a different problem. Please search our issues and if you don't find one that matches your problem, open a new issue.

@auvipy (Member) commented Feb 25, 2018

is this a release blocker for 4.2?

@moltman commented Feb 25, 2018

@auvipy I consider it so. It makes the scheduler unusable given that it randomly goes off the hook and launches thousands of jobs when it shouldn't. We've turned it off in our production environment and are working around it, but we'd rather use it.

@johnarnold (Contributor) commented Feb 26, 2018

Does anyone have a minimal repro of the problem?

Did this break recently, or has it been an issue for a while (i.e. since 4.0)?

@liutuo commented Feb 28, 2018

I am having the same issue; a scheduled task duplicates thousands of times. Any suggestions for a quick workaround, such as downgrading some libraries?

@liutuo commented Feb 28, 2018

Just some random trial and error: I am using celery 4.1, django 1.11.7, django-celery-beat 1.1.0, and PostgreSQL as the DB. After changing USE_TZ=True, the scheduler seems to work correctly. Hope this adds some value to the discussion.
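For reference, the Django setting in question (a sketch; the comment on why it helps is an interpretation based on the rest of this thread, not something stated above):

# settings.py
USE_TZ = True   # store timezone-aware datetimes; with USE_TZ=False, naive values
                # read back from the DB can be re-interpreted with the wrong offset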

@thedrow (Member) commented Feb 28, 2018

As far as I'm concerned this is a regression and I can't release without a fix or without reverting the offending PR :(

@dwrpayne commented Mar 2, 2018

I believe this might be a duplicate of #4184, and is fixed in master? It looks similar.

@moltman commented Mar 3, 2018

@dwrpayne I believe gabriellima mentioned on Jan 23 they tried master and it still has the issue. I'll try master again in a test env and see what we find.

@Jarch09 (Author) commented Mar 4, 2018

Anyone arriving at this issue should just know that changing the timezone setting back to enable_utc = True seems to solve the issue (stop the bleeding).

@moltman commented Mar 4, 2018

@dwrpayne thanks for the info, will be testing master... I thought we tried enable_utc = True to no avail, but will follow up.

@thedrow (Member) commented Mar 6, 2018

Master no longer contains the regression described by @gabriellima in #4041 (comment)
Can we try again using master and see if the same problem reproduces?

@auvipy (Member) commented Apr 4, 2018

will reopen if it's not fixed by 4.2rc2+

@auvipy auvipy closed this as completed Apr 4, 2018
@aarondiazr

Thanks for your advice @liutuo; after changing USE_TZ=True the scheduler works again.
I had the problem after upgrading from Django 1.10 to 1.11 and migrating from Python 2 to Python 3; something happened with the new version of Django and celery 4.1.0: tasks duplicated incessantly and the connection to RabbitMQ broke. The way I got my project working again was:

  • Install django-celery-beat==1.1.1
  • Make migrations for the version of django-celery-beat
  • Install celery==4.2.0rc2
  • Change RabbitMQ broker to Redis
  • Add USE_TZ=True to settings.py
  • Update all PeriodicTasks to last_run_at=None (see the sketch below)
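A sketch of that last PeriodicTask reset step (assumes django-celery-beat's models; run it from a Django shell):

# python manage.py shell
from django_celery_beat.models import PeriodicTask, PeriodicTasks

PeriodicTask.objects.update(last_run_at=None)   # clear the stored last-run timestamps
PeriodicTasks.update_changed()                  # bump the change counter so beat reloads the schedule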

Now my project works again, but the following error appears in the log after any PeriodicTask executes:

[2018-04-04 13:38:00,744: ERROR/MainProcess] Database error while sync: DataError('time zone displacement out of range: "2018-04-05T13:36:55.251839+19:00"\nLINE 1: ...xpires" = NULL, "enabled" = true, "last_run_at" = '2018-04-0...\n ^\n',)
Traceback (most recent call last):
File "/home/admin/Ven/cripto/lib/python3.4/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.DataError: time zone displacement out of range: "2018-04-05T13:36:55.251839+19:00"
LINE 1: ...xpires" = NULL, "enabled" = true, "last_run_at" = '2018-04-0...

This doesn't affect the execution, but if someone knows why that happens, I would appreciate your help!
Sorry for my bad English!

@thedrow (Member) commented Apr 5, 2018

@aarondiazr This looks out of the scope of Celery and is related to your deployment.
Please take the question to StackOverflow.

Also, we can't tell what resolved your issue. Does upgrading to Celery 4.2 alone resolve this issue?

@Sovetnikov

I have the same issue with:
celery 4.2.1
django-celery-beat 1.4.0
Django 2.1.3

A periodic task with an interval of every minute is constantly started by beat.

beat_1 | [2018-12-19 09:00:54,575: DEBUG/MainProcess] .tasks.server_tasks.tasks_scheduler sent. id->03755bc5-3cfc-408d-928c-69092e8f09cb
beat_1 | [2018-12-19 09:00:54,586: INFO/MainProcess] Scheduler: Sending due task (tasks.server_tasks.tasks_scheduler)
beat_1 | [2018-12-19 09:00:54,588: DEBUG/MainProcess] .tasks.server_tasks.tasks_scheduler sent. id->3ecb6d84-ec1a-4684-bece-90d10c4f2bd8
beat_1 | [2018-12-19 09:00:54,600: INFO/MainProcess] Scheduler: Sending due task (.tasks.server_tasks.tasks_scheduler)
beat_1 | [2018-12-19 09:00:54,602: DEBUG/MainProcess] .tasks.server_tasks.tasks_scheduler sent. id->5981449a-2626-48f1-90ab-0ff6d17ba5de
beat_1 | [2018-12-19 09:00:54,611: INFO/MainProcess] Scheduler: Sending due task (.tasks.server_tasks.tasks_scheduler)

celery report
software -> celery:4.2.1 (windowlicker) kombu:4.2.2 py:3.7.1
billiard:3.5.0.5 redis:3.0.1
platform -> system:Linux arch:64bit imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:django-cache
result_backend: 'django-cache'
task_serializer: 'pickle'
accept_content: ['json', 'pickle']
result_serializer: 'pickle'
timezone: 'Europe/Moscow'
result_expires: datetime.timedelta(days=14)
task_track_started: True
broker_url: 'redis://redis:6379/1'
worker_log_color: False
worker_max_tasks_per_child: 50
worker_prefetch_multiplier: 1

In the Django settings.py I have only this timezone configuration:

TIME_ZONE = 'Europe/Moscow'

USE_TZ is False by default

The same configuration on Windows works fine.

software -> celery:4.2.1 (windowlicker) kombu:4.2.2 py:3.7.1
billiard:3.5.0.5 redis:3.0.1
platform -> system:Windows arch:32bit, WindowsPE imp:CPython
loader -> celery.loaders.app.AppLoader
settings -> transport:redis results:django-cache
result_backend: 'django-cache'
task_serializer: 'pickle'
accept_content: ['json', 'pickle']
result_serializer: 'pickle'
timezone: 'Europe/Moscow'
result_expires: datetime.timedelta(days=14)
task_track_started: True
broker_url: 'redis://localhost:6379/1'
worker_log_color: True
task_always_eager: True
task_eager_propagates: True
worker_max_tasks_per_child: 1
worker_prefetch_multiplier: 1

@Sovetnikov commented Dec 19, 2018

After debugging, my problem turns out to be due to the different approaches Django and celery take when dealing with naive times.
Django depends on USE_TZ (when True, naive time is UTC; when False, naive time is in the local timezone), while celery always interprets naive time as UTC.
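To illustrate the mismatch (a sketch using pytz; this is not code from either project):

from datetime import datetime
import pytz

naive = datetime(2018, 12, 19, 9, 0)   # a naive last_run_at as stored by Django
moscow = pytz.timezone('Europe/Moscow')

as_utc = pytz.utc.localize(naive)      # how celery reads it: naive means UTC
as_local = moscow.localize(naive)      # how Django reads it with USE_TZ=False: naive means local time

print(as_utc - as_local)               # 3:00:00 -- enough to skew remaining_estimate()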

Apparently this needs to be fixed in django-celery-beat; I have created an issue with some details: celery/django-celery-beat#211
