Memory leak in Celery Beat 5.3.4 process #8645
-
Can you use memray and track down the root of the leaking code?
-
@auvipy I had a very hard time getting Memray to both start and attach to the process, given Memray's command-line syntax and all the arguments we pass to our beat process. After much tinkering, though, I was able to attach to an already-running beat process. As the screenshot shows, the total heap is only about 4MB, since that is what it grew by in the couple of hours after the process started. Actual memory allocation on the host was 87MB resident throughout the entire trace period.
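For anyone else who fights with the command-line attach, memray also exposes a Python `Tracker` API, so tracking can be enabled from the moment beat starts instead of attaching after the fact. A minimal sketch, assuming memray is installed; the app name and output filename are placeholders, not from this report:

```python
# Sketch: run beat with memray tracking enabled from startup, instead of
# attaching to a live process. "proj" and "beat_profile.bin" are placeholders.
import memray
from celery import Celery

app = Celery("proj")

if __name__ == "__main__":
    # Every allocation made while beat runs is written to the capture file,
    # which can be analyzed after the process exits.
    with memray.Tracker("beat_profile.bin"):
        # Starts beat programmatically; the equivalent of `celery -A proj beat`.
        app.Beat().run()
```

The resulting capture file can then be rendered with, for example, `memray flamegraph beat_profile.bin`.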
-
I let that same live trace run overnight. You can see the leak progression over time. It's slow, but eventually the host hits 100% memory usage after many days.
-
@auvipy I let that trace run all weekend. It's slow, but it keeps climbing. Any thoughts on how to mitigate this?
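Not a fix, but one generic stopgap while the leak is tracked down is to supervise beat and restart it before the host runs out of memory. A rough sketch, assuming psutil is installed; the command, threshold, and poll interval are all illustrative:

```python
# Stopgap sketch, not a fix: restart beat whenever its resident memory
# crosses a threshold. The command, limit, and interval are illustrative.
import subprocess
import time

import psutil

BEAT_CMD = ["celery", "-A", "proj", "beat",
            "--scheduler", "django_celery_beat.schedulers:DatabaseScheduler"]
RSS_LIMIT = 500 * 1024 * 1024  # restart above ~500 MB resident
POLL_SECONDS = 60

while True:
    proc = subprocess.Popen(BEAT_CMD)
    ps = psutil.Process(proc.pid)
    while proc.poll() is None:  # beat is still running
        time.sleep(POLL_SECONDS)
        try:
            if ps.memory_info().rss > RSS_LIMIT:
                proc.terminate()       # ask beat to shut down cleanly
                proc.wait(timeout=30)
                break
        except psutil.NoSuchProcess:
            break  # beat exited between the poll and the memory check
    # outer loop starts a fresh beat process
```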
-
As others seem to be reporting here, there is a pretty obvious memory leak in Beat 5.3.4. I am running on Python 3.8.16 and Django 4.2.6. We see this across a number of our apps, and also when running in a local dev environment. It's very easy to recreate: just start Django via gunicorn and then start celery beat using the django-celery-beat database scheduler. If you watch the memory allocated to the beat process, it just sits there and keeps climbing. I have only one periodic task in Django, which literally just returns "OK" every minute, and even without it, beat's memory usage climbs constantly.
We also have a celery 4.4.2 environment where this isn't an issue.
I am running memray locally now to capture a trace. But is that really needed if the leak is this easy to reproduce?
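For anyone who wants to reproduce this, here is a minimal sketch of the setup described above, assuming a standard Django project with django-celery-beat installed; the task and schedule names are illustrative, not from the original report:

```python
# Minimal repro sketch for a one-minute no-op periodic task.

# tasks.py -- a do-nothing task matching the "OK" task described above
from celery import shared_task

@shared_task
def heartbeat():
    return "OK"

# One-time setup (e.g. from a Django shell): register the task with the
# database scheduler so beat picks it up every minute.
from django_celery_beat.models import IntervalSchedule, PeriodicTask

schedule, _ = IntervalSchedule.objects.get_or_create(
    every=1, period=IntervalSchedule.MINUTES
)
PeriodicTask.objects.get_or_create(
    name="heartbeat-every-minute",
    task="tasks.heartbeat",
    interval=schedule,
)
```

Beat is then started against the database scheduler with something like `celery -A proj beat --scheduler django_celery_beat.schedulers:DatabaseScheduler` (with `proj` standing in for the real app), and the beat process's resident memory can be watched from there.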