TRACY_ON_DEMAND consumes memory infinitely #789
The deferred queue should, by design, hold only a small number of items: the ones needed to properly set up the initial state of things created in the past that newly arriving messages can reference. What kind of things do you see being put there in excess? You can check this by breaking into it in a debugger.
The idea was to launch a service (without connecting to Tracy) and connect to it only when something happens. There is a chance that the Tracy server will connect to the profiled process after hours (or even days) of work.
Maybe you can try to start Tracy manually using `TRACY_MANUAL_LIFETIME`, then shut down Tracy and relaunch it after a while?
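For reference, the suggestion above could be sketched roughly as follows. This assumes a client built with `TRACY_ENABLE`, `TRACY_DELAYED_INIT`, and `TRACY_MANUAL_LIFETIME` defined, in which case Tracy exposes `tracy::StartupProfiler()` / `tracy::ShutdownProfiler()`; the guards let the sketch also build as a stub without Tracy present, and the helper names are made up for illustration:

```cpp
#include <iostream>

#if defined(TRACY_ENABLE) && defined(TRACY_MANUAL_LIFETIME)
#  include <tracy/Tracy.hpp>
#endif

// Returns true when the real profiler was started, false in the stub build.
bool start_profiling_window()
{
#if defined(TRACY_ENABLE) && defined(TRACY_MANUAL_LIFETIME)
    tracy::StartupProfiler();   // bring the profiler up on demand
    return true;
#else
    std::cout << "profiler started (stub)\n";
    return false;
#endif
}

bool stop_profiling_window()
{
#if defined(TRACY_ENABLE) && defined(TRACY_MANUAL_LIFETIME)
    tracy::ShutdownProfiler();  // tear the profiler down, releasing its buffers
    return true;
#else
    std::cout << "profiler stopped (stub)\n";
    return false;
#endif
}
```

The service would run without the profiler for most of its lifetime and only call `start_profiling_window()` around the time window it actually wants to inspect, so nothing accumulates in between.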
Hi, @wolfpld!
I'm developing an app that runs as a long-lived service.
`TRACY_ON_DEMAND` was a lifesaver for me, but when the process lives for a long time, `tracy::Profiler::m_deferredQueue` grows without bound and consumes too much memory. Do you have any ideas on how to limit this queue? E.g. some smart clean-up that removes the oldest events and guarantees that the remaining ones won't cause a "Discontinuous frame begin/end mismatch" error. Right now it's a private field without any mechanism to access it or clean it up via Tracy.
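For illustration only, the kind of size-capped clean-up being asked about could look like the sketch below. `BoundedDeferredQueue` is a hypothetical stand-in, not Tracy's actual `m_deferredQueue`; it shows the mechanical "evict the oldest" policy, while the hard part the thread discusses — guaranteeing that evicted items (e.g. paired frame begin/end markers) are never referenced later — is left as a comment:

```cpp
#include <cstddef>
#include <deque>
#include <string>

// Hypothetical sketch of a size-capped deferred queue that drops the
// oldest entries once a limit is reached. Event payloads are modeled
// as opaque strings here.
class BoundedDeferredQueue
{
public:
    explicit BoundedDeferredQueue(std::size_t maxItems) : m_maxItems(maxItems) {}

    // Append an event; if the cap is exceeded, evict the oldest one.
    // A real implementation would also have to prove the evicted items
    // can never be referenced by later messages (e.g. a dropped frame
    // begin whose matching end is still queued), or the viewer would
    // report a "Discontinuous frame begin/end mismatch".
    void push(std::string event)
    {
        m_items.push_back(std::move(event));
        if (m_items.size() > m_maxItems) {
            m_items.pop_front();  // evict the oldest event
            ++m_dropped;
        }
    }

    std::size_t size() const { return m_items.size(); }
    std::size_t dropped() const { return m_dropped; }
    const std::string& oldest() const { return m_items.front(); }

private:
    std::size_t m_maxItems;
    std::size_t m_dropped = 0;
    std::deque<std::string> m_items;
};
```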
P.S. Thanks for the cool tool!