trackEvent's state.history grows forever, takes up memory #358

Open
amedee-os opened this issue Feb 2, 2023 · 3 comments

Hi, thanks for analyticsjs!

When using it in a long-running server that calls trackEvent after every request, the memory usage of the process grows forever. The screenshot below shows the process's memory use climbing until it is OOM-killed. (On ~1/31 we deployed the fix described below.)

[Screenshot, 2023-02-02: process memory usage chart]

We found that the state.history variable of the trackReducer keeps a log of every single trackEvent sent. Adding this line after our trackEvent() call made the memory issue go away:

  analytics.getState("track").history = [];
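
For context, here's a minimal sketch of the pattern that triggers the growth plus the workaround above, assuming an Express-style server and the public analytics.track() API (the app name, route, and event name are made up):

```js
import Analytics from 'analytics'
import express from 'express'

const analytics = Analytics({ app: 'my-server', plugins: [] })
const app = express()

app.get('/some-route', async (req, res) => {
  // Every call appends an entry to the track reducer's state.history
  await analytics.track('requestHandled', { path: req.path })

  // Workaround from this report: clear the history after each event
  // so the array never accumulates in a long-running process
  analytics.getState('track').history = []

  res.sendStatus(200)
})

app.listen(3000)
```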

Reading the code, I can't find where state.history is actually read, or whether it's meant for external code like plugins to consume.
In our use case we never read from it, so disabling it would be best, or capping it at a small N if it's needed for retries.

I noticed the trackEvent history code has the comment //Todo prevent LARGE arrays. Could we add some kind of config to set a max history length, or to turn it off entirely? Or hard code a limit like 10k?
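
For illustration, here is a rough sketch of what a cap could look like in a history-style reducer. This is not the actual analyticsjs source; maxHistory and the action shape are assumptions.

```js
// Hypothetical capped-history reducer (illustrative only).
// maxHistory would come from config; 0 could mean "keep no history".
const maxHistory = 10000

function trackReducer(state = { history: [] }, action) {
  if (action.type === 'track') {
    const history = maxHistory > 0
      ? state.history.concat(action).slice(-maxHistory) // keep only the newest N entries
      : []
    return { ...state, history }
  }
  return state
}
```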

This might also help memory usage for browser users of analyticsjs with long-running sessions.

Would you take a PR to fix this? Which approach would you prefer?

@matjam commented Mar 31, 2023

Nice catch. We're seeing this also.

@matjam commented Mar 31, 2023

Yeah, we shipped that workaround, but it's not working for us; I'm still seeing this. I guess I'm going to need to dig a bit deeper.

[memory usage chart]

You can see it's already ticking up at the end. Before adding the library, this service was stable memory-wise.

[memory usage chart]

The drop on 3/29 is from us doubling the container limits.

@DavidWells (Owner)
Are your instances running server-side in a long-running process (like Express or something)?

We might be able to just disable this functionality on the server side. It's meant for clients, which usually have short & independent sessions.
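
A rough sketch of that idea, assuming a typeof window check to detect the server; the reducer shape here is illustrative, not the library's current code:

```js
// Illustrative only: don't accumulate history outside the browser.
const isBrowser = typeof window !== 'undefined'

function trackReducer(state = { history: [] }, action) {
  if (action.type === 'track') {
    return {
      ...state,
      // Per this thread's use case, server processes never read history back,
      // so skip the append when not running in a browser
      history: isBrowser ? state.history.concat(action) : state.history
    }
  }
  return state
}
```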
