Export the .release() functions so that user code can explicitly release memory. #17
Comments
Yeah, I've been thinking about this recently too. We need to have a way for user code to explicitly release values.
I've been playing with […], for example: […]
Not sure how this happens; I can try and troubleshoot that.
Ok, I fixed my segfaults by using the […]. I also tried setting the […]. I'll keep looking into it.
Can you create a go test that demonstrates the heap growth?
This test creates a small leak (3.5MB -> 5MB): https://gist.github.com/jeromegn/39ddc09fb694ed9cf5d8293226bd5393

Of course v8's GC is not perfect, and this test is definitely imperfect: it rounds the floats (for the heap) heavily. It still shows an increase, though.

My current thought is that the […]. I would expect golang to GC these created values, but I'm not sure it's doing it. That's only my current theory; maybe my test isn't meaningful.

In the real world, I'm sending about 50K requests over 20 seconds. Each request is "serialized" (header, url, method, no body) and created as a value, then used as an argument for a function from v8. Every 5 seconds, I check the heap; if it's over 32MB (an arbitrary limit), I trigger a GC on v8, which tends to clean up about 20-50% of the heap used, but it has its limits: it won't go lower than 50MB or so. The isolate initially uses 3.5MB of heap.

I'm carefully releasing all values I created, got, or sent to/from v8 when I'm done with them. I've been able to save some heap in my test by releasing indexes and map keys when creating values. Both […]. Each value created from go is using […]
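The heap-watchdog loop described above (poll the isolate's heap every few seconds, force a GC past an arbitrary limit) can be sketched in plain Go. The `Isolate` interface, `fakeIsolate`, and the 32MB constant are illustrative assumptions, not the library's real API:

```go
// Sketch of the heap watchdog: poll heap usage and force a v8 GC when
// it crosses a limit. Isolate and fakeIsolate are hypothetical stand-ins.
package main

import (
	"fmt"
	"time"
)

// Isolate is a stand-in for the real v8 isolate binding.
type Isolate interface {
	UsedHeapBytes() uint64
	ForceGC()
}

const heapLimit = 32 << 20 // 32MB, the arbitrary limit from the comment

// shouldGC reports whether heap usage warrants a forced GC.
func shouldGC(used, limit uint64) bool { return used > limit }

// watch polls every `every` until stop is closed, forcing a GC as needed.
func watch(iso Isolate, every time.Duration, stop <-chan struct{}) {
	t := time.NewTicker(every)
	defer t.Stop()
	for {
		select {
		case <-stop:
			return
		case <-t.C:
			if shouldGC(iso.UsedHeapBytes(), heapLimit) {
				iso.ForceGC()
			}
		}
	}
}

// fakeIsolate lets the sketch run without cgo or v8. Each forced GC
// halves usage, mimicking the "cleans up 20-50%" observation.
type fakeIsolate struct {
	used uint64
	gcs  int
}

func (f *fakeIsolate) UsedHeapBytes() uint64 { return f.used }
func (f *fakeIsolate) ForceGC()              { f.gcs++; f.used /= 2 }

func main() {
	iso := &fakeIsolate{used: 64 << 20}
	if shouldGC(iso.UsedHeapBytes(), heapLimit) {
		iso.ForceGC()
	}
	fmt.Println("GCs triggered:", iso.gcs)
}
```

In a real server the watchdog would run in its own goroutine (`go watch(iso, 5*time.Second, stop)`); the fake shows the bookkeeping only.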
The […]. But that doesn't solve your problems, right? Sounds like we might be leaking some more somewhere else. Before triggering the GC on v8, it's probably worth it to call `runtime.GC()` on the Go side first.
This relieves pressure on the GC since it doesn't have to track these finalizers anymore and we know that there are no more references to them. This is an optimization: if v.release() is not called due to an error, it will eventually be called by the GC. Preliminary tests of an upcoming API to query the v8 memory state indicate that this improves the v8.Create performance. See #17 for more information.
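The pattern in that commit message can be sketched with stdlib Go: a finalizer backstop plus an explicit release that also clears the finalizer. The `Value` type and its `free` method are hypothetical stand-ins for the real cgo handle, not the library's actual types:

```go
// Sketch: finalizer as a safety net, explicit release as the fast path.
package main

import (
	"fmt"
	"runtime"
)

type Value struct {
	ptr      uintptr // stand-in for the C-side persistent handle
	released bool
}

// free stands in for the cgo call that disposes the persistent handle.
func (v *Value) free() { v.ptr = 0; v.released = true }

func newValue(p uintptr) *Value {
	v := &Value{ptr: p}
	// If release() is never called, the GC eventually frees the handle.
	runtime.SetFinalizer(v, (*Value).release)
	return v
}

// release frees the handle now and removes the finalizer, so the GC no
// longer has to track it. Calling it twice is safe.
func (v *Value) release() {
	if v.released {
		return
	}
	runtime.SetFinalizer(v, nil)
	v.free()
}

func main() {
	v := newValue(0xdead)
	v.release()
	fmt.Println("released:", v.released)
}
```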
Well, with the extra […].
FYI, thanks for your test gist -- I've uploaded that to the lowmem_and_leaks branch. I think the HeapStatistics API you added should be added to master, but I'll need to document it and test it properly first.
Correct me if I'm wrong :) Just reading through the code, I think that's what's happening.

Some of it could be avoided by not making all values persistent. Values used temporarily or at a "non-root" level (when calling […]) […]. Another way would be to also return all secondary values created by a […].

My test was a bit crude; in a real-world scenario, even if I aggressively release, the heap grows much more than 0.5MB (without evaluating new code.)

I do hope we can add HeapStatistics somewhere. It's insanely useful. Am I doing it right in my implementation? I didn't know if it was best to wrap it in something, but I read that reading fields from the C struct directly was somehow faster.
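The "also return all secondary values" idea could look like this: the create path records every intermediate persistent value (map keys, nested values) alongside the root so the caller can release them all in one pass. Everything here is a hypothetical sketch, not the repo's actual API:

```go
// Sketch: track every intermediate value made while converting a Go
// map, so the caller can batch-release root and secondaries together.
package main

import "fmt"

type Value struct{ released bool }

func (v *Value) release() { v.released = true }

// createTracked simulates converting a Go map into v8 values, recording
// each intermediate persistent (one key + one value per entry) plus the
// root object.
func createTracked(data map[string]string) (root *Value, all []*Value) {
	root = &Value{}
	all = append(all, root)
	for range data {
		key, val := &Value{}, &Value{} // intermediate persistents
		all = append(all, key, val)
	}
	return root, all
}

func releaseAll(vs []*Value) {
	for _, v := range vs {
		v.release()
	}
}

func main() {
	_, all := createTracked(map[string]string{"url": "/", "method": "GET"})
	releaseAll(all)
	fmt.Println("released", len(all), "values")
}
```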
We don't make a new […]. Also, each intermediate […]. In theory, we should only create a […].

However, there's still a leak somewhere. 78e1d15 shows that the unique strings are being kept around in v8 somehow.

As for HeapStatistics, yes, I think you are doing it exactly right -- it's not much data to just copy the whole thing each time.
Ah, that makes sense, I wasn't sure how that worked exactly.
I can clean it up, write some tests and create a PR.
re: HeapStatistics, sounds great, please send a PR. Grab all of the values out of the HeapStatistics returned by v8.
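Copying the whole statistics struct in one cgo crossing, rather than an accessor call per field, might look like the sketch below. The Go field names mirror a subset of the real `v8::HeapStatistics` accessors; the cgo shim that actually fills the raw array is assumed, and `heapStatsFromC` is a hypothetical name:

```go
// Sketch: one bulk copy of v8::HeapStatistics into a plain Go struct,
// avoiding a cgo crossing per field.
package main

import "fmt"

// HeapStatistics mirrors a subset of v8::HeapStatistics.
type HeapStatistics struct {
	TotalHeapSize           uint64
	TotalHeapSizeExecutable uint64
	TotalPhysicalSize       uint64
	TotalAvailableSize      uint64
	UsedHeapSize            uint64
	HeapSizeLimit           uint64
	MallocedMemory          uint64
	PeakMallocedMemory      uint64
}

// heapStatsFromC stands in for the cgo call: the C side fills a flat
// array once, and Go unpacks it into the struct.
func heapStatsFromC(raw [8]uint64) HeapStatistics {
	return HeapStatistics{
		TotalHeapSize:           raw[0],
		TotalHeapSizeExecutable: raw[1],
		TotalPhysicalSize:       raw[2],
		TotalAvailableSize:      raw[3],
		UsedHeapSize:            raw[4],
		HeapSizeLimit:           raw[5],
		MallocedMemory:          raw[6],
		PeakMallocedMemory:      raw[7],
	}
}

func main() {
	hs := heapStatsFromC([8]uint64{10, 1, 9, 90, 4, 100, 2, 3})
	fmt.Println("used heap:", hs.UsedHeapSize)
}
```

Since the struct is small, returning it by value on every query is cheap, which matches the "just copy the whole thing each time" comment above.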
Ah, I implemented […]. The leak was coming from the go callback magic: I was binding a function on every request, and these don't go away, unfortunately. So instead, I made a generic function that's only bound once per context.

Meanwhile, I also implemented […]. I'll send a PR for that at some point, because why not.
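The "bind one generic function per context" fix could be structured as a Go-side registry behind a single bound dispatcher: JS always calls the same function with a name, and Go routes it, so no per-request bindings accumulate. The types and names below are assumptions for illustration, not the repo's API:

```go
// Sketch: a single dispatcher bound into the v8 context routes by name
// to Go handlers, so per-request callback bindings (which leak) are
// never created.
package main

import (
	"fmt"
	"sync"
)

type Handler func(args ...string) string

type Registry struct {
	mu sync.RWMutex
	m  map[string]Handler
}

func NewRegistry() *Registry { return &Registry{m: make(map[string]Handler)} }

func (r *Registry) Register(name string, h Handler) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.m[name] = h
}

// Dispatch is the one function bound into the context; JS would call it
// as dispatch(name, args...) and it routes to the registered handler.
func (r *Registry) Dispatch(name string, args ...string) (string, error) {
	r.mu.RLock()
	h, ok := r.m[name]
	r.mu.RUnlock()
	if !ok {
		return "", fmt.Errorf("no handler %q", name)
	}
	return h(args...), nil
}

func main() {
	reg := NewRegistry()
	reg.Register("echo", func(args ...string) string { return args[0] })
	out, _ := reg.Dispatch("echo", "hi")
	fmt.Println(out)
}
```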
What's the best way to free memory from various interactions with v8?

I noticed memory seemed to grow unbounded in a high concurrency scenario I'm running locally. I tried setting some `*v8.Value` to `nil` after I was done, hoping they would be finalized. Maybe there's a more explicit way of doing that?

Locally, I added some basic code for `v8::HeapStatistics`. My code uses accessors, which I believe is pretty bad in terms of performance (going through the cgo membrane is "slow"). I'm sure it would be possible to expose it as a transferable C struct; the heap statistics object has a simple data structure.

I was also planning on playing with `LowMemoryNotification` and `MemoryPressureNotification` to try and force v8 to garbage collect.