High memory usage #81

Open
mardy opened this issue Nov 23, 2018 · 6 comments

mardy commented Nov 23, 2018

Hi there!
I'm running a performance comparison between several databases using the ioarena benchmarking tool, invoked with the following parameters:

valgrind --tool=massif ioarena -m sync -D unqlite -v 2048 -B set -n 10000

What I found surprising is that memory usage grows linearly, up to 83.6 MB, which seems like a lot compared with the other DB engines run in the same benchmark (upscaledb: 4.3 MB, sqlite: 4.0 MB, rocksdb: 27.8 MB). I wrote the unqlite driver for ioarena myself, so it's possible that the problem is in the driver; however, running valgrind with the leak-check tool doesn't report any leaks (it appears that all the RAM is properly freed when the DB is closed).

Given that the FAQ states that the DB should also be usable on embedded devices, I wonder if such high memory usage could be due to some bug.

symisc (Owner) commented Nov 23, 2018

This is a cache-related issue: nothing reaches the disk until the handle is closed or a transaction is committed. You can purge the cache by manually committing the transaction via unqlite_commit() each time you reach a certain threshold (e.g. every 10K insertions).
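
For reference, a minimal sketch of what this workaround can look like in C. The database file name, value size, and 10K threshold are arbitrary illustrative choices, not part of the ioarena driver:

/* Minimal sketch: bulk insert with a periodic unqlite_commit() so dirty
   pages are flushed to disk instead of piling up in the page cache. */
#include <stdio.h>
#include <string.h>
#include "unqlite.h"

int main(void)
{
    unqlite *pDb;
    char zKey[32], zVal[2048];
    int rc, i;

    rc = unqlite_open(&pDb, "test.db", UNQLITE_OPEN_CREATE);
    if (rc != UNQLITE_OK) return 1;

    memset(zVal, 'x', sizeof(zVal));
    for (i = 0; i < 100000; i++) {
        snprintf(zKey, sizeof(zKey), "key-%d", i);
        rc = unqlite_kv_store(pDb, zKey, -1, zVal, sizeof(zVal));
        if (rc != UNQLITE_OK) break;
        if ((i + 1) % 10000 == 0) {
            /* Flush the current transaction; dirty pages reach the disk
               here rather than only at unqlite_close(). */
            rc = unqlite_commit(pDb);
            if (rc != UNQLITE_OK) break;
        }
    }
    unqlite_close(pDb);
    return rc == UNQLITE_OK ? 0 : 1;
}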

mardy commented Nov 26, 2018

That doesn't seem to work: even when I run

valgrind --tool=massif ./src/ioarena -m sync -D unqlite -v 2048 -B batch -n 10000

which periodically calls unqlite_commit(), the memory usage does not decrease.

@kmvanbrunt

@mardy It sounds like you are seeing the same behavior I reported in issue #70.
I ended up having to call unqlite_close() periodically to work around this.
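
For completeness, a sketch of that close/re-open workaround; the file name and the 10K threshold are arbitrary, and error handling is trimmed:

/* Sketch of the workaround from #70: periodically close and re-open the
   handle so the page cache is released back to the OS. */
#include <stdio.h>
#include <string.h>
#include "unqlite.h"

int main(void)
{
    unqlite *pDb = NULL;
    char zKey[32], zVal[2048];
    int i;

    if (unqlite_open(&pDb, "test.db", UNQLITE_OPEN_CREATE) != UNQLITE_OK) return 1;
    memset(zVal, 'x', sizeof(zVal));

    for (i = 0; i < 100000; i++) {
        snprintf(zKey, sizeof(zKey), "key-%d", i);
        if (unqlite_kv_store(pDb, zKey, -1, zVal, sizeof(zVal)) != UNQLITE_OK) break;
        if ((i + 1) % 10000 == 0) {
            /* Closing writes pending changes to disk and frees the cache;
               re-open to continue inserting with a fresh handle. */
            unqlite_close(pDb);
            pDb = NULL;
            if (unqlite_open(&pDb, "test.db", UNQLITE_OPEN_CREATE) != UNQLITE_OK) break;
        }
    }
    if (pDb) unqlite_close(pDb);
    return 0;
}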

@DisableAsync

I wonder whether insertions made after calling unqlite_commit() will increase memory usage further, or whether it will stay within the 83.6 MB already allocated in the test.

If the previously allocated memory is reused, then it's not a big deal; we just have to call unqlite_commit() more frequently?

symisc (Owner) commented Sep 13, 2019

Yes, you have to understand that UnQLite keeps some of the freed memory in an internal pool before releasing it to the OS. We do this so that successive read/write operations do not request new memory blocks from the underlying OS again, which is very costly.
To answer your question, unqlite_commit() will not increase memory usage; in fact it should definitively release the memory of dirty pages synced to disk and keep the rest in the pool. So yes, you should call unqlite_commit() periodically, say every 5 minutes or every 100K insertions.
unqlite_close(), on the other hand, will free all the allocated memory (you can use Valgrind to confirm this).

@hekaikai

There is a reference-counting error in the page_unref function, which frees the page object.
The original code reads the reference count before decrementing it, so the value tested is never 0 and the page object cannot be freed, which leads to the excessive memory usage:

nRef = pPage->nRef--;

The correct form, which should solve the excessive memory consumption, is:

nRef = --pPage->nRef;
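
If that diagnosis is right, the effect is easy to see in isolation. The snippet below is only a simplified illustration of the post- versus pre-decrement difference; it is not the actual pager code, and the surrounding check is assumed to look roughly like this:

/* Simplified illustration (not the real UnQLite pager code): with the
   post-decrement form, nRef receives the value *before* the decrement,
   so the "last reference dropped" branch is never taken. */
typedef struct Page { int nRef; /* ...page data... */ } Page;

static int page_unref_buggy(Page *pPage)
{
    int nRef = pPage->nRef--;   /* nRef == 1 when the last reference is dropped */
    return nRef == 0;           /* never true: the page is never released */
}

static int page_unref_fixed(Page *pPage)
{
    int nRef = --pPage->nRef;   /* nRef == 0 when the last reference is dropped */
    return nRef == 0;           /* true on the last unref: the page can be freed */
}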

symisc self-assigned this Mar 28, 2020