Upgrade: kvdb-*, trie-db, memory-db, hash-db, parking_lot #332
Conversation
First test, using the new RocksDB without setting a limit on the WAL, and producing a graph. Will set the max WALs option and make another graph.
Just to muddy the waters a bit... :) I've come to suspect lately that the performance gains seen with this upgrade may not be solely due to
I don't see the environment specs or how the node was executed. Is it in a container? Could you please share the specs? Thanks!
@ilia-falconix, it is a bare-metal machine, but in the end the specs do not affect db insertion/deletion, and they should not influence how the db gets to 750GB. In my opinion, something else is in question.
Here's my experience: I started syncing with the rocksdb-upgraded client at block #11764407; the db was very close to 400GB. It's a fatdb, but otherwise pretty vanilla. It's now fully synced at 436GB sans WAL.
I used 40GB for max_wal. |
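For reference, in the rust-rocksdb bindings the total WAL cap is given in bytes (via `Options::set_max_total_wal_size`; whether that is the exact option wired up in this PR is my assumption), so a 40 GB limit works out to:

```python
# Sketch: converting the 40 GB WAL cap into the byte value RocksDB expects.
# The option name (set_max_total_wal_size) is an assumption about the
# rust-rocksdb API used here, not confirmed from this PR.
GIB = 1024 ** 3
max_wal_bytes = 40 * GIB
print(max_wal_bytes)  # 42949672960
```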
Related? #335
I'm sure there could be a performance increase if the underlying RocksDB version is 6.5.2 (facebook/rocksdb@e3a82bb, which was pushed a year ago). It requires a very recent kernel, has to be enabled at build time (it is disabled by default), and then enabled at runtime through the
I would not be that enthusiastic about increasing the cache size. Increasing the cache size is only profitable if per-thread performance is faster than the disk cache. Larger caches take more CPU time searching the cache. I actually saw increased block-import performance by decreasing the database cache size on an Intel Celeron or aarch64 CPU, despite the cache still leaving free memory.
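The tradeoff described above can be sketched with a toy cost model: a bigger cache raises the hit rate (with diminishing returns) but also raises per-lookup CPU cost, so when the miss path is already cheap (fast storage or OS page cache), effective access time can get worse past some cache size. All numbers below are made up for illustration, not measurements:

```python
import math

def effective_access_us(cache_gb, disk_us=10.0):
    """Toy model, invented numbers: hit rate grows with cache size
    (diminishing returns) while per-lookup CPU cost grows slowly
    with cache size. disk_us is the cost of a miss."""
    hit_rate = 1.0 - 1.0 / (1.0 + cache_gb)          # made-up curve
    lookup_us = 0.5 * math.log2(1.0 + cache_gb * 8)  # made-up CPU cost
    return lookup_us + (1.0 - hit_rate) * disk_us

# With a cheap miss path, the sweet spot is a mid-sized cache:
for gb in (1, 4, 16, 64):
    print(gb, round(effective_access_us(gb), 2))
```

In this toy model the 64 GB cache is slower than the 16 GB one, which is the shape of the effect the comment describes on low-end CPUs.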
I've been running a version with the upgraded kvdb, memory-db, etc. libs for the past couple of months and have to say I'm getting a lot of OOM kills, at least one every few days. Somehow the memory limits are not respected, so this should definitely not yet be considered for inclusion in the main tree. I think it would be interesting to try upgrading only the parity libs while keeping the current rocksdb version. I'm trying to do that right now, but it's a mess, since all these libs are very much entangled with one another.
Disagree. In a couple of months (especially with the planned higher block gas limit), the syncing speed of OpenEthereum full archive nodes with a 6GHz CPU will fall behind blockchain growth even when storing the whole database on tmpfs. It has already been cut in half since last December on my 4.5GHz CPU, going down to 1.20 blocks or 15 Mgas per second. @mdben1247, how much memory do you have? I didn't notice unusual rises above a specific multiple of the memory limit on my side. Did you try setting even lower memory limits?
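As a quick sanity check on those two figures (my arithmetic, not from the comment): 15 Mgas/s at 1.20 blocks/s implies 12.5 Mgas per block, consistent with the mainnet block gas limit in effect in that period, so the two numbers agree with each other:

```python
# Cross-checking the reported sync rates: gas throughput / block
# throughput should give the average gas per block.
mgas_per_s = 15.0
blocks_per_s = 1.20
mgas_per_block = mgas_per_s / blocks_per_s
print(mgas_per_block)  # 12.5
```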
It's not a problem that it uses a lot of RAM. The problem is that the limits set in the configuration were not honored: the client is configured to use 25GB, yet it gets killed when its actual usage exceeds 60GB. Other than that, I agree it would be great to have all the fancy new features of RocksDB. There are also optimizations to be had outside the db, such as caching or a flat database layout (as in turbo-geth), but that's another story and a lot of work to get there. Anyhow, this particular PR does not seem production-ready to me.
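To put a number on that overshoot (my arithmetic, not from the report): a 60GB kill against a 25GB configured limit is a 2.4x factor, which falls inside the 2-3x peak range reported elsewhere in this thread:

```python
# Ratio of observed peak usage at OOM kill to the configured limit.
configured_gb = 25
peak_at_kill_gb = 60
overshoot = peak_at_kill_gb / configured_gb
print(overshoot)  # 2.4
```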
@mdben1247, on the other hand, with the current version the database often ends up corrupted (invalid checksum) just from letting OpenEthereum run. That isn't happening with the new version. And on my side the limit does end up being honoured, just at several times the configured value. Did you try with 1GiB?
This has been running well for about a day, merged with 3.2.6. There's a PR for that merge in rakita's repo.
And anyway, even if it did happen, I'd rather have out-of-memory errors with the new version than databases corrupted by invalid writes with the old version.
@mdben1247 see also #416 |
Are any of you running this on mainnet yet? |
At this point I am likely to not attempt it on mainnet. |
I have been running a slightly modified version, rebased onto 3.2, on mainnet in a heavy-load environment for the past two months. It runs well and fast, but there are the memory issues I reported: peak memory usage is much higher (at least 2-3x) than the limits specified in the config.