Slow execution (in general) #2076
How is it fuzzed? Can it be profiled?
Running one of the sample tests:
However, if I take that test and change
What geth, besu, nethermind, and now eels have implemented is a batch-mode, where the client is fed test paths continuously rather than being re-launched for every test. To demonstrate what I mean: if you had that, I could rewrite the for-loop above as:
It was initially implemented to get around virtual-machine bootstrap times (nethermind / besu), but with kzg init, I added it to geth too.
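For reference, one generic way to keep a heavy setup step like a kzg trusted-setup load off the startup path (a sketch, not geth's actual code) is to guard it with `sync.Once`, so it runs at most once per process, and only when a test actually needs it:

```go
// Lazy one-time initialization: pre-Cancun tests never pay for kzg setup,
// and in a batch only the first Cancun test does.
package main

import (
	"fmt"
	"sync"
)

var (
	kzgOnce   sync.Once
	initCount int // how many times the expensive setup actually ran
)

// initKZG simulates an expensive one-time setup (e.g. loading a trusted
// setup). sync.Once guarantees the closure runs at most once per process,
// no matter how many callers race into it.
func initKZG() {
	kzgOnce.Do(func() {
		// ...expensive trusted-setup load would happen here...
		initCount++
	})
}

// verifyBlobStub stands in for a precompile that needs kzg: the first
// caller pays for init, later callers reuse the initialized state.
func verifyBlobStub() int {
	initKZG()
	return initCount
}

func main() {
	fmt.Println("first call, init count:", verifyBlobStub())
	fmt.Println("second call, init count:", verifyBlobStub()) // still 1
}
```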
But there's something more too, I think, because even if I don't use the batch-mode in geth, it's still a lot faster:
Of course, batch-mode is faster still:
Can I have both the Shanghai and Cancun test files? I want to see what kind of regression there is and how to improve it.
It's in here: https://github.com/holiman/goevmlab/tree/master/evms/testdata/cases

```
$ diff 00000936-mixed-1.json 00000936-mixed-1.json.cancun
53c53
<       "Shanghai": [
---
>       "Cancun": [
```
Earlier, nimbus-eth1 was one of the fastest EVMs, but in recent months it has been by far the slowest.
Here are some stats from 90 hours of fuzzing:
All other clients performed over 1M tests; nimbus-eth1 managed only 600K. Even the slowest of the other clients was nearly twice as fast as nimbus-eth1.
I suspect that some regression has been introduced (possibly some kzg initialization?) which adds overhead on startup.
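As a rough sanity check on those numbers (assuming every client ran for the full 90-hour window): 600K tests in 90 h is about 0.54 s per test, versus about 0.32 s for a client that completed 1M, a gap of roughly 0.2 s per test, which is the right order of magnitude for a fixed per-process startup cost such as a kzg setup load.

```go
// Convert test counts over a fuzzing window into average seconds per test,
// to see how large the implied per-test gap is.
package main

import "fmt"

// perTestSeconds returns the average wall-clock seconds per test, given a
// total test count and the length of the fuzzing window in hours.
func perTestSeconds(tests int, hours float64) float64 {
	return hours * 3600 / float64(tests)
}

func main() {
	slow := perTestSeconds(600_000, 90)   // nimbus-eth1: 0.540 s/test
	fast := perTestSeconds(1_000_000, 90) // 1M-test clients: 0.324 s/test
	fmt.Printf("slow: %.3f s/test, fast: %.3f s/test, gap: %.3f s\n",
		slow, fast, slow-fast)
}
```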