Fix 'value out of range' RangeDefect in block witness generation. #2072

Merged 4 commits on Mar 19, 2024
Changes from all commits
4 changes: 4 additions & 0 deletions nimbus/db/ledger/accounts_cache.nim
@@ -694,6 +694,10 @@ proc makeMultiKeys*(ac: AccountsCache): MultiKeysRef =
result.add(k, v.codeTouched, multiKeys(v.storageKeys))
result.sort()

+  # reset the witness cache after collecting the witness data
+  # so it is clean before executing the next block
+  ac.witnessCache.clear()
Contributor Author

We need to clear the witnessData just after collecting the keys, not during AccountsCache.persist, because otherwise the data would be cleared before being accessed.
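For illustration, here is a rough sketch of the per-block ordering this implies. The driver loop, executeBlock, and buildWitness are hypothetical placeholders; only makeMultiKeys, persist, and witnessCache come from the actual code:

```nim
# Hypothetical per-block flow, for illustration only; executeBlock and
# buildWitness are placeholder names, not actual Nimbus procs.
for blk in blocks:
  executeBlock(ac, blk)          # fills ac.witnessCache as state is touched
  ac.persist()                   # if persist() cleared the cache, the keys
                                 # would already be gone at the next line
  let keys = ac.makeMultiKeys()  # reads the cache, then clears it (this PR)
  buildWitness(keys)             # witness built from the collected keys
```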


proc accessList*(ac: AccountsCache, address: EthAddress) {.inline.} =
ac.savePoint.accessList.add(address)

2 changes: 1 addition & 1 deletion stateless/multi_keys.nim
@@ -32,7 +32,7 @@ type
MultiKeysRef* = ref MultiKeys

Group* = object
-    first*, last*: int16
+    first*, last*: int
Contributor

So even with the clearing of the cache, would it still be possible to hit int16.high?

Contributor Author

No. In the scenario I tested, we don't hit the limit when processing only a single block or when clearing the cache between blocks. Having said that, I think it's worth leaving this fix in so that we don't need to worry about it in the future. 32767 isn't that large, and I wonder whether some large blocks could still trigger the issue even when processing only a single block.
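For context, a minimal standalone sketch of the failure mode (illustrative only, not the actual Nimbus code path): with the Group indices declared as int16, any block whose witness touches more than int16.high (32767) keys makes the index conversion raise the RangeDefect from the PR title.

```nim
type
  Group = object
    first, last: int16        # the old field type from multi_keys.nim

var keyCount = 40_000         # hypothetical count above int16.high (32767)

var g: Group
g.first = 0
g.last = int16(keyCount - 1)  # raises RangeDefect: value out of range
                              # (with range checks enabled, the default
                              # in debug builds)
```

Widening the fields to int removes that ceiling entirely.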


BranchGroup* = object
mask*: uint