We should have some simple strategies to make sure that we're not missing data in our blocks/transactions/traces datasets.
FWIW:
The Goldsky folks do some pretty compute heavy QA, including recomputing merkle trees and whatnot to make sure they don't miss anything. They said they have some lighter-weight scripts they could share regarding comparing block headers.
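The lighter-weight "compare block headers" idea could look something like the sketch below: fetch the same header fields from two independent sources (e.g. our dataset and a node) and diff them by block number. The function name and field names (`number`, `hash`, `parent_hash`) are illustrative assumptions, not anything Goldsky shared.

```python
def header_mismatches(source_a, source_b, fields=("hash", "parent_hash")):
    """Return block numbers present in both sources whose header fields differ.

    source_a / source_b: iterables of dicts, each with a "number" key plus
    the header fields being compared. Hypothetical shape for illustration.
    """
    a = {h["number"]: h for h in source_a}
    b = {h["number"]: h for h in source_b}
    return [
        n
        for n in sorted(a.keys() & b.keys())
        if any(a[n].get(f) != b[n].get(f) for f in fields)
    ]
```

Any block number this returns is worth re-fetching from a trusted node; a hash mismatch usually means one source has a stale or reorged block.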
Would you be doing this in BigQuery / outside of the file layer? If so, the easiest check is to compare distinct block number counts and min/max block numbers between blocks and any other dataset. Another check is to take the transaction count recorded in each block and compare it to the number of transactions carrying that block hash.

For traces, maybe compare distinct transaction hashes between traces and transactions.
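These checks can be sketched in a few lines of Python, assuming in-memory lists of records; the field names (`number`, `hash`, `transaction_count`, `block_hash`, `transaction_hash`) are illustrative, not a confirmed schema. The same logic translates directly to SQL over the real tables.

```python
from collections import Counter


def find_missing_block_numbers(block_numbers):
    """Distinct-count vs min/max check: block numbers absent from the range."""
    seen = set(block_numbers)
    lo, hi = min(seen), max(seen)
    return sorted(set(range(lo, hi + 1)) - seen)


def blocks_with_tx_count_mismatch(blocks, transactions):
    """Hashes of blocks whose declared transaction_count disagrees with the
    number of transactions actually carrying that block hash."""
    actual = Counter(tx["block_hash"] for tx in transactions)
    return [
        b["hash"]
        for b in blocks
        if b["transaction_count"] != actual.get(b["hash"], 0)
    ]


def trace_tx_hash_diff(transactions, traces):
    """Transaction hashes missing from traces, and trace hashes with no
    matching transaction."""
    tx_hashes = {tx["hash"] for tx in transactions}
    trace_hashes = {t["transaction_hash"] for t in traces}
    return tx_hashes - trace_hashes, trace_hashes - tx_hashes
```

An empty result from each function means the corresponding check passes; anything returned is a candidate gap to re-ingest.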