Dynamically calculate performables limit based on perform data size #305
Conversation
```diff
@@ -163,6 +163,21 @@ type CheckResult struct {
 	RetryInterval time.Duration
 }
 
+// Size returns the size of the check result in bytes
+func (cr *CheckResult) Size() int {
```
This is estimating the size of the EVM-encoded result, right? We have tried this a bit before and the function becomes full of magic numbers. Also, this becomes brittle when changing the encoding of the CheckResult, which is in the core code.
Note that every number has a corresponding comment specifying which field it represents.
The most naive option is to encode the object and count the bytes, but that would be expensive at runtime (encoding/decoding currently takes a double-digit percentage of CPU time).
The CheckResult currently still resides here; once it is moved to cl-common I'll adjust.
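For illustration only, a field-summing Size might look like the sketch below; the stub fields, byte costs, and comments are assumptions rather than the actual implementation:

```go
package types

import "time"

// CheckResult is a stub of the real struct, trimmed to a few
// representative fields for illustration.
type CheckResult struct {
	UpkeepID      [32]byte
	Eligible      bool
	PerformData   []byte
	RetryInterval time.Duration
}

// Size estimates the encoded size in bytes by summing per-field
// costs; each constant is annotated with the field it accounts for,
// which avoids a full encode just to measure the length.
func (cr *CheckResult) Size() int {
	return 32 + // UpkeepID
		1 + // Eligible (bool)
		len(cr.PerformData) + // PerformData payload
		8 // RetryInterval (int64 nanoseconds)
}
```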
+1, I don't quite like this hardcoded size counting, as it has bitten us in the past.
- Any time the struct is changed, this creates an implicit dependency that is very easy to forget to update
- If the encoding type is changed, then this needs to change as well
- This will likely need some magic numbers to match the exact JSON encoding size, rather than just adding up the byte sizes of the fields

What if we estimated the size in the check pipeline instead of in the plugin? That way, size just becomes a number, and the size estimation logic sits closer to the encoding functions. It would make the process less brittle.
This change refers to Observation and Outcome, which are encoded within the plugin since they are used by OCR (no EVM encoding).
Ahh ok, that makes sense. This is for the byte limits of Observation and Outcome.
```go
// TODO: remove this in next version, it's a temporary fix for
// supporting old nodes that will limit the number of results rather
// than the size of the outcome
```
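A hedged sketch of what this temporary compatibility cap could look like; capForOldNodes and maxOldNodeResults are assumed names, and the value only echoes the ~50 performables per round mentioned in the discussion below:

```go
package compat

// CheckResult is a minimal stub for illustration (see the fuller
// stub in the Size sketch above).
type CheckResult struct{ PerformData []byte }

// maxOldNodeResults is an assumed name and value for the legacy
// per-round result count that old nodes enforce.
const maxOldNodeResults = 50

// capForOldNodes applies the legacy count limit on top of the new
// byte-size limit so that old nodes still accept the outcome.
func capForOldNodes(results []CheckResult) []CheckResult {
	if len(results) > maxOldNodeResults {
		return results[:maxOldNodeResults]
	}
	return results
}
```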
Need to think more deeply, but I guess this will be a breaking change even if we remove it in the next version?
I think the ideal way to make this non-breaking is to do it behind a feature flag that is then turned on at once for all nodes (via config), but for this specific change we may be OK taking the breaking risk, since we don't expect more than 50 performables in a round at current traffic levels.
> Need to think more deeply, but I guess this will be a breaking change even if we remove it in the next version?

Once all ops update to this version, where nodes produce a "limited" Outcome/Observation but don't validate that limit, they can update to the next version, where we'll start producing an "unlimited" Outcome/Observation, since the validation will already accept those.

> but for this specific change we may be OK taking the breaking risk, since we don't expect more than 50 performables in a round at current traffic levels

OK, makes sense.
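To make the two-step rollout above concrete, here is a minimal sketch of validation that bounds only the encoded byte size and never the result count; the names and the budget value are assumptions:

```go
package validation

import "fmt"

// maxObservationBytes is an assumed budget, for illustration only.
const maxObservationBytes = 1_000_000

// validateObservation accepts any observation within the byte
// budget; it deliberately does not cap the number of results, so a
// future version that packs more results still passes validation.
func validateObservation(encoded []byte) error {
	if len(encoded) > maxObservationBytes {
		return fmt.Errorf("observation is %d bytes, over the %d byte limit",
			len(encoded), maxObservationBytes)
	}
	return nil
}
```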
> they can update to the next version, where we'll start producing an "unlimited" Outcome/Observation, since the validation will already accept those

Yes, the validation would pass, but outcome consensus would still not happen if some NOPs produce a limited outcome and others produce more.
AUTO-7597
Description
The idea is to dynamically calculate the maximal number of performables in an observation/outcome based on the performData size and what fits within the observation/outcome size limits, rather than using a fixed, hardcoded limit on the number of upkeeps.
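As a rough illustration of the idea (hypothetical names and stub types, not the actual change), results are packed until the byte budget is exhausted instead of stopping at a fixed count:

```go
package plugin

// CheckResult is a stub for illustration; the real struct has more fields.
type CheckResult struct {
	PerformData []byte
}

// Size is an assumed per-result byte estimate: the payload plus a
// fixed per-result overhead.
func (cr CheckResult) Size() int {
	return len(cr.PerformData) + 32
}

// limitPerformables greedily packs results into the outcome until
// the encoded-size budget is spent, rather than cutting off at a
// hardcoded result count.
func limitPerformables(results []CheckResult, maxBytes int) []CheckResult {
	used := 0
	out := make([]CheckResult, 0, len(results))
	for _, r := range results {
		if used+r.Size() > maxBytes {
			break
		}
		used += r.Size()
		out = append(out, r)
	}
	return out
}
```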
Changes