[Feature] Introduce TRANSACTION_SPEND_LIMIT #2368
Conversation
@@ -104,6 +104,8 @@ pub trait Network:
 const MAX_DEPLOYMENT_LIMIT: u64 = 1 << 20; // 1,048,576 constraints
 /// The maximum number of microcredits that can be spent as a fee.
 const MAX_FEE: u64 = 1_000_000_000_000_000;
+/// The maximum number of microcredits that can be spent on a finalize block.
+const TRANSACTION_SPEND_LIMIT: u64 = 100_000_000;
@iamalwaysuncomfortable likely has better intuition on how to back into this number with more data.
Even a high-level understanding of how many hashes, standard operations, etc. it takes to hit this limit would be valuable for developers. It would also be good to confirm that we aren't setting the number so conservatively that it blocks program expressivity and usage.
This definitely seems a little overly conservative. Thus far, spends of about 200k credits (three orders of magnitude higher than this limit) seem to be where the network has issues, but it deserves further testing with other opcodes to see where the boundary lies.
Let me run some extra tests that try 1k, 10k, and 100k spends respectively, to see if we can get a solid measure of where the performance issues begin.
Janky analysis incoming.
I think the two main considerations for a limitation on finalize are:
1. Prevent preposterous blocktimes.
The performance implications of large finalize blocks are simpler than those of deployments: they run serially, single-threaded, and only in speculate.
- A heavy hash program costing 1_330 credits containing 2_000 triple nested array BHP hashes will take 70 seconds.
- A big program costing 30_000 credits containing only simple opcodes will take 150 seconds.
So if the entire expected public monthly allocation of 35_000 is spent, we get a block of up to 30 minutes and afterwards we're cool. This doesn't sound too bad.
To be conservative, a maximum runtime of 7.5 seconds for a finalize block limits how much a single actor can cause a blocktime spike, which would bring us to a rounded limit of around 100 credits (or 100M microcredits).
2. Do not hurt expressivity too much.
Expressivity of on-chain programs is already extremely limited because of the deployment limit. So we could easily decide to lower the limit to:
- 10M microcredits: 15 times `hash.BHP [[[u8; 16u32]; 16u32]; 8u32]`, or 5_000 simple opcodes
- 1M microcredits: 1.5 times `hash.BHP [[[u8; 16u32]; 16u32]; 8u32]`, or 500 simple opcodes
I vote for 10M microcredits.
The intention of the finalize block is not to have Ethereum style program execution, but rather to allow for applications that require some degree of peer-to-peer shared resources to be able to settle in limited fashion.
The purpose of this limit is to serve as a forcing function to move as much computation that is possible off-chain so that only requisite state is passed through to the finalize scope on-chain.
In terms of my response to the analysis above:
1. prevent preposterous block times
we get a block of up to 30 minutes and afterwards we're cool. This doesn't sound too bad.
That sounds incredibly bad. Being able to repeatedly execute a program that can slow down block times to 30 minutes is a bug, not a feature. One of the purposes of the current limit is to prevent this exact case.
2. do not hurt expressivity too much
I am fine with 10M microcredits, but I think 100M microcredits in its current form is adequate.
As a side remark: this is a good example of the importance of setting limits now rather than later. It would be much harder to introduce well-intentioned limits after the fact.
A heavy hash program costing 1_330 credits containing 2_000 triple nested array BHP hashes will take 70 seconds.
A big program costing 30_000 credits containing only simple opcodes will take 150 seconds.
@vicsn We need to update snarkVM pricing for these cases. Any execution that takes more than 1 second needs to be priced accordingly to prevent this exact case.
From my perspective, in light of needing this limit sooner rather than later (for safety reasons), I have made the decision to merge this PR in first, and discussed with @d0cd and @raychu86 that we should continue iterating on the exact number of credits with @vicsn and @iamalwaysuncomfortable until we empirically find the appropriate value. I strongly believe the fees will need to be repriced as part of this discovery process. For now, LGTM until a future PR updates this value.
This PR introduces `TRANSACTION_SPEND_LIMIT`, the maximum amount of microcredits a finalize block can cost, and updates `verify_deployment` to require that all finalize blocks in a program are under this limit.