
Price in time-intensive finalize scopes #2377

Closed
howardwu opened this issue Mar 2, 2024 · 5 comments
howardwu (Member) commented Mar 2, 2024

A heavy hash program costing 1_330 credits and containing 2_000 triple-nested array BHP hashes will take 70 seconds.
A big program costing 30_000 credits and containing only simple opcodes will take 150 seconds.

@vicsn We need to update snarkVM pricing for these cases. Any execution that takes more than 1 second needs to be priced accordingly to prevent this exact case.

Originally posted by @howardwu in #2368 (comment)
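For illustration, a quick back-of-the-envelope check of the two programs reported above shows how far apart their effective credits-per-second rates are (the helper function is hypothetical, not snarkVM code; the figures are taken from the report):

```rust
// Hypothetical helper: effective price in credits per second of finalize runtime.
fn credits_per_second(cost_credits: f64, runtime_secs: f64) -> f64 {
    cost_credits / runtime_secs
}

fn main() {
    // Heavy hash program: 1_330 credits for 70 seconds of runtime.
    let hash_heavy = credits_per_second(1_330.0, 70.0); // 19 credits/s
    // Simple-opcode program: 30_000 credits for 150 seconds of runtime.
    let simple_big = credits_per_second(30_000.0, 150.0); // 200 credits/s
    println!("hash-heavy: {hash_heavy} credits/s, simple: {simple_big} credits/s");
    // The hash-heavy program pays over 10x less per second of validator time,
    // which is the mispricing this issue is about.
    assert!(simple_big / hash_heavy > 10.0);
}
```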

iamalwaysuncomfortable (Contributor) commented

A runtime benchmark table for opcodes + pricing table is in the works.

vicsn (Contributor) commented Mar 6, 2024

EDIT: still syncing with Mike on this.

Hi @pranav and @howard, after syncing with Mike today, we believe we need the following adjustments, let us know if you agree with the direction:
1. Set BLOCK_SPEND_LIMIT to a number of credits equal to the price of 7.5 seconds of finalize execution.
- Suggestion: Mike will continue his analysis on this, but based on our current model it is likely above 100 credits.
2. Limit number of bytes hashed in finalize blocks as defense in depth.
- Suggestion: 4 million bytes, as BHP hashing 4 million bytes takes 2 seconds in Mike's benchmarks.
3. Set a TRANSACTION_SPEND_LIMIT, for UX reasons, so a single transaction can't eat up all the finalize space.
- Suggestion: BLOCK_SPEND_LIMIT / 100.
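A minimal sketch of how these three proposed constants could relate. The names mirror the proposal; the concrete BLOCK_SPEND_LIMIT value is a placeholder (the proposal only says "likely above 100 credits"; 750 is the figure floated later in this thread), and denominating in microcredits (1 credit = 1_000_000 microcredits) is assumed to keep the arithmetic integral:

```rust
// Placeholder value pending Mike's analysis; 750 credits, in microcredits.
const BLOCK_SPEND_LIMIT: u64 = 750_000_000;

// Defense in depth: cap the bytes hashed per finalize block.
// BHP-hashing 4 million bytes takes ~2 seconds in the cited benchmarks.
const MAX_BYTES_HASHED: u64 = 4_000_000;

// UX guard: a single transaction can't eat up all the finalize space.
const TRANSACTION_SPEND_LIMIT: u64 = BLOCK_SPEND_LIMIT / 100;

fn main() {
    assert_eq!(TRANSACTION_SPEND_LIMIT, 7_500_000); // 7.5 credits
    println!("tx spend limit: {TRANSACTION_SPEND_LIMIT} microcredits");
}
```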

iamalwaysuncomfortable (Contributor) commented Mar 7, 2024

Opcodes are currently priced in the snarkVM code such that 1 second of runtime costs 100 credits. This was future-proofing to ensure that people can reasonably afford to use hash functions down the line. However, governance will be able to adjust prices to ensure network usability, so at the outset hash function prices can be set higher, as they are the most abusable opcodes.

Raising the per-byte cost reasonably prices out high-byte inputs like nested arrays, but it still leaves open the ability to spend much less money to extend the block time by using a LOT of hashes on small inputs like field elements. Limiting the number of hashes per execution doesn't solve this, as people could send many executions at the maximum limit and achieve the same result.

What would be sensible, in my opinion:

  1. Increase the base price of all hash opcodes 10x (at least) at network outset, so users trying to flood many hash executions reach the BLOCK_SPEND_LIMIT much faster, before they can slow down block times. It would also discourage people from attempting the attack (as well as excessive hash usage from poor finalize scope design) because of how expensive it is. I'm convinced this is a measure that should be taken.

  2. A TRANSACTION_SPEND_LIMIT or MAX_BYTES_HASHED per execution would implicitly limit single programs from using a lot of hashes, which would be helpful, but it wouldn't limit the flood case.

  3. Potentially put a hash limit per block, or limit the total number of bytes hashed per block. This is the weirder option, but it would definitively stop validators from trying to process an amount of hash-function usage that slows the network.
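Option 3 could be sketched as a per-block counter that refuses any execution pushing the block past a hashed-bytes budget. The type and method names here are illustrative, not snarkVM APIs:

```rust
/// Illustrative per-block accounting: cap the total bytes hashed per block.
struct BlockHashBudget {
    limit: u64,
    used: u64,
}

impl BlockHashBudget {
    fn new(limit: u64) -> Self {
        Self { limit, used: 0 }
    }

    /// Returns true if the execution fits within the remaining budget;
    /// otherwise the execution must be deferred or aborted, and the
    /// counter is left unchanged.
    fn try_charge(&mut self, bytes_hashed: u64) -> bool {
        match self.used.checked_add(bytes_hashed) {
            Some(total) if total <= self.limit => {
                self.used = total;
                true
            }
            _ => false,
        }
    }
}

fn main() {
    // 4 million bytes, matching the per-block cap suggested above.
    let mut budget = BlockHashBudget::new(4_000_000);
    assert!(budget.try_charge(3_000_000)); // fits
    assert!(!budget.try_charge(2_000_000)); // would exceed the cap: refused
    assert!(budget.try_charge(1_000_000)); // a smaller execution still fits
}
```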

d0cd (Contributor) commented Mar 7, 2024

On #2, does the BLOCK_SPEND_LIMIT address the flood case?
The total spent in a block is the sum of the costs of the accepted and rejected executions.
If an execution's cost causes the running total to exceed BLOCK_SPEND_LIMIT, then it's aborted.
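The accounting described above can be sketched as a running total over accepted and rejected executions, aborting any execution that would push the block over the limit. This is a simplified model for discussion, not the actual snarkVM implementation:

```rust
/// Simplified model: the block's spend is the sum of accepted and rejected
/// execution costs; an execution that would push the running total past the
/// block spend limit is aborted instead of being included.
fn partition_executions(costs: &[u64], block_spend_limit: u64) -> (Vec<u64>, Vec<u64>) {
    let mut included = Vec::new(); // accepted or rejected: both count toward the total
    let mut aborted = Vec::new();
    let mut total: u64 = 0;
    for &cost in costs {
        if total + cost <= block_spend_limit {
            total += cost;
            included.push(cost);
        } else {
            aborted.push(cost);
        }
    }
    (included, aborted)
}

fn main() {
    // With a 750-credit limit, the 200-credit execution would push the
    // running total to 800, so it is aborted; the later 100 still fits.
    let (included, aborted) = partition_executions(&[300, 300, 200, 100], 750);
    assert_eq!(included, vec![300, 300, 100]);
    assert_eq!(aborted, vec![200]);
}
```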

iamalwaysuncomfortable (Contributor) commented Mar 7, 2024

@d0cd the conclusion I've come to is that a sufficiently low BLOCK_SPEND_LIMIT, somewhere around 750 as you've suggested, plus a 20x increase of the current hash opcode prices, should be sufficient to prevent DoS attacks via hash opcodes (and also has the benefit of being the simplest option).

I'd recommend we go with this route.
