[Feature] Introduce TRANSACTION_SPEND_LIMIT. #2368

Merged
merged 8 commits on Mar 2, 2024
2 changes: 2 additions & 0 deletions console/network/src/lib.rs
@@ -104,6 +104,8 @@ pub trait Network:
const MAX_DEPLOYMENT_LIMIT: u64 = 1 << 20; // 1,048,576 constraints
/// The maximum number of microcredits that can be spent as a fee.
const MAX_FEE: u64 = 1_000_000_000_000_000;
/// The maximum number of microcredits that can be spent on a finalize block.
const TRANSACTION_SPEND_LIMIT: u64 = 100_000_000;
Collaborator

@iamalwaysuncomfortable likely has better intuition on how to back into this number with more data.

Even a high-level understanding of how many hashes, standard operations, etc. it takes to hit this limit would be valuable for developers. It would also be nice to know that we aren't setting the number so conservatively that it blocks program expressivity and usage.
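
For developers who want that intuition on a concrete program, a quick sketch along these lines could report each function's finalize cost against the limit before deploying. It leans on the cost_in_microcredits and Stack APIs this PR wires into verify_deployment; the crate paths and the surrounding setup (an existing process and a parsed program) are assumptions, not a published recipe:

```rust
// Sketch: report each function's finalize cost against TRANSACTION_SPEND_LIMIT before
// attempting a deployment. Assumes a `process` and a parsed `program` are in scope;
// the crate paths mirror this PR but are assumptions.
use console::prelude::*;
use synthesizer_process::{cost_in_microcredits, Process, Stack};
use synthesizer_program::Program;

fn report_finalize_costs<N: Network>(process: &Process<N>, program: &Program<N>) -> Result<()> {
    // Build a stack for the program, as verify_deployment does.
    let stack = Stack::new(process, program)?;
    for (function_name, _) in program.functions() {
        // Compute the finalize cost for this function.
        let cost = cost_in_microcredits(&stack, function_name)?;
        println!("{function_name}: {cost} of {} microcredits allowed", N::TRANSACTION_SPEND_LIMIT);
    }
    Ok(())
}
```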

Contributor

This definitely seems a little overly conservative. Thus far about 200k credit spends (which is 3 orders of magnitude higher than this) seem to be where the network has issues, but it deserves further testing with other opcodes to see where this lies.

Let me run some extra tests with 1k, 10k, and 100k credit spends to see if we can get a solid measure of where the boundary of performance issues lies.

Contributor @vicsn, Feb 22, 2024

Janky analysis incoming.

I think the two main considerations for a limitation on finalize are:

1. prevent preposterous blocktimes.

The performance implications of large finalize blocks are simpler than those of deployments: they run serially, single-threaded, and only in speculate.

  • A heavy hash program costing 1_330 credits and containing 2_000 triple-nested array BHP hashes will take 70 seconds.
  • A big program costing 30_000 credits and containing only simple opcodes will take 150 seconds.

So if the entire expected public monthly allocation of 35_000 credits is spent, we get a block of up to 30 minutes and afterwards we're cool. This doesn't sound too bad.

To be conservative, a maximum runtime of 7.5 seconds for a finalize block limits how much a single actor can spike block times, which brings us to a rounded limit of around 100 credits (or 100M microcredits); the arithmetic is sketched at the end of this comment.

2. do not hurt expressivity too much.

Expressivity of on-chain programs is already extremely limited because of the deployment limit. So we can easily decide to lower the spend limit to:

  • 10M microcredits: 15 times hash.BHP [[[u8; 16u32]; 16u32]; 8u32] or 5000 simple opcodes
  • 1M microcredits: 1.5 times hash.BHP [[[u8; 16u32]; 16u32]; 8u32] or 500 simple opcodes

I vote for 10M microcredits.
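
For reference, the arithmetic behind the 7.5 second figure, as a rough sketch (it assumes finalize runtime scales roughly linearly with finalize cost, using the two measurements above):

```rust
// Back-of-envelope derivation of a spend cap from a target finalize runtime.
// The two throughput figures are the rough measurements quoted above.
fn main() {
    // ~1_330 credits of triple-nested BHP hashes ran in ~70 seconds (worst case).
    let hash_heavy_credits_per_sec = 1_330.0 / 70.0; // ~19 credits/s
    // ~30_000 credits of simple opcodes ran in ~150 seconds.
    let simple_credits_per_sec = 30_000.0 / 150.0; // ~200 credits/s

    // Size the limit against the worst case for a 7.5 second finalize budget.
    let budget_secs = 7.5;
    let cap = hash_heavy_credits_per_sec * budget_secs; // ~142 credits
    println!("worst-case cap ~ {cap:.0} credits, rounded down to 100 credits (100M microcredits)");
    println!("simple-opcode cap ~ {:.0} credits", simple_credits_per_sec * budget_secs); // ~1_500 credits
}
```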

Collaborator @howardwu, Mar 2, 2024

The intention of the finalize block is not to support Ethereum-style program execution, but rather to allow applications that require some degree of peer-to-peer shared resources to settle in a limited fashion.

The purpose of this limit is to serve as a forcing function that moves as much computation as possible off-chain, so that only the requisite state is passed through to the finalize scope on-chain.

In terms of my response to the analysis above:

1. prevent preposterous block times

we get a block of up to 30 minutes and afterwards we're cool. This doesn't sound too bad.

That sounds incredibly bad. Being able to repeatedly execute a program that can slow down block times to 30 minutes is a bug, not a feature. One of the purposes of the current limit is to prevent this exact case.

2. do not hurt expressivity too much

I am fine with 10M microcredits, but I think 100M microcredits in its current form is adequate.

As a side remark: this is a good example of the importance of setting limits now rather than later. It would be much harder to introduce well-intentioned limits after the fact.

Collaborator

A heavy hash program costing 1_330 credits containing 2_000 triple nested array BHP hashes will take 70 seconds.
A big program costing 30_000 credits containing only simple opcodes will take 150 seconds.

@vicsn We need to update snarkVM pricing for these cases. Any execution that takes more than 1 second needs to be priced accordingly to prevent this exact case.
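
To put a number on the gap, a quick sketch using the same two rough measurements (these are the estimates quoted in this thread, not benchmarks):

```rust
// Sketch: compare microcredits charged per second of finalize runtime for the two
// example programs above. Figures are the rough estimates quoted in this thread.
fn main() {
    let hash_heavy = (1_330_000_000u64, 70.0_f64); // (microcredits, seconds)
    let simple_ops = (30_000_000_000u64, 150.0_f64);

    let price_per_sec = |(cost, secs): (u64, f64)| cost as f64 / secs;
    let hash_rate = price_per_sec(hash_heavy);   // ~19M microcredits per second of chain time
    let simple_rate = price_per_sec(simple_ops); // ~200M microcredits per second

    // The hash-heavy program pays roughly 10x less per second it occupies the chain,
    // which is the gap a repricing would need to close.
    println!("underpricing factor ~ {:.1}x", simple_rate / hash_rate);
}
```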


/// The anchor height, defined as the expected number of blocks to reach the coinbase target.
const ANCHOR_HEIGHT: u32 = Self::ANCHOR_TIME as u32 / Self::BLOCK_TIME as u32;
78 changes: 77 additions & 1 deletion ledger/src/tests.rs
@@ -28,7 +28,7 @@ use indexmap::IndexMap;
use ledger_block::{ConfirmedTransaction, Rejected, Transaction};
use ledger_committee::{Committee, MIN_VALIDATOR_STAKE};
use ledger_store::{helpers::memory::ConsensusMemory, ConsensusStore};
use synthesizer::{program::Program, vm::VM};
use synthesizer::{prelude::cost_in_microcredits, program::Program, vm::VM, Stack};

#[test]
fn test_load() {
@@ -1497,3 +1497,79 @@ fn test_max_committee_limit_with_bonds() {
    let committee = ledger.latest_committee().unwrap();
    assert!(!committee.is_committee_member(second_address));
}

#[test]
fn test_deployment_exceeding_max_transaction_spend() {
    let rng = &mut TestRng::default();

    // Initialize the test environment.
    let crate::test_helpers::TestEnv { ledger, private_key, .. } = crate::test_helpers::sample_test_env(rng);

    // Construct two programs, one that is allowed and one that exceeds the maximum transaction spend.
    let mut allowed_program = None;
    let mut exceeding_program = None;

    for i in 0..<CurrentNetwork as Network>::MAX_COMMANDS.ilog2() {
        // Construct the finalize body.
        let finalize_body =
            (0..2u32.pow(i)).map(|i| format!("hash.bhp256 0field into r{i} as field;")).collect::<Vec<_>>().join("\n");

        // Construct the program.
        let program = Program::from_str(&format!(
            r"program test_max_spend_limit_{i}.aleo;
function foo:
async foo into r0;
output r0 as test_max_spend_limit_{i}.aleo/foo.future;

finalize foo:{finalize_body}",
        ))
        .unwrap();

        // Initialize a stack for the program.
        let stack = Stack::<CurrentNetwork>::new(&ledger.vm().process().read(), &program).unwrap();

        // Check the finalize cost.
        let finalize_cost = cost_in_microcredits(&stack, &Identifier::from_str("foo").unwrap()).unwrap();

        // If the finalize cost exceeds the maximum transaction spend, assign the program to the exceeding program and break.
        // Otherwise, assign the program to the allowed program and continue.
        if finalize_cost > <CurrentNetwork as Network>::TRANSACTION_SPEND_LIMIT {
            exceeding_program = Some(program);
            break;
        } else {
            allowed_program = Some(program);
        }
    }

    // Ensure that the allowed and exceeding programs are not None.
    assert!(allowed_program.is_some());
    assert!(exceeding_program.is_some());

    let allowed_program = allowed_program.unwrap();
    let exceeding_program = exceeding_program.unwrap();

    // Deploy the allowed program.
    let deployment = ledger.vm().deploy(&private_key, &allowed_program, None, 0, None, rng).unwrap();

    // Verify the deployment transaction.
    assert!(ledger.vm().check_transaction(&deployment, None, rng).is_ok());

    // Construct the next block.
    let block =
        ledger.prepare_advance_to_next_beacon_block(&private_key, vec![], vec![], vec![deployment], rng).unwrap();

    // Check that the next block is valid.
    ledger.check_next_block(&block, rng).unwrap();

    // Add the block to the ledger.
    ledger.advance_to_next_block(&block).unwrap();

    // Check that the program exists in the VM.
    assert!(ledger.vm().contains_program(allowed_program.id()));

    // Deploy the exceeding program.
    let deployment = ledger.vm().deploy(&private_key, &exceeding_program, None, 0, None, rng).unwrap();

    // Verify the deployment transaction.
    assert!(ledger.vm().check_transaction(&deployment, None, rng).is_err());
}
@@ -12,16 +12,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use crate::{
prelude::{Stack, StackProgramTypes},
VM,
};
use crate::{Process, Stack, StackProgramTypes};

use console::{
prelude::*,
program::{FinalizeType, Identifier, LiteralType, PlaintextType},
};
use ledger_block::{Deployment, Execution};
use ledger_store::ConsensusStorage;
use synthesizer_program::{CastType, Command, Finalize, Instruction, Operand, StackProgram};

/// Returns the *minimum* cost in microcredits to publish the given deployment (total cost, (storage cost, synthesis cost, namespace cost)).
@@ -59,10 +56,7 @@ pub fn deployment_cost<N: Network>(deployment: &Deployment<N>) -> Result<(u64, (
}

/// Returns the *minimum* cost in microcredits to publish the given execution (total cost, (storage cost, finalize cost)).
pub fn execution_cost<N: Network, C: ConsensusStorage<N>>(
vm: &VM<N, C>,
execution: &Execution<N>,
) -> Result<(u64, (u64, u64))> {
pub fn execution_cost<N: Network>(process: &Process<N>, execution: &Execution<N>) -> Result<(u64, (u64, u64))> {
// Compute the storage cost in microcredits.
let storage_cost = execution.size_in_bytes()?;

@@ -73,7 +67,7 @@ pub fn execution_cost<N: Network, C: ConsensusStorage<N>>(
// Retrieve the program ID and function name.
let (program_id, function_name) = (transition.program_id(), transition.function_name());
// Retrieve the finalize cost.
let cost = cost_in_microcredits(vm.process().read().get_stack(program_id)?, function_name)?;
let cost = cost_in_microcredits(process.get_stack(program_id)?, function_name)?;
// Accumulate the finalize cost.
if cost > 0 {
finalize_cost = finalize_cost
3 changes: 3 additions & 0 deletions synthesizer/process/src/lib.rs
@@ -18,6 +18,9 @@
// TODO (howardwu): Update the return type on `execute` after stabilizing the interface.
#![allow(clippy::type_complexity)]

mod cost;
pub use cost::*;

mod stack;
pub use stack::*;

10 changes: 10 additions & 0 deletions synthesizer/process/src/verify_deployment.rs
@@ -33,6 +33,16 @@ impl<N: Network> Process<N> {
        let stack = Stack::new(self, deployment.program())?;
        lap!(timer, "Compute the stack");

        // Ensure that each finalize block does not exceed the `TRANSACTION_SPEND_LIMIT`.
        for (function_name, _) in deployment.program().functions() {
            let finalize_cost = cost_in_microcredits(&stack, function_name)?;
            ensure!(
                finalize_cost <= N::TRANSACTION_SPEND_LIMIT,
                "Finalize block '{function_name}' has a cost '{finalize_cost}' which exceeds the transaction spend limit '{}'",
                N::TRANSACTION_SPEND_LIMIT
            );
        }

        // Ensure the verifying keys are well-formed and the certificates are valid.
        let verification = stack.verify_deployment::<A, R>(deployment, rng);
        lap!(timer, "Verify the deployment");
2 changes: 1 addition & 1 deletion synthesizer/src/vm/execute.rs
@@ -45,7 +45,7 @@ impl<N: Network, C: ConsensusStorage<N>> VM<N, C> {
let fee = match is_fee_required || is_priority_fee_declared {
true => {
// Compute the minimum execution cost.
let (minimum_execution_cost, (_, _)) = execution_cost(self, &execution)?;
let (minimum_execution_cost, (_, _)) = execution_cost(&self.process().read(), &execution)?;
// Compute the execution ID.
let execution_id = execution.to_execution_id()?;
// Authorize the fee.
3 changes: 0 additions & 3 deletions synthesizer/src/vm/helpers/mod.rs
@@ -15,9 +15,6 @@
pub(crate) mod committee;
pub use committee::*;

mod cost;
pub use cost::*;

mod macros;

mod rewards;
2 changes: 1 addition & 1 deletion synthesizer/src/vm/mod.rs
@@ -55,7 +55,7 @@ use ledger_store::{
TransactionStore,
TransitionStore,
};
use synthesizer_process::{Authorization, Process, Trace};
use synthesizer_process::{deployment_cost, execution_cost, Authorization, Process, Trace};
use synthesizer_program::{FinalizeGlobalState, FinalizeOperation, FinalizeStoreTrait, Program};

use aleo_std::prelude::{finish, lap, timer};
2 changes: 1 addition & 1 deletion synthesizer/src/vm/verify.rs
@@ -183,7 +183,7 @@ impl<N: Network, C: ConsensusStorage<N>> VM<N, C> {
// If the fee is required, then check that the base fee amount is satisfied.
if is_fee_required {
// Compute the execution cost.
let (cost, _) = execution_cost(self, execution)?;
let (cost, _) = execution_cost(&self.process().read(), execution)?;
// Ensure the fee is sufficient to cover the cost.
if *fee.base_amount()? < cost {
bail!(