Process unconfirmed_transactions by priority_fee and oldest timestamp #2977
base: testnet3
Conversation
```diff
@@ -65,7 +71,7 @@ pub struct Consensus<N: Network> {
     /// The unconfirmed solutions queue.
     solutions_queue: Arc<Mutex<IndexMap<PuzzleCommitment<N>, ProverSolution<N>>>>,
     /// The unconfirmed transactions queue.
-    transactions_queue: Arc<Mutex<IndexMap<N::TransactionID, Transaction<N>>>>,
+    transactions_queue: Arc<Mutex<BTreeMap<TransactionKey<N>, Transaction<N>>>>,
```
This may not catch duplicate transactions that come in at different times. We may now have multiple instances of the same transaction in the `transactions_queue`.
I see, good catch.
Just for shared context: before adding to the queue, we already check whether the tx was seen recently in an LRU cache: `self.seen_transactions.lock().put(transaction_id, ()).is_some()`. But `seen_transactions` can evict txs before they are popped from the queue, causing duplicate transactions to be added to the queue and passed to workers to verify, wasting compute.
The best I could come up with for now is to add yet another `IndexSet` which tracks the `transaction_id`s in the queue: ac3a95b. Independently, the LRU cache is still useful as a fast check.
Somewhat related:
- We didn't have any limit on the size of `transactions_queue`. Someone could spam e.g. 64_000 unique 1 MB deployments and make the validators go OOM. Solutions don't have this issue because they are constant size. I added a limit of ~1 GB: 1704d35
- I think we also need to evict from the cache to let retransmission succeed: ef61a9d
Co-authored-by: Raymond Chu <14917648+raychu86@users.noreply.github.com>
Signed-off-by: vicsn <victor.s.nicolaas@protonmail.com>
Motivation
Following EIP-1559 and common decency, this PR ensures we process transactions by priority_fee. If the priority_fee is the same, we order by timestamp (in nanoseconds); any remaining ties are broken by transaction_id, which is effectively random.
It may be narrowly individually rational for validators to sort only by priority_fee and ignore the timestamp/id, but "It is recommended that transactions with the same priority fee be sorted by time the transaction was received" (https://eips.ethereum.org/EIPS/eip-1559). Ordering by timestamp improves the performance and security of the whole network, which also matters for validators.
Test Plan
Separated the queue insertion and popping functions so they can be tested.
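A minimal sketch of how a separated pop function might be unit-tested, using a simplified key type; `pop_highest_priority` and the `Queue` alias are illustrative names for this sketch, not the actual snarkOS API:

```rust
use core::cmp::Reverse;
use std::collections::BTreeMap;

// Simplified stand-in: key is (Reverse(priority_fee), timestamp_nanos).
type Queue = BTreeMap<(Reverse<u64>, u128), &'static str>;

// Hypothetical pop: removes and returns the highest-fee, oldest entry,
// which is the smallest key in the BTreeMap.
fn pop_highest_priority(queue: &mut Queue) -> Option<&'static str> {
    let key = *queue.keys().next()?;
    queue.remove(&key)
}

fn main() {
    let mut queue = Queue::new();
    queue.insert((Reverse(1), 5), "low fee");
    queue.insert((Reverse(9), 7), "high fee");
    assert_eq!(pop_highest_priority(&mut queue), Some("high fee"));
    assert_eq!(pop_highest_priority(&mut queue), Some("low fee"));
    assert_eq!(pop_highest_priority(&mut queue), None);
    println!("ok");
}
```

Keeping insertion and popping as standalone functions over the map, rather than inlined in the consensus loop, is what makes this kind of direct assertion possible.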