
[WIP] Cluster mempool implementation #28676

Draft · wants to merge 71 commits into base: master

Conversation

@sdaftuar (Member) commented Oct 18, 2023

This is a draft implementation of the cluster mempool design described in #27677. I'm opening this as a draft PR now to share the branch I'm working on with others, so that we can start to think about in-progress projects (like package relay, package validation, and package rbf) in the context of this design. Also, I can use some help from others for parts of this work, including the interaction between the mempool and the wallet, and also reworking some of our existing test cases to fit a cluster-mempool world.

Note that the design of this implementation is subject to change as I continue to iterate on the code (to make the code more hygienic and robust, in particular). At this point though I think the performance is pretty reasonable and I'm not currently aware of any bugs. There are some microbenchmarks added here, and some improved fuzz tests; it would be great if others ran both of those on their own hardware as well and reported back on any findings.

This branch implements the following observable behavior changes:

  • Maintains a partitioning of the mempool into connected clusters
  • Each cluster is sorted ("linearized") either using an optimal sort, or an ancestor-feerate-based one, depending on the size of the cluster (thanks to @sipa for this logic)
  • Transaction selection for mining is updated to use the cluster linearizations
  • Mempool eviction is updated to use the cluster linearizations
  • The RBF rules are updated to drop the requirement that no new inputs are introduced, and to change the feerate requirement to instead check that the mining score of a replacement transaction exceeds the mining score of the conflicted transactions
  • The CPFP carveout rule is eliminated (it doesn't make sense in a cluster-limited mempool)
  • The ancestor and descendant limits are no longer enforced.
  • New cluster count/cluster vsize limits are now enforced instead.
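The partitioning in the first bullet can be illustrated with a minimal union-find sketch: two transactions belong to the same cluster whenever one spends an output of the other. This is illustrative only; `ClusterIndex` and its members are hypothetical names, not the PR's actual data structures.

```cpp
#include <cassert>
#include <numeric>
#include <vector>

// Minimal disjoint-set over transaction indices: two transactions end up in
// the same cluster whenever Union() is called for a spending relationship.
struct ClusterIndex {
    std::vector<int> parent;
    explicit ClusterIndex(int n) : parent(n)
    {
        std::iota(parent.begin(), parent.end(), 0); // each tx starts as its own cluster
    }
    int Find(int x)
    {
        // Path compression keeps later lookups near O(1).
        return parent[x] == x ? x : parent[x] = Find(parent[x]);
    }
    void Union(int a, int b) { parent[Find(a)] = Find(b); }
};
```

In the real implementation clusters must also be split when transactions are removed, which a plain union-find cannot do; this sketch only shows how connectivity induces the partition.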

Some less observable behavior changes:

  • The cached ancestor and descendant data are dropped from the mempool, along with the multi_index indices that were maintained to sort the mempool by ancestor and descendant feerates. For compatibility (e.g. with wallet behavior or RPCs exposing this information), it is now calculated dynamically instead.
  • The ancestor and descendant walking algorithms are now implemented using epochs (resulting in a significant performance improvement, according to the benchmarks I've looked at)
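The epoch-based walking mentioned above can be sketched roughly as follows (hypothetical names; the real logic lives in `CTxMemPool`). The key idea is that bumping a global counter "resets" every visited mark in O(1), so a traversal needs no per-walk set or map.

```cpp
#include <cstdint>
#include <vector>

// Each entry carries a stamp recording the last traversal that visited it.
struct Entry {
    std::vector<int> children; // indices of child transactions
    uint64_t epoch{0};
};

// Collect all descendants of `root` (including root), visiting each at most once.
std::vector<int> WalkDescendants(std::vector<Entry>& entries, int root, uint64_t& global_epoch)
{
    ++global_epoch; // O(1) "clear" of all visited marks
    std::vector<int> out;
    std::vector<int> stack{root};
    entries[root].epoch = global_epoch;
    while (!stack.empty()) {
        int cur = stack.back();
        stack.pop_back();
        out.push_back(cur);
        for (int c : entries[cur].children) {
            if (entries[c].epoch != global_epoch) { // not yet seen this walk
                entries[c].epoch = global_epoch;
                stack.push_back(c);
            }
        }
    }
    return out;
}
```

On a diamond-shaped graph (one parent, two children, one shared grandchild) the grandchild is visited exactly once, which is where the speedup over set-based de-duplication comes from.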

Still to do:

  • More comparisons between this branch and master on historical data to compare validation speed (accepting loose transactions, processing RBF transactions, validating a block/postprocessing, updating the mempool for a reorg).
  • More historical data analysis to try to evaluate the likely impact of setting the cluster size limits to varying values (to motivate what values we should ultimately pick). [DONE, see this post]
  • Updating wallet code to be cluster-aware (including mini_miner and coin selection)
  • Rework many of our functional tests to be cluster-aware
  • Figure out what package validation and package RBF rules should be in this design
  • Rework the partially_downloaded_block fuzz target to not add duplicate transactions to the mempool (fuzz: don't allow adding duplicate transactions to the mempool #29990).
  • Update RBF logic to ensure that replacements always strictly improve the mempool.
  • Figure out how we want to document our RBF policy (preserve historical references to BIP 125 or previous Bitcoin Core behaviors vs clean slate documentation?)

For discussion/feedback:

  • How significant is it to be dropping the CPFP carveout rule? Does that affect how we will ultimately want to stage new mempool deployment?
  • How well do the proposed RBF rules meet everyone's use cases?
  • What design improvements can we make to the cluster tracking implementation?
  • The ZMQ callbacks that occur when a block is found will happen in a slightly different order, because we now will fully remove all transactions occurring in a block from the mempool before removing any conflicts. Is this a problem?

@DrahtBot (Contributor) commented Oct 18, 2023

The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage

For detailed information about the code coverage, see the test coverage report.

Reviews

See the guideline for information on the review process.
A summary of reviews will appear here.

Conflicts

Reviewers, this pull request conflicts with the following ones:

  • #29998 (functional test: ensure confirmed utxo being sourced for 2nd chain by instagibbs)
  • #29986 (test: Don't rely on incentive incompatible replacement in mempool_accept_v3.py by sdaftuar)
  • #29965 (Lint: support running individual rust linters and improve subtree exclusion by davidgumberg)
  • #29954 (RPC: Return permitbaremultisig and maxdatacarriersize in getmempoolinfo by kristapsk)
  • #29948 (test: add missing comparison of node1's mempool in MempoolPackagesTest by Umiiii)
  • #29906 (Disable util::Result copying and assignment by ryanofsky)
  • #29873 (policy: restrict all TRUC (v3) transactions to 25KvB by glozow)
  • #29700 (kernel, refactor: return error status on all fatal errors by ryanofsky)
  • #29680 (wallet: fix unrelated parent conflict doesn't cause child tx to be marked as conflict by Eunovo)
  • #29641 (scripted-diff: Use LogInfo/LogDebug over LogPrintf/LogPrint by maflcko)
  • #29625 (Several randomness improvements by sipa)
  • #29543 (refactor: Avoid unsigned integer overflow in script/interpreter.cpp by hebasto)
  • #29496 (policy: bump TX_MAX_STANDARD_VERSION to 3 by glozow)
  • #29325 (consensus: Store transaction nVersion as uint32_t by achow101)
  • #29252 (kernel: Remove key module from kernel library by TheCharlatan)
  • #29231 (logging: Update to new logging API by ajtowns)
  • #28984 (Cluster size 2 package rbf by instagibbs)
  • #28843 ([refactor] Remove BlockAssembler m_mempool member by TheCharlatan)
  • #28830 ([refactor] Check CTxMemPool options in ctor by TheCharlatan)
  • #28687 (C++20 std::views::reverse by stickies-v)
  • #28121 (include verbose "debug-message" field in testmempoolaccept response by pinheadmz)
  • #26593 (tracing: Only prepare tracepoint arguments when actually tracing by 0xB10C)
  • #26022 (Add util::ResultPtr class by ryanofsky)
  • #25722 (refactor: Use util::Result class for wallet loading by ryanofsky)
  • #25665 (refactor: Add util::Result failure values, multiple error and warning messages by ryanofsky)

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.

@glozow glozow added the Mempool label Oct 20, 2023
doc/policy/mempool-replacements.md (resolved review thread)
src/rpc/mempool.cpp (outdated; resolved review thread)
for (const CTxIn& txin : mi->GetTx().vin) {
parents_of_conflicts.insert(txin.prevout.hash);
// Exit early if we're going to fail (see below)
if (all_conflicts.size() > MAX_REPLACEMENT_CANDIDATES) {
Member

Note: this is sticking with the same rule #5 limit rather than using the number of affected clusters. It would be more ideal if it were the number of clusters, to allow better usage of adversarial-ish batched CPFPs.

Member Author

There is room to relax this rule some, so if this is important we can do so. I think the requirement is a bound on the number of clusters that would have to be re-sorted in order to accept the new transaction. We can approximate that as the number of clusters that would be non-empty as a result of removing all the conflicting transactions from the mempool, and only process replacements for which that is below some target.

That would be a more complex logic though, so before implementing it I wanted to have some sense of whether we need to. Has the historical 100-transaction-conflict limit been problematic for use cases in the past? Note also that in the new code, we are calculating the number of conflicts exactly (the old code used an approximation, which could be gamed by an adversary).
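The approximation described above (count the clusters that would remain non-empty after removing all conflicting transactions) can be sketched as follows. This is a hypothetical illustration of the proposed relaxation, not code from the branch; cluster IDs and the function name are invented.

```cpp
#include <map>

// Given each affected cluster's current transaction count and the number of
// conflicting transactions that would be removed from it, count how many
// clusters would stay non-empty -- i.e., would need re-linearization.
int CountNonEmptyClustersAfterRemoval(const std::map<int, int>& cluster_sizes,
                                      const std::map<int, int>& conflicts_per_cluster)
{
    int count = 0;
    for (const auto& [cluster, size] : cluster_sizes) {
        auto it = conflicts_per_cluster.find(cluster);
        const int removed = (it == conflicts_per_cluster.end()) ? 0 : it->second;
        if (size - removed > 0) ++count;
    }
    return count;
}
```

A replacement would then be accepted only if this count is below some target, bounding the re-sort work.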

Member

Ah! I wrote a huge response to this, then looked up our previous discussions, and realized I didn't actually read the code: #27677 (comment)

IIUC now, this is only counting direct conflicts, and not the descendants that are booted.

I think that's fine.

Actually no, the existing code comments were just misleading, looks like the issue still exists, see: #27677 (comment)

Member Author

Ah yes, in the new code I'm only counting direct conflicts right now, because every descendant of a direct conflict must be in the same cluster as that conflict. So this is already a relaxation of the existing rule.

Member

I think the requirement is a bound on the number of clusters that would have to be re-sorted in order to accept the new transaction.

As an alternative, we drop the replacement limit to like, 10 or something, and then only count the direct conflicts, not the direct conflicts and all the descendants?

Member Author

I believe I (finally) actually fixed this behavior to count the number of direct conflicts.

}

ws.m_ancestors = *ancestors;
// Calculate in-mempool ancestors
Member

the check two conditionals above:

if (!bypass_limits && ws.m_modified_fees < m_pool.m_min_relay_feerate.GetFee(ws.m_vsize))

This is still needed for the same reason as before: a transaction that is above minrelay could end up in a chunk below minrelay. We could, e.g., immediately evict below-minrelay chunks post-re-linearization, which would allow 0-fee parents, and then maybe relax this.
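The situation described can be made concrete with a small sketch (assumed numbers, hypothetical function name): a child paying above the minimum relay feerate can still land in a chunk whose aggregate feerate is below it once it is chunked with a 0-fee parent.

```cpp
// Does a chunk's aggregate feerate meet a minimum feerate given in sat/kvB?
// Cross-multiplication avoids integer division: fee/size >= min/1000.
bool ChunkMeetsMinRelay(long long chunk_fee, long long chunk_size,
                        long long min_fee_per_kvb)
{
    return chunk_fee * 1000 >= min_fee_per_kvb * chunk_size;
}
```

With a minimum of 1100 sat/kvB, a 100-vB child paying 200 sats passes on its own, but the 200-vB chunk formed with its 0-fee 100-vB parent (still 200 sats total) does not, which is why the per-transaction minrelay check alone is insufficient.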

CTxMemPool& pool = *testing_setup.get()->m_node.mempool;

std::vector<CTransactionRef> transactions;
// Create 1000 clusters of 100 transactions each
Member

numbers in comments are off

shower thought: Should we/can we bound the number of clusters in addition to total memory in TrimToSize? I can't think of a good way to do that that doesn't complicate things quite a bit, and perhaps practical mempool sizes make this moot. Just something to consider in case I missed something obvious.

Member Author

The immediate downside to a cap on number of clusters is that singleton, high-feerate transactions would not be accepted. And I don't think we need to -- the only places where having more clusters makes us slower is in eviction and mining, and for both of those use cases we could improve performance (if we need to) by maintaining the relevant heap data structures (or something equivalent) as chunks are modified, rather than all at once.

For now in this branch I've created these from scratch each time, but if it turns out that performance is meaningfully impacted when the mempool is busy, then I can optimize this further by just using a bit more memory.
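The "build from scratch each time" approach described above can be sketched as follows (hypothetical `Chunk` type and function names): keep one entry per cluster, its current best chunk, in a max-heap ordered by feerate, and pop the highest-feerate chunk first. Note the feerate comparison uses cross-multiplication, matching the style of the comparator quoted below, to avoid division and rounding.

```cpp
#include <algorithm>
#include <vector>

struct Chunk {
    long long fee;
    long long size;
};

// Strict weak ordering by feerate: a < b iff a.fee/a.size < b.fee/b.size.
bool ChunkLess(const Chunk& a, const Chunk& b)
{
    return a.fee * b.size < b.fee * a.size;
}

// Build the heap from scratch and remove/return the best chunk's fee.
long long SelectBestChunkFee(std::vector<Chunk>& heap)
{
    std::make_heap(heap.begin(), heap.end(), ChunkLess);
    std::pop_heap(heap.begin(), heap.end(), ChunkLess); // best chunk moves to back
    const Chunk best = heap.back();
    heap.pop_back();
    return best.fee;
}
```

An incremental variant, as suggested, would instead update the heap as chunks change rather than rebuilding it per call.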

src/txmempool.cpp (outdated; resolved review thread)
return a.first->fee*b.first->size < b.first->fee*a.first->size;
};
// TODO: replace the heap with a priority queue
std::make_heap(heap_chunks.begin(), heap_chunks.end(), cmp);
Member

don't ask why, but I'm getting a significant performance improvement (>10%) by just push_heap-ing everything from scratch, and similarly with priority_queue

@Sjors (Member) commented Nov 14, 2023

It would be useful to add a mempool_backwards_compatibility.py test to illustrate how the new rules interact with older nodes. It could have two modern nodes and one v25 (or v26) node. Some of the tests you deleted in this branch could be moved there. E.g. the test could demonstrate how RBF rule 2 is not enforced when relaying to the new node, but it is when relaying to the v25 node.

Benchmarks on a 2019 MacBook Pro (2,3 GHz 8-Core Intel Core i9), plugged in:

% src/bench/bench_bitcoin -filter=.*Mem.* -min-time=10000

|               ns/op |                op/s |    err% |     total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
|      330,557,188.67 |                3.03 |    1.5% |     10.77 | `ComplexMemPool`
|      451,529,273.50 |                2.21 |    2.8% |     10.01 | `MemPoolAddTransactions`
|            2,847.13 |          351,231.06 |    2.7% |     10.93 | `MemPoolAncestorsDescendants`
|           11,047.90 |           90,514.97 |    2.5% |     10.69 | `MemPoolMiningScoreCheck`
|        4,328,796.04 |              231.01 |    1.1% |     10.99 | `MempoolCheck`
|           36,268.80 |           27,571.91 |    2.9% |     11.17 | `MempoolEviction`
|        9,123,684.25 |              109.60 |    1.4% |     10.74 | `RpcMempool`

Update: added bench for master@c2d4e40e454ba0c7c836a849b6d15db4850079f2:

|               ns/op |                op/s |    err% |     total | benchmark
|--------------------:|--------------------:|--------:|----------:|:----------
|      302,677,055.25 |                3.30 |    3.5% |     10.76 | `ComplexMemPool`
|      100,167,478.00 |                9.98 |    2.5% |     11.08 | `MempoolCheck`
|           43,759.84 |           22,852.00 |    4.1% |     11.42 | `MempoolEviction`
|       10,235,913.25 |               97.70 |    3.5% |     10.66 | `RpcMempool`

}

BENCHMARK(MemPoolAncestorsDescendants, benchmark::PriorityLevel::HIGH);
BENCHMARK(MemPoolAddTransactions, benchmark::PriorityLevel::HIGH);
Member

dd6684a: While you're touching this, can you rename MempoolCheck to MemPoolCheck, MempoolEviction to MemPoolEviction and ComplexMemPool to MempoolComplex? That makes -filter=MemPool.* work

As a workaround, -filter=.*Mem.* does work.

@@ -574,6 +574,8 @@ void SetupServerArgs(ArgsManager& argsman)
argsman.AddArg("-limitancestorsize=<n>", strprintf("Do not accept transactions whose size with all in-mempool ancestors exceeds <n> kilobytes (default: %u)", DEFAULT_ANCESTOR_SIZE_LIMIT_KVB), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
argsman.AddArg("-limitdescendantcount=<n>", strprintf("Do not accept transactions if any ancestor would have <n> or more in-mempool descendants (default: %u)", DEFAULT_DESCENDANT_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
argsman.AddArg("-limitdescendantsize=<n>", strprintf("Do not accept transactions if any ancestor would have more than <n> kilobytes of in-mempool descendants (default: %u).", DEFAULT_DESCENDANT_SIZE_LIMIT_KVB), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
argsman.AddArg("-limitclustercount=<n>", strprintf("Do not accept transactions connected to <n> or more existing in-mempool transactions (default: %u)", DEFAULT_CLUSTER_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
@Sjors (Member) Nov 14, 2023

I made an attempt at dropping -limitdescendantsize and friends: https://github.com/Sjors/bitcoin/commits/2022/11/cluster-mempool

I (naively) replaced ancestor and descendant limits in coin selection with the new cluster limit. At least the tests pass*.

When we drop these settings anyone who uses them will get an error when starting the node. That's probably a good thing, since they should read about this change.

* = well, wallet_basic.py fails with:

Internal bug detected: Shared UTXOs among selection results
wallet/coinselection.h:340 (InsertInputs)

@Sjors (Member) left a comment

Couple more comments / questions. To be continued...

{
m_chunks[entry.m_loc.first].txs.erase(entry.m_loc.second);

// Chunk (or cluster) may now be empty, but this will get cleaned up
Member

54f39ca: what if the deleted transaction makes it so there are now two clusters? This is also safe to ignore?

Member Author

I wouldn't say it's safe to ignore, but the idea is that we often want to be able to batch deletions, and then clean things up in one pass. So the call sites should all be dealing with this issue and ensuring that we always clean up at some point.

(This is definitely an area where I expect that we'll be re-engineering all this logic and trying to come up with a better abstraction layer so that this is more robust and easier to think about!)


for (auto txentry : txs) {
m_chunks.emplace_back(txentry.get().GetModifiedFee(), txentry.get().GetTxSize());
m_chunks.back().txs.emplace_back(txentry);
Member

54f39ca: So you're creating a chunk for each new transaction and then erasing it if the fee rate goes down. Why not the other way around?
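The loop being discussed can be sketched as a standard chunking pass over a linearized cluster (assumed logic mirroring the quoted code, with a hypothetical `Chunk` type): give each transaction its own chunk, then merge the new chunk into its predecessor whenever it has a strictly higher feerate, so the resulting chunk feerates are monotonically non-increasing.

```cpp
#include <vector>

struct Chunk {
    long long fee;
    long long size;
};

// Chunk a linearized sequence of (fee, size) transactions.
std::vector<Chunk> BuildChunks(const std::vector<Chunk>& txs)
{
    std::vector<Chunk> chunks;
    for (const Chunk& tx : txs) {
        chunks.push_back(tx); // start as a singleton chunk
        // Merge backwards while the trailing chunk's feerate beats its predecessor's.
        while (chunks.size() > 1) {
            const Chunk& prev = chunks[chunks.size() - 2];
            const Chunk& last = chunks.back();
            if (last.fee * prev.size <= prev.fee * last.size) break; // feerates in order
            chunks[chunks.size() - 2].fee += last.fee;
            chunks[chunks.size() - 2].size += last.size;
            chunks.pop_back();
        }
    }
    return chunks;
}
```

For example, a 0-fee parent followed by a high-fee child collapses into a single chunk (the child pays for the parent), while a lower-feerate third transaction stays in its own chunk.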

src/txmempool.cpp (outdated; resolved review thread)
sdaftuar and others added 29 commits May 8, 2024 16:07
The only place we still use the older interface is in policy/rbf.cpp, where
it's helpful to incrementally calculate descendants to avoid calculating too
many at once (or cluttering the CalculateDescendants interface with a
calculation limit).
TO DO: Rewrite unit tests for PV3C to not lie about mempool parents, so that we
can push down the parent calculation into v3_policy from validation.
Add benchmarks for:

  - mempool update time when blocks are found
  - adding a transaction
  - performing the mempool's RBF calculation
  - calculating mempool ancestors/descendants
Including test coverage for mempool eviction and expiry
This is in preparation for eliminating the block template building happening in
mini_miner, in favor of directly using the linearizations done in the mempool.

9 participants