[WIP] Cluster mempool implementation #28676
base: master
Conversation
The following sections might be updated with supplementary metadata relevant to reviewers and maintainers.

Code Coverage: For detailed information about the code coverage, see the test coverage report.
Reviews: See the guideline for information on the review process.
Conflicts: Reviewers, this pull request conflicts with the following ones:

If you consider this pull request important, please also help to review the conflicting pull requests. Ideally, start with the one that should be merged first.
src/policy/rbf.cpp
Outdated
```cpp
for (const CTxIn& txin : mi->GetTx().vin) {
    parents_of_conflicts.insert(txin.prevout.hash);
    // Exit early if we're going to fail (see below)
    if (all_conflicts.size() > MAX_REPLACEMENT_CANDIDATES) {
```
Note: this is sticking with the same rule #5 bound instead of the number of affected clusters. It would be more ideal if it were the number of clusters, to allow better usage of adversarial-ish batched CPFPs.
There is room to relax this rule some, so if this is important we can do so. I think the requirement is a bound on the number of clusters that would have to be re-sorted in order to accept the new transaction. We can approximate that as the number of clusters that would be non-empty as a result of removing all the conflicting transactions from the mempool, and only process replacements for which that is below some target.
That would be a more complex logic though, so before implementing it I wanted to have some sense of whether we need to. Has the historical 100-transaction-conflict limit been problematic for use cases in the past? Note also that in the new code, we are calculating the number of conflicts exactly (the old code used an approximation, which could be gamed by an adversary).
Ah! I wrote a huge response to this, then looked up our previous discussions, and realized I didn't actually read the code: #27677 (comment)
IIUC now, this is only counting direct conflicts, and not the descendants that are booted.
I think that's fine.
Actually no, the existing code comments were just misleading, looks like the issue still exists, see: #27677 (comment)
Ah yes, in the new code I'm only counting direct conflicts right now, because every descendant of a direct conflict must be in the same cluster as that conflict. So this is already a relaxation of the existing rule.
> I think the requirement is a bound on the number of clusters that would have to be re-sorted in order to accept the new transaction.
As an alternative, could we drop the replacement limit to something like 10, and then only count the direct conflicts, not the direct conflicts plus all their descendants?
I believe I (finally) actually fixed this behavior to count the number of direct conflicts.
src/validation.cpp
Outdated
```cpp
}

ws.m_ancestors = *ancestors;
// Calculate in-mempool ancestors
```
Re: the check two conditionals above:

```cpp
if (!bypass_limits && ws.m_modified_fees < m_pool.m_min_relay_feerate.GetFee(ws.m_vsize))
```

This is still needed for the same reason as before: a transaction can be above minrelay on its own, but end up in a chunk below minrelay. We could immediately evict below-minrelay chunks after re-linearization, for example, which would allow 0-fee parents; then we could maybe relax this.
```cpp
CTxMemPool& pool = *testing_setup.get()->m_node.mempool;

std::vector<CTransactionRef> transactions;
// Create 1000 clusters of 100 transactions each
```
numbers in comments are off
Shower thought: should we/can we bound the number of clusters, in addition to total memory, in TrimToSize? I can't think of a good way to do that without complicating things quite a bit, and perhaps practical mempool sizes make this moot. Just something to consider in case I missed something obvious.
The immediate downside to a cap on number of clusters is that singleton, high-feerate transactions would not be accepted. And I don't think we need to -- the only places where having more clusters makes us slower is in eviction and mining, and for both of those use cases we could improve performance (if we need to) by maintaining the relevant heap data structures (or something equivalent) as chunks are modified, rather than all at once.
For now in this branch I've created these from scratch each time, but if it turns out that performance is meaningfully impacted when the mempool is busy, then I can optimize this further by just using a bit more memory.
src/node/miner.cpp
Outdated
```cpp
return a.first->fee*b.first->size < b.first->fee*a.first->size;
};
// TODO: replace the heap with a priority queue
std::make_heap(heap_chunks.begin(), heap_chunks.end(), cmp);
```
Don't ask why, but I'm getting a significant performance improvement (>10%) just by `push_heap`ing everything from scratch, and similarly with `priority_queue`.

Benchmarks on a 2019 MacBook Pro (2.3 GHz 8-Core Intel Core i9), plugged in:

Update: added bench for master@c2d4e40e454ba0c7c836a849b6d15db4850079f2:
```cpp
}

BENCHMARK(MemPoolAncestorsDescendants, benchmark::PriorityLevel::HIGH);
BENCHMARK(MemPoolAddTransactions, benchmark::PriorityLevel::HIGH);
```
dd6684a: While you're touching this, can you rename `MempoolCheck` to `MemPoolCheck`, `MempoolEviction` to `MemPoolEviction`, and `ComplexMemPool` to `MempoolComplex`? That makes `-filter=MemPool.*` work.

As a workaround, `-filter=.*Mem.*` does work.
```cpp
@@ -574,6 +574,8 @@ void SetupServerArgs(ArgsManager& argsman)
    argsman.AddArg("-limitancestorsize=<n>", strprintf("Do not accept transactions whose size with all in-mempool ancestors exceeds <n> kilobytes (default: %u)", DEFAULT_ANCESTOR_SIZE_LIMIT_KVB), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
    argsman.AddArg("-limitdescendantcount=<n>", strprintf("Do not accept transactions if any ancestor would have <n> or more in-mempool descendants (default: %u)", DEFAULT_DESCENDANT_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
    argsman.AddArg("-limitdescendantsize=<n>", strprintf("Do not accept transactions if any ancestor would have more than <n> kilobytes of in-mempool descendants (default: %u).", DEFAULT_DESCENDANT_SIZE_LIMIT_KVB), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
    argsman.AddArg("-limitclustercount=<n>", strprintf("Do not accept transactions connected to <n> or more existing in-mempool transactions (default: %u)", DEFAULT_CLUSTER_LIMIT), ArgsManager::ALLOW_ANY | ArgsManager::DEBUG_ONLY, OptionsCategory::DEBUG_TEST);
```
I made an attempt at dropping `-limitdescendantsize` and friends: https://github.com/Sjors/bitcoin/commits/2022/11/cluster-mempool

I (naively) replaced the ancestor and descendant limits in coin selection with the new cluster limit. At least the tests pass*.

When we drop these settings, anyone who uses them will get an error when starting the node. That's probably a good thing, since they should read about this change.

* well, wallet_basic.py fails with:

```
Internal bug detected: Shared UTXOs among selection results
wallet/coinselection.h:340 (InsertInputs)
```
A couple more comments / questions. To be continued...
src/txmempool.cpp
Outdated
```cpp
{
    m_chunks[entry.m_loc.first].txs.erase(entry.m_loc.second);

    // Chunk (or cluster) may now be empty, but this will get cleaned up
```
54f39ca: what if the deleted transaction makes it so that there are now two clusters? Is this also safe to ignore?
I wouldn't say it's safe to ignore, but the idea is that we often want to be able to batch deletions, and then clean things up in one pass. So the call sites should all be dealing with this issue and ensuring that we always clean up at some point.
(This is definitely an area where I expect that we'll be re-engineering all this logic and trying to come up with a better abstraction layer so that this is more robust and easier to think about!)
src/txmempool.cpp
Outdated
```cpp
for (auto txentry : txs) {
    m_chunks.emplace_back(txentry.get().GetModifiedFee(), txentry.get().GetTxSize());
    m_chunks.back().txs.emplace_back(txentry);
```
54f39ca: So you're creating a chunk for each new transaction and then erasing it if the feerate goes down. Why not the other way around?
The only place we still use the older interface is in policy/rbf.cpp, where it's helpful to incrementally calculate descendants to avoid calculating too many at once (or cluttering the CalculateDescendants interface with a calculation limit).
TO DO: Rewrite unit tests for PV3C to not lie about mempool parents, so that we can push down the parent calculation into v3_policy from validation.
Add benchmarks for:
- mempool update time when blocks are found
- adding a transaction
- performing the mempool's RBF calculation
- calculating mempool ancestors/descendants
Includes test coverage for mempool eviction and expiry.
This is in preparation for eliminating the block template building happening in mini_miner, in favor of directly using the linearizations done in the mempool.
This is a draft implementation of the cluster mempool design described in #27677. I'm opening this as a draft PR now to share the branch I'm working on with others, so that we can start to think about in-progress projects (like package relay, package validation, and package rbf) in the context of this design. Also, I can use some help from others for parts of this work, including the interaction between the mempool and the wallet, and also reworking some of our existing test cases to fit a cluster-mempool world.
Note that the design of this implementation is subject to change as I continue to iterate on the code (to make the code more hygienic and robust, in particular). At this point though I think the performance is pretty reasonable and I'm not currently aware of any bugs. There are some microbenchmarks added here, and some improved fuzz tests; it would be great if others ran both of those on their own hardware as well and reported back on any findings.
This branch implements the following observable behavior changes:

Some less observable behavior changes:
- `epoch`s (resulting in a significant performance improvement, according to the benchmarks I've looked at)

Still to do:
- `partially_downloaded_block` fuzz target to not add duplicate transactions to the mempool (fuzz: don't allow adding duplicate transactions to the mempool #29990).

For discussion/feedback: