Bump async backing parameters #4287

Open
3 tasks
eskimor opened this issue Apr 25, 2024 · 6 comments

Comments

@eskimor
Member

eskimor commented Apr 25, 2024

For elastic scaling we will very likely need more relaxed parameters:

A max depth of 9 and an unincluded segment size of 9 for the para; this was needed in testing for a perfect 2s block time (a rough configuration sketch follows the task list below).

  • Check potential spam impact - can we safely go to such high numbers?
  • Enable on Kusama
  • Enable on Polkadot
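
For illustration, a minimal sketch of what such a bump could look like, assuming the shape of the relay-chain async backing parameters and the usual Cumulus-side unincluded segment constants; only the two 9s come from the proposal above, every other name and value here is an assumption:

```rust
// Sketch only: mirrors the shape of the relay-chain async backing parameters
// and the Cumulus-side unincluded segment constants. Only the two 9s are from
// the proposal; the other values are placeholders.

/// Relay-chain host configuration knobs for async backing.
pub struct AsyncBackingParams {
    /// The "max depth": how many unincluded ancestors a candidate may build on.
    pub max_candidate_depth: u32,
    /// How far back the relay parent of a candidate may be.
    pub allowed_ancestry_len: u32,
}

pub const PROPOSED: AsyncBackingParams = AsyncBackingParams {
    max_candidate_depth: 9,  // proposed bump
    allowed_ancestry_len: 2, // assumed unchanged
};

/// Parachain side: the unincluded segment must be allowed to grow to the same
/// size, otherwise the collator throttles itself before the relay chain does.
pub const UNINCLUDED_SEGMENT_CAPACITY: u32 = 9; // proposed bump
pub const BLOCK_PROCESSING_VELOCITY: u32 = 3;   // assumed: 3 para blocks per relay block ≈ 2s blocks
```
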
@sandreim
Contributor

Not sure we need 9; this seems to just be a workaround for a potential off-by-one bug, as the length of the segment never goes above 6, and at the same time we can't technically ever reach a segment of that size given that block production runs every 2s.

@rphmeier
Contributor

There is a trade off between supporting high-depth elastic scaling and dealing with potentially large numbers of candidates.

With fast block building and high demand, we could see chains taking on 10+ cores at a time. Practically speaking they will have a single leader to achieve this, so they'll be far off from the worst case. Supporting that level of throughput could be pretty impactful for Polkadot!

How about a solution like this: we accept max_depth = n_cores, but start ignoring candidates at depths beyond some low number, like 4, as soon as n_pending_candidates > worst_case(4). That gives us a way of detecting parachains that are going haywire with large amounts of potential blocks.
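
A hypothetical sketch of that gating rule, just to make the shape concrete; none of these names exist in the prospective-parachains code, and `worst_case_at_soft_limit` stands in for worst_case(4) as clarified a couple of comments further down:

```rust
// Hypothetical sketch of the proposed gating rule; the names are made up.
// `worst_case_at_soft_limit` stands for worst_case(4): the largest possible
// tree of pending candidates of depth 4 under the async backing rules.

const SOFT_DEPTH_LIMIT: u32 = 4;

/// Should a candidate at `depth` still be considered for this parachain?
fn accept_candidate(
    depth: u32,
    n_pending_candidates: usize,
    n_cores: u32,
    worst_case_at_soft_limit: usize,
) -> bool {
    // Hard limit: the accepted max depth tracks the number of assigned cores.
    if depth > n_cores {
        return false;
    }
    // Spam heuristic: once the pending set exceeds the worst case for a shallow
    // tree, stop considering deep candidates at all.
    if n_pending_candidates > worst_case_at_soft_limit && depth > SOFT_DEPTH_LIMIT {
        return false;
    }
    true
}
```
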

@burdges

burdges commented Apr 27, 2024

Is worst_case some historical information?

@rphmeier
Contributor

I just mean worst_case(n) as being "the largest possible tree of pending candidates of depth n under the async backing rules"

@bkchr
Member

bkchr commented Apr 28, 2024

With fast block building and high demand, we could see chains taking on 10+ cores at a time

I also already said on Element that the change proposed here in this issue is a hack. We should instead fix the spam protection to handle this properly. Basically, everything should track the number of candidates per core and not per parachain. Backing group spam protection can just stay at using max_candidate_depth as the maximum number of candidates per group. In prospective parachains we need to support something like max_candidate_depth * num_cores candidates per parachain (which doesn't change the total number of candidates, as you could also have num_cores different parachains).
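
A small sketch of that accounting, with made-up function names; the point is only that the backing-group limit stays per core/group while the prospective-parachains limit scales with the number of cores assigned to the parachain:

```rust
// Illustrative only; the function names are made up, but the arithmetic follows
// the comment above.

/// Per backing group (i.e. per core): unchanged, bounded by max_candidate_depth.
fn max_candidates_per_group(max_candidate_depth: u32) -> u32 {
    max_candidate_depth
}

/// In prospective parachains: scale the per-parachain limit with the number of
/// cores assigned to it. With num_cores = 1 this is today's behaviour, and the
/// relay-chain-wide total is the same as having num_cores separate parachains.
fn max_candidates_per_parachain(max_candidate_depth: u32, num_cores: u32) -> u32 {
    max_candidate_depth * num_cores
}
```
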

@burdges

burdges commented Apr 28, 2024

Yes, the relay chain knows which core each backing group works on, and coretime sales should tell the relay chain which parachain can submit to which core, so the relay chain could enforce this limit in backing. That sounds cleanest.

We've considered other, more dynamic models where the relay chain assigns backed candidates to cores after backing occurs, which supports variable sized parablocks better. Afaik that's off the table right now anyway, but even if it returns, the backing group number could likely replace the core there.
