I don't know if I ever wrote down my ideas w.r.t. reducing the number of allocations and whatnot.
A few things to think about, in general:
We have a sharded map as our in-memory storage implementation. It supports concurrency by locking each shard independently. We shard the infohash keyspace, by default into 1024 shards, which allows up to 1024 writes to proceed in parallel. Increasing the potential parallelism here is easy, so I don't think this is a limiting factor for now.
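To make the discussion concrete, here is a minimal sketch of such a sharded map. The type and field names (`shardedMap`, `AddPeer`, and a simplified `string` peer representation) are illustrative, not the tracker's actual identifiers; the real storage holds richer peer data.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

// shardCount mirrors the default of 1024 shards mentioned above.
const shardCount = 1024

type shard struct {
	mu    sync.RWMutex
	peers map[string][]string // infohash -> peers (simplified)
}

type shardedMap struct {
	shards [shardCount]shard
}

func newShardedMap() *shardedMap {
	m := &shardedMap{}
	for i := range m.shards {
		m.shards[i].peers = make(map[string][]string)
	}
	return m
}

// shardFor hashes the infohash to pick a shard, so operations on
// different shards can run in parallel under independent locks.
func (m *shardedMap) shardFor(infohash string) *shard {
	h := fnv.New32a()
	h.Write([]byte(infohash))
	return &m.shards[h.Sum32()%shardCount]
}

func (m *shardedMap) AddPeer(infohash, peer string) {
	s := m.shardFor(infohash)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.peers[infohash] = append(s.peers[infohash], peer)
}

func (m *shardedMap) Peers(infohash string) []string {
	s := m.shardFor(infohash)
	s.mu.RLock()
	defer s.mu.RUnlock()
	// Return a copy so callers can't race with writers.
	return append([]string(nil), s.peers[infohash]...)
}

func main() {
	m := newShardedMap()
	m.AddPeer("infohash-a", "1.2.3.4:6881")
	fmt.Println(m.Peers("infohash-a"))
}
```

Note that `Peers` copies the slice before returning it; that copy is exactly the kind of per-request allocation the rest of this issue is about reducing.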
We have a unified pipeline of middleware to deal with requests and generate responses. This implements the tracker logic and should run on each request independently, without requests impacting each other. This would be a potential place to limit concurrency, because every request goes through here. I believe it is also where most of the allocations currently happen, because we allocate some structs and slices for each request. This should be a good place to implement reuse of those allocations.
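One standard way to reuse those per-request structs and slices is a `sync.Pool`. The sketch below assumes a hypothetical `announceRequest` type; the field names are made up for illustration, and the real middleware chain would fill in the scratch slice instead of the placeholder append.

```go
package main

import (
	"fmt"
	"sync"
)

// announceRequest is a hypothetical stand-in for the middleware's
// per-request struct.
type announceRequest struct {
	InfoHash string
	Peers    []string // scratch slice, reused across requests
}

var requestPool = sync.Pool{
	New: func() any {
		return &announceRequest{Peers: make([]string, 0, 64)}
	},
}

func handle(infohash string) int {
	req := requestPool.Get().(*announceRequest)
	// Reset rather than reallocate: truncating keeps the slice's
	// capacity, so steady-state requests allocate nothing.
	req.InfoHash = infohash
	req.Peers = req.Peers[:0]

	// ... the middleware chain would fill req.Peers here ...
	req.Peers = append(req.Peers, "1.2.3.4:6881")

	n := len(req.Peers)
	requestPool.Put(req)
	return n
}

func main() {
	fmt.Println(handle("infohash-a"))
}
```

The caveat with `sync.Pool` is that the runtime may drop pooled objects at any GC cycle; the goroutine-owned-state idea further down avoids that by tying the buffers to long-lived workers instead.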
We have multiple frontends, which work in different ways. They parse requests and serialize responses. As such, they need to translate between whatever format they receive/send on the wire and the request/response formats of the unified middleware chain. We could (and maybe should?) limit concurrency on the frontends. Or, alternatively, I could imagine that the frontends receive available request structs from the middleware, which would implicitly also limit concurrency. I don't know if this is fair (as in, do both frontends get the same number of available request structs? Should they? If one is faster or more efficient, shouldn't it get more?). Alternatively, I could imagine that the frontends each have their own pool of goroutines (which probably makes it easier to reduce data structures in the frontend) and then exchange filled and available request structs with the unified middleware via an MPSC channel. This has the advantage that a frontend which performs better, i.e., generates more requests, also gets more resources from the middleware (because channels are FIFO).
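The "exchange filled and available request structs over an MPSC channel" variant could look roughly like this. Everything here is a sketch under assumed names (`request`, `free`, `filled`): a `free` channel hands out available structs, a `filled` channel feeds a single middleware consumer, and because channels are FIFO, a frontend that produces requests faster naturally claims more middleware capacity.

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// request is a hypothetical struct that cycles between frontends and
// the middleware instead of being allocated per request.
type request struct {
	payload string
	reply   chan string
}

// run wires up the exchange: poolSize bounds how many requests can be
// in flight at once, because a frontend must obtain a struct from
// "free" before it can submit anything.
func run(frontends []string) []string {
	const poolSize = 2
	free := make(chan *request, poolSize)
	filled := make(chan *request, poolSize)
	for i := 0; i < poolSize; i++ {
		free <- &request{reply: make(chan string, 1)}
	}

	// Single middleware consumer draining the MPSC channel in FIFO order.
	go func() {
		for req := range filled {
			req.reply <- "handled:" + req.payload
		}
	}()

	var wg sync.WaitGroup
	results := make(chan string, len(frontends))
	for _, name := range frontends {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			req := <-free // blocks when no struct is available: implicit concurrency limit
			req.payload = name
			filled <- req
			results <- <-req.reply
			free <- req // hand the struct back for reuse
		}(name)
	}
	wg.Wait()
	close(filled)
	close(results)

	var out []string
	for r := range results {
		out = append(out, r)
	}
	sort.Strings(out) // goroutine completion order is nondeterministic
	return out
}

func main() {
	fmt.Println(run([]string{"udp", "http"}))
}
```

The fairness question from above shows up here as the size of `free`: one shared pool means faster frontends win more structs, while per-frontend pools would pin a fixed share to each.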
At the moment we do not have bounds on the number of goroutines the frontends can spawn. These goroutines then call into the middleware and storage. This is cheaper than passing things through channels, so it wouldn't be a terrible idea to keep this and bound the number of goroutines in the frontends, like @shyba did in "frontend/udp: use a fixed number of coroutines" (#603). These goroutines should "own" as much of the state they need to handle a request, and reuse that state for each request, including the data structures used by the middleware. This is probably also fair w.r.t. the performance of frontends.
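A minimal sketch of that bounded, state-owning worker pool, in the spirit of #603 (the `workerState`/`serve` names and the byte-slice "scratch" buffer are illustrative assumptions, not the actual frontend code):

```go
package main

import (
	"fmt"
	"sync"
)

// workerState is hypothetical per-goroutine state: buffers the worker
// owns and reuses for every request it handles, so nothing is
// allocated per request in steady state.
type workerState struct {
	scratch []byte
}

func (s *workerState) handle(req string) string {
	s.scratch = s.scratch[:0] // reuse capacity across requests
	s.scratch = append(s.scratch, "resp:"...)
	s.scratch = append(s.scratch, req...)
	return string(s.scratch)
}

// serve runs a fixed number of workers; the worker count, not the
// request rate, bounds goroutines and live request state.
func serve(requests []string, workers int) []string {
	in := make(chan string)
	out := make(chan string, len(requests))
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			state := &workerState{scratch: make([]byte, 0, 512)}
			for req := range in {
				out <- state.handle(req)
			}
		}()
	}
	for _, r := range requests {
		in <- r
	}
	close(in)
	wg.Wait()
	close(out)
	var results []string
	for r := range out {
		results = append(results, r)
	}
	return results
}

func main() {
	fmt.Println(serve([]string{"a", "b"}, 2))
}
```

Because each worker's state is touched by exactly one goroutine, no locking or `sync.Pool` is needed for it, and unlike `sync.Pool` the buffers survive GC cycles for the lifetime of the worker.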
In the end, I see something like pool(s) of goroutines, probably one pool per frontend, which take requests and drive the middleware with them. The middleware probably needs to be changed to operate on pointers to structs, which stay allocated for the entire lifetime of the tracker. Or something like that...
Anyway. Those are a few ideas of mine. Let's talk about them a bit :)