To keep up with increasing traffic, we should split a new service out of the current backend to replace the /submit endpoint. This new service should take ownership of deserializing the incoming JSON messages from the nodes, discarding messages that Telemetry does not need, and resolving chain multiplexing (which will likely require some two-way communication with the main backend when a new node connects), then forward the remaining messages to the main telemetry backend using a lightweight protocol (Cap'n Proto or Protocol Buffers).
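A minimal Rust sketch of that hot path, assuming a hypothetical `NodeMessage` shape and an illustrative list of message kinds (the real wire format will differ): parse the JSON, drop kinds the backend does not consume, and re-encode the rest compactly. `bincode` stands in for the eventual Cap'n Proto or protobuf encoding purely to keep the sketch self-contained.

```rust
use serde::{Deserialize, Serialize};

/// Hypothetical subset of a node message; unknown JSON fields are
/// ignored by serde, which is how uninteresting data gets discarded.
#[derive(Deserialize, Serialize)]
struct NodeMessage {
    msg: String, // message kind, e.g. "system.interval" (illustrative)
    #[serde(default)]
    payload: serde_json::Value,
}

/// Message kinds the backend actually consumes (illustrative list).
const WANTED: &[&str] = &["system.connected", "system.interval", "block.import"];

/// Returns the bytes to forward to the backend, or None if the
/// message should be dropped at the edge.
fn process(raw: &str) -> Option<Vec<u8>> {
    let msg: NodeMessage = serde_json::from_str(raw).ok()?;
    if !WANTED.contains(&msg.msg.as_str()) {
        return None; // discard: the backend never sees it
    }
    // Stand-in for the Cap'n Proto / protobuf encoding step.
    bincode::serialize(&msg).ok()
}
```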
Unlike the backend, which must run as a single instance to keep track of all state changes, this new service should be stateless, so we can spawn multiple instances of it behind a load balancer. This would address our two current bottlenecks: the number of concurrent connections hitting the backend, and the CPU time spent on IO switching and JSON deserialization.
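To make the statelessness claim concrete, here is a sketch (with assumed message shapes) of the only state an instance would need: a per-connection map from a node's local multiplexed id to the global id the backend assigns during the two-way handshake. Losing an instance loses nothing global; nodes simply reconnect through the load balancer and the handshake repeats.

```rust
use std::collections::HashMap;

/// Sent to the backend when a node announces itself on a chain (assumed shape).
struct NodeConnected { chain_genesis_hash: [u8; 32] }

/// The backend's reply: the global id it tracks the node under (assumed shape).
struct NodeAssigned { global_id: u64 }

/// Per-connection state only: local (multiplexed) id -> global id.
#[derive(Default)]
struct Connection {
    ids: HashMap<u64, u64>,
}

impl Connection {
    /// Resolve a locally multiplexed node id, round-tripping to the
    /// backend the first time it is seen (synchronous here for brevity;
    /// async in practice).
    fn resolve(
        &mut self,
        local_id: u64,
        genesis_hash: [u8; 32],
        ask_backend: impl Fn(NodeConnected) -> NodeAssigned,
    ) -> u64 {
        *self.ids.entry(local_id).or_insert_with(|| {
            ask_backend(NodeConnected { chain_genesis_hash: genesis_hash }).global_id
        })
    }
}
```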