Proposal for ListV2 journals RPC #353
Comments
What happens when a JournalSpec is removed? How does the client know about it?
The journal mod revision? That of route assignments? It's ambiguous. There also isn't a way to represent the full granularity of what a revision can mean (is it because an assignment updated its status?).
Instead, if an RPC is broken, I'd think the client should start a new one. All Gazette request types include a header, which is used by brokers to identify a revision they should read through before attempting to respond.
This is already conveyed in the response
gRPC is fine at sending lots of small messages, and making it an array adds complexity to the protocol and the potential for a message to be too large. 🤔 I'm not sure there's a way to reasonably do a truly streaming "watch", unless each watch update is essentially a full refresh of the complete journal listing + assignments, sent every time there's any change to the topology because of an etcd revision change. Which would probably be completely fine, because …
Yeah, it seems like … We can remove …
🤔 If the server essentially can't support sending incremental watch events, then I wonder if it's even worth building support for watches, either in the short term or at all. My understanding is that the client can set …
Oh, okay, that hadn't clicked for me. That changes my opinion on
That makes sense to me.
It's a fair question. The remaining role of … Without this optimization it could be a pretty big thundering herd, especially across data centers, but I'm legit unsure how motivating it is in the short term. It would be a regression as compared to where we are today, though. A next question is what does "change" mean in this context: just the JournalSpec, or assignments as well? Probably we only care about journals, since reactors already discover changes to assignments through use of those listed journals.
Yeah, and another reason why I'm increasingly hesitant to take this on right now is that I'm concerned the optimization may not go far enough. In other words, once we're at a point where there are enough cross-data-plane reads for it to matter, will it then seem better to handle watches as a more incremental change stream, as opposed to the full-refresh approach?
@jgraettinger I'm thinking that it makes sense to move forward with …
Yep, SGTM
In Gazette clusters that manage a large number of journals, the List RPC can fail due to exceeding the maximum allowable size of a gRPC response. Rather than trying to add pagination support to the existing List RPC, the proposal is to instead add a ListV2 RPC that returns a streaming response.
The primary motivation is simply that pagination is tricky to implement in the server and awkward for clients to use, particularly when it's important for clients to get a consistent view of the journals.
A secondary motivation is that a streaming response can support long-running watches of the journals, which may be an important feature for clients that aren't able to connect to etcd directly.
This proposal includes support for returning large sets of journals in a streaming response, but does not include support for persistent watches. The idea would be to revisit the design for watches once we've determined a need.
gRPC protocol
The ListV2 protobuf would look something like this:
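As a rough sketch only (the service, message, and field names below are illustrative assumptions, not the actual proposal), the shape could be along these lines:

```protobuf
// Sketch: names and field layout are assumptions for illustration.
// Header, LabelSelector, JournalSpec, and Route refer to Gazette's
// existing protocol messages.
service Journal {
  // ListV2 streams back one response per matched journal.
  rpc ListV2(ListV2Request) returns (stream ListV2Response);
}

message ListV2Request {
  // Standard Gazette request header, carrying the Etcd revision the broker
  // should read through before responding.
  Header header = 1;
  // Selector of journal labels to match.
  LabelSelector selector = 2;
}

message ListV2Response {
  // Standard Gazette response header.
  Header header = 1;
  // The single journal carried by this response: its spec, the Etcd
  // ModRevision at which it was read, and its current broker route.
  JournalSpec spec = 2;
  int64 mod_revision = 3;
  Route route = 4;
}
```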
The basic idea is that the client sends a single request, and then receives a series of responses that each contain a single journal. The server closes the response stream after sending the final journal, which is the only indication to the client that it now has a complete view of the requested journals.
In other words, there's no message field that indicates "this is the last message".
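For illustration, here's a sketch of how a Go client might drain such a stream, assuming hypothetical generated stub names (pb.JournalClient, pb.ListV2Request, and so on); the completion signal is simply the server closing the stream, which the client observes as io.EOF:

```go
package client // illustrative only

import (
	"context"
	"io"

	pb "example.com/gazette/protocol" // hypothetical import path for generated stubs
)

// listAllJournals drains a ListV2 stream until the server closes it.
// Stub and field names are assumptions; the real generated API may differ.
func listAllJournals(ctx context.Context, jc pb.JournalClient, sel pb.LabelSelector) ([]*pb.ListV2Response, error) {
	stream, err := jc.ListV2(ctx, &pb.ListV2Request{Selector: sel})
	if err != nil {
		return nil, err
	}
	var out []*pb.ListV2Response
	for {
		resp, err := stream.Recv()
		if err == io.EOF {
			// The server closed the stream: the listing is complete.
			return out, nil
		}
		if err != nil {
			// The stream broke part-way; the caller should retry with a fresh ListV2.
			return nil, err
		}
		out = append(out, resp) // Each response carries exactly one journal.
	}
}
```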