Replace ApolloStoreSubscriber didChangeKeys with ApolloStore.Activity… #329

Open
jimisaacs wants to merge 2 commits into main from dev/jimi/more-robust-apollo-store-subsciber

Conversation

jimisaacs (Contributor) commented Apr 8, 2024

The following changes support the need to hook data residing in separate stores into the life-cycle of the data in ApolloStore. This PR represents one of two ways I considered accomplishing this:

  1. Implement NormalizedCache by wrapping another NormalizedCache implementation and forwarding calls to it (a minimal sketch follows below).
  2. A more robust ApolloStoreSubscriber, i.e. this PR.
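
For comparison, a minimal sketch of approach 1 would be a cache that wraps another cache and forwards every call. The method list below assumes the 1.x NormalizedCache requirements; the hook comments are placeholders and not part of this PR:

import Apollo

// Approach 1 sketch: forward every NormalizedCache call to a wrapped cache,
// observing reads/writes along the way. Verify the protocol requirements
// against your Apollo version before using this.
final class ForwardingNormalizedCache: NormalizedCache {
  private let wrapped: NormalizedCache

  init(wrapping wrapped: NormalizedCache) {
    self.wrapped = wrapped
  }

  func loadRecords(forKeys keys: Set<CacheKey>) throws -> [CacheKey: Record] {
    let records = try wrapped.loadRecords(forKeys: keys)
    // Hook point: react to records being read.
    return records
  }

  func merge(records: RecordSet) throws -> Set<CacheKey> {
    // Hook point: sync another store with `records` around the merge.
    return try wrapped.merge(records: records)
  }

  func removeRecord(for key: CacheKey) throws {
    try wrapped.removeRecord(for: key)
    // Hook point: tear down any per-key observation.
  }

  func removeRecords(matching pattern: CacheKey) throws {
    try wrapped.removeRecords(matching: pattern)
  }

  func clear() throws {
    try wrapped.clear()
  }
}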

I decided to submit 2 as a PR because I believe the interface is an improvement over what is currently in main (the didChangeKeys subscriber method). The didChangeKeys call follows a call to cache.merge(records:), which this PR's implementation covers as follows:

  public func store(_ store: Apollo.ApolloStore,
                    activity: Apollo.ApolloStore.Activity,
                    contextIdentifier: UUID?) throws {
    if case .did(perform: .merge, outcome: .changedKeys(let changedKeys)) = activity {
      self.store(store, didChangeKeys: changedKeys, contextIdentifier: contextIdentifier)
    }
  }

Example:

final class MyOtherStoreApolloStoreSubscriber {

    private class UnsubscribeRef {
        private let _unsubscribe: () -> Void
        public init(unsubscribe: @escaping () -> Void) {
            _unsubscribe = unsubscribe
        }
        deinit {
            _unsubscribe()
        }
    }
    private lazy var unsubscribeRefs = [String: UnsubscribeRef]()

    private func shouldSubscribeToMyOtherStore(for record: Apollo.Record) -> Bool {
        /* boolean to determine whether or not you should subscribe to my other store for the provided record */
        return unsubscribeRefs[record.key] == nil 
    }

    private func subscribeToMyOtherStore(_ otherStoreListener: @escaping () -> Void) -> () -> Void {
        /* stuff to subscribe to my other store in here */
        return { /* stuff to unsubscribe from my other store in here, weak self please! */ }
    }

    private func syncMyOtherStoreWith(store: ApolloStore, records: [Apollo.CacheKey: Apollo.Record]) throws {
        for (key, record) in records {
            if shouldSubscribeToMyOtherStore(for: record) {
                unsubscribeRefs[key] = UnsubscribeRef(unsubscribe: subscribeToMyOtherStore({
                    /* stuff to handle my other store changes in here */
                }))
            }
        }
    }
}

extension MyOtherStoreApolloStoreSubscriber: ApolloStoreSubscriber {

    public func store(_ store: Apollo.ApolloStore, didChangeKeys changedKeys: Set<Apollo.CacheKey>, contextIdentifier: UUID?) {
        /* not implemented, this is now deprecated */
    }

    public func store(_ store: Apollo.ApolloStore, activity: Apollo.ApolloStore.Activity, contextIdentifier: UUID?) throws {
        switch activity {
        case .will(perform: .merge(records: let records)):
            try syncMyOtherStoreWith(store: store, records: records.storage)
        case .did(perform: .loadRecords, outcome: .records(let records)):
            try syncMyOtherStoreWith(store: store, records: records)
        case .did(perform: .removeRecord(for: let key), outcome: .success):
            unsubscribeRefs.removeValue(forKey: key)
        case .did(perform: .removeRecords(matching: let pattern), outcome: .success):
            for key in unsubscribeRefs.keys {
                if key.range(of: pattern, options: .caseInsensitive) != nil {
                    unsubscribeRefs.removeValue(forKey: key)
                }
            }
        case .did(perform: .clear, outcome: .success):
            unsubscribeRefs.removeAll()
        default:
            return
        }
    }
}
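
For reference, the Activity shape these cases imply is roughly the following (reconstructed from the usages above; the exact definition, nesting, and doc comments are in this PR's diff):

// Reconstruction for reference only; the PR nests this inside ApolloStore.
extension ApolloStore {
  public enum Activity: Hashable {
    case will(perform: Action)
    case did(perform: Action, outcome: Outcome)

    public enum Action: Hashable {
      case loadRecords(forKeys: Set<CacheKey>)
      case merge(records: RecordSet)
      case removeRecord(for: CacheKey)
      case removeRecords(matching: CacheKey)
      case clear
    }

    public enum Outcome: Hashable {
      case success
      case records([CacheKey: Record])
      case changedKeys(Set<CacheKey>)
    }
  }
}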


jimisaacs force-pushed the dev/jimi/more-robust-apollo-store-subsciber branch from 97710f7 to 4e56f16 on April 8, 2024 at 14:47
jimisaacs (Contributor, Author) commented:

Looks like testCancellingTaskThroughClientDoesNotCallCompletion might be a little flaky (i.e. it is failing for unrelated reasons).

calvincestari (Member) commented:

I really like the direction here @jimisaacs, thank you for pushing ahead with improvements to ApolloStoreSubscriber. Have you proved out your PoC? Is this direction able to satisfy all your needs?

jimisaacs (Contributor, Author) commented Apr 10, 2024

Have you proved out your PoC? Is this direction able to satisfy all your needs?

@calvincestari Yes, I have both approaches, 1 and 2, proved out.

They aren't that different, to be honest. I just figured the ApolloStoreSubscriber was the proper direction, in that ApolloStore is the non-abstract object that can be subscribed to and is the thing that participates in business logic, while the cache object is a simple, storage-implementation-specific backend to the store. My approach 1 implements an abstract cache that is basically like the ApolloStore, but only because the ApolloStoreSubscriber doesn't cover the callback granularity needed. With that reasoning, it would make sense to go with approach 1 only if ApolloStore were eventually removed or merged with a composable NormalizedCache implementation, which is another reason to go with approach 2 in my mind.

jimisaacs (Contributor, Author) commented Apr 23, 2024

@calvincestari what thoughts would you have on piping the RequestContext through to specific actions of loadRecords and merge? I realize that RequestContext was meant to be interceptor-specific, but it could be pretty convenient for store subscribers to receive this context too.

calvincestari (Member) commented:

@calvincestari what thoughts would you have on piping the RequestContext through to specific actions of loadRecords and merge? I realize that RequestContext was meant to be interceptor-specific, but it could be pretty convenient for store subscribers to receive this context too.

I think that's blurring the lines a bit too much. You're correct that it is available to the interceptors because they process requests via the interceptor chain. The store subscriber events are only indirectly related to the request context and there are other times the subscriber event wouldn't have any request context; ultimately this doesn't feel like a good change.

jimisaacs (Contributor, Author) commented:

@calvincestari any word on this PR? I realize I have a workaround locally by implementing our own NormalizedCache, but I'd like to eventually go back to using ApolloStoreSubscriber.

calvincestari (Member) commented:

@calvincestari any word on this PR? I realize I have a workaround locally by implementing our own NormalizedCache, but I'd like to eventually go back to using ApolloStoreSubscriber.

My apologies @jimisaacs, I had the assumption that there was still work to be done in this PR, but I've just taken a brief look through it again and I'm not sure why I had that assumption. I'll pick this up first thing tomorrow morning for a final review.

calvincestari (Member) left a comment:

Some minor comments @jimisaacs, otherwise this looks good to go.

@@ -1,24 +1,62 @@
import Nimble
import XCTest
@testable import Apollo
//@testable import ApolloSQLite
Review comment (Member):

Suggested change
//@testable import ApolloSQLite

Doesn't seem like we need this?

let subscriber = NoopSubscriber()

store.subscribe(subscriber)

Review comment (Member):

I think a check in between subscribe and unsubscribe to ensure that subscribers is not empty at this point would be beneficial to the test.

try super.setUpWithError()
store = ApolloStore()
cache = InMemoryNormalizedCache()
//cache = try! SQLiteNormalizedCache(fileURL: URL(fileURLWithPath: "/tmp/test.sqlite"))
Review comment (Member):

Suggested change
//cache = try! SQLiteNormalizedCache(fileURL: URL(fileURLWithPath: "/tmp/test.sqlite"))

Cleaning this up too. The underlying cache provider shouldn't make a difference in these tests but arguably a memory-based cache would be slightly faster.

@@ -15,13 +13,66 @@ public protocol ApolloStoreSubscriber: AnyObject {
/// - store: The store which made the changes
/// - changedKeys: The list of changed keys
/// - contextIdentifier: [optional] A unique identifier for the request that kicked off this change, to assist in de-duping cache hits for watchers.
/// @deprecated
Review comment (Member):

Is this a Swift Macro you've defined somewhere else? We'll need to change this to use the @available attribute instead.
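
For example, the deprecation could be expressed roughly like this (the message text is illustrative, not from this PR):

@available(*, deprecated, message: "Use store(_:activity:contextIdentifier:) instead.")
func store(_ store: ApolloStore,
           didChangeKeys changedKeys: Set<CacheKey>,
           contextIdentifier: UUID?)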

calvincestari (Member) commented:

I'd also like to give @BobaFetters and @AnthonyMDev an opportunity for any last review too before we merge.

BobaFetters (Member) commented:

LGTM!

AnthonyMDev (Contributor) left a comment:

Thanks so much for this contribution @jimisaacs! Overall, this seems like a great addition, but I have a few concerns I want to think through.

I've never seen this design pattern of using a single delegate method with an Activity enum. While at first look that seems great, I actually wonder if it's worse for most users than having different methods for each delegated event. In all existing cases, just didChangeKeys has been fine. I think it's reasonable to want to have delegate notifications for other events, but most people probably won't use all (or even most) of them. Is it better for everyone who implements the delegate to need to switch on all cases and then no-op on the ones they don't want to use? Or is it better to only implement the delegate methods they want to use?

With the proposed implementation, if we want to add new delegate activities in the future, it's actually a breaking change. Anyone who has implemented the delegate exhaustively (switching on the enum cases with no default case) is going to have compilation errors if we add a new case to the enum. Whereas adding new methods to the delegate protocol with default no-op implementations would not be breaking and is purely additive.
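
For illustration, that additive shape could be sketched as a protocol with default no-op implementations (protocol and method names here are hypothetical, not part of this PR or of Apollo's current API):

import Apollo
import Foundation

// Hypothetical per-event subscriber protocol. Because every requirement has a
// default no-op implementation, adding another event method later is purely
// additive: existing conformers keep compiling and inherit the new no-op.
public protocol ApolloStoreEventSubscriber: AnyObject {
  func store(_ store: ApolloStore,
             didMergeRecords records: RecordSet,
             changedKeys: Set<CacheKey>,
             contextIdentifier: UUID?)

  func store(_ store: ApolloStore,
             didRemoveRecordFor key: CacheKey,
             contextIdentifier: UUID?)
}

public extension ApolloStoreEventSubscriber {
  func store(_ store: ApolloStore,
             didMergeRecords records: RecordSet,
             changedKeys: Set<CacheKey>,
             contextIdentifier: UUID?) {}

  func store(_ store: ApolloStore,
             didRemoveRecordFor key: CacheKey,
             contextIdentifier: UUID?) {}
}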

Additionally, with this enum, the delegate method will always be called and the switch statement has to move through to the no-op cases. Whereas with separate functions, I believe the compiler should be able to statically link the no-op default implementation methods and inline them to remove the call completely (assuming the delegate is a struct or final class). This is super minor, but it's still better for performance.

While I like the ergonomics of the enum on its surface, I think the tradeoffs may favor the other approach. Would love to get your opinions @jimisaacs @calvincestari @BobaFetters.

@Iron-Ham, this is probably going to have a merge conflict with #346. It should be trivial to fix, but I'd love your input on the approach we take here as well.

/// Received by subscribers for records matching the provided pattern to be removed from the database.
/// - Parameters:
/// - matching: The pattern for which matching records were removed
case removeRecords(matching: Apollo.CacheKey)
Review comment (Contributor):

Why does removeRecord(for: CacheKey) not have the Apollo namespace? I assume it's not needed, but we should be consistent and do them both the same way.

AnthonyMDev (Contributor) commented:

If we reach consensus on the approach here, I'm happy to make the relevant changes to this PR to wrap it up!

calvincestari (Member) commented:

With the proposed implementation, if we want to add new delegate activities in the future, it's actually a breaking change. Anyone who has implemented the delegate exhaustively (switching on the enum cases with no default case) is going to have compilation errors if we add a new case to the enum. Whereas adding new methods to the delegate protocol with default no-op implementations would not be breaking and is purely additive.

This is a really great callout I hadn't considered. Good catch @AnthonyMDev!

Iron-Ham (Contributor) left a comment:

Leaving my first-glance feedback on the proposed changes. I like the overall direction and intent of the change, but I have some reservations around the implementation's patterns. I think with some massaging, the API could be a lot clearer to consumers while not presenting as a vector for breaking changes now and in the future.

}

/// The `ApolloStore` class acts as a local cache for normalized GraphQL results.
public class ApolloStore {
public enum Activity: Hashable {
Iron-Ham (Contributor) commented May 6, 2024:

I believe that this should be a delegate with default implementations:

  • The delegate pattern is well established in Apple frameworks (and would thus be familiar to consumers of Apollo).
  • The delegate pattern, in this instance, would work well whether an app is using SwiftUI, UIKit, AppKit, or something else entirely.
  • The delegate pattern would not introduce a breaking change (or require implementers to add an @unknown default case).

@@ -88,6 +88,17 @@ public final class GraphQLQueryWatcher<Query: GraphQLQuery>: Cancellable, Apollo
  public func store(_ store: ApolloStore,
                    didChangeKeys changedKeys: Set<CacheKey>,
                    contextIdentifier: UUID?) {
    // not implemented, deprecated
Iron-Ham (Contributor) commented May 6, 2024:

If we move to delegates, I think we might be able to maintain the existing function. I'm concerned this constitutes a breaking change for consumers of Apollo that have implemented their own ApolloStoreSubscriber.

case clear

// Enum to represent the outcome of an action, which can be customized to include more data as needed
public enum Outcome: Hashable {
Iron-Ham (Contributor) commented May 6, 2024:

I'm a bit wary of the outputs being an enum vs a return value of a function for similar reasons as wanting to use a delegate for Activity: it constitutes a breaking change if we ever update the enum while obscuring the output at the same time.

For example, if we had a delegate call like so:

func store(_ apolloStore: ApolloStore, didLoadRecords forKeys: Set<CacheKey>) -> [CacheKey: Record]

It is much clearer to me what the output is – whereas the alternative is to return an Output, which could be one of three things. As a consumer of the API and implementer of a custom ApolloStoreSubscriber, it implies that the outputs of these functions aren't mutually exclusive: I could be receiving any one of these three – as opposed to a specific one.

Review comment (Contributor):

I don't think it is supposed to be the return value of the delegate function, rather just another parameter. But I agree that this doesn't feel right. Specific Action types should have specific "outcome" values they provide.

Reply (Contributor):

Gotcha, I must've misunderstood that – I would think those would be an associated type on the given Actions then.

jimisaacs force-pushed the dev/jimi/more-robust-apollo-store-subsciber branch from b7f04f6 to c099b78 on May 7, 2024 at 00:50
jimisaacs (Contributor, Author) commented:

@AnthonyMDev @Iron-Ham

I would separate the concerns of the design from whether it is a breaking change. I personally think the current design is very unclear, as does my Android counterpart using the Apollo-Kotlin version of this. didChangeKeys is a confusing API, and I question whether it is being used for anything other than the internal watcher implementation. Meaning I understand that this is a breaking change, but I question how breaking this change actually has to be.

Now in terms of a new design...

I think performance implications are negligible when compared to invocation of default delegate methods. We are talking about a simple stack with static types.

The enum design just felt more ergonomic given the amount of information it can convey with less method surface area. Meaning the enum represents 10 additional delegate methods. Now I completely understand the point of view of needing to handle the enum for your single case versus implementing the appropriate method. I just felt that the tradeoff was worth it given that something subscribing to a store probably has to at least consider all the cases, which is inherently what an enum is for. If I'm alone there I can move to the additional delegate methods.

If the consensus is to move to 10 additional delegate methods with empty default implementations I'll do it. Though in terms of breaking changes, I do think it's worth considering what is lacking around the didChangeKeys design and I would probably still push for deprecating it in favor of a clearer method.

AnthonyMDev (Contributor) commented:

I would separate the concerns of the design from whether it is a breaking change.

Believe me, I would prefer we could do that all the time as well... but every time we have decided to make minor breaking changes to provide a better API surface, we've gotten a lot of negative feedback from the community about it. It's fine to do when you're building for your own internal software, but with distributed open source packages, it becomes a pain point for a lot of people.

didChangeKeys is a confusing API, and I question whether it is being used for anything other than the internal watcher implementation.

Yeah, ApolloStoreSubscriber was an internal protocol that was only being used by the watcher. We made it public because your team requested it. But it was never a fully featured API designed for public consumption in the first place.

I think performance implications are negligible when compared to invocation of default delegate methods. We are talking about a simple stack with static types.

Agreed, performance certainly is not the primary concern here. It was just an additional note. We're more concerned with designing something resilient to breaking changes in the future. I'm also thinking about the design concern brought up in this comment.

I just felt that the tradeoff was worth it given that something subscribing to a store probably has to at least consider all the cases...

I actually don't think that's necessarily true. I think that many users will only care about one or two of these cases and ignore the rest.

If the consensus is to move to 10 additional delegate methods with empty default implementations I'll do it.

I'm not a huge fan of a delegate with 10 methods either, but I think we're prioritizing the trade-offs in the direction of non-breaking changes & clarity vs. ergonomics. I'd love to come up with a more robust API here, but I'm not sure if there is a better alternative without a major overhaul of the ApolloStore. Which is something we want to do eventually, but for now at least, this might be the way to go.

I do think it's worth considering what is lacking around the didChangeKeys design and I would probably still push for deprecating it in favor of a clearer method.

Can you be more specific about what you think is lacking here? I'm not clear on what's missing. There are a couple of improvements I can think of:

  1. Allowing you to observe changes only to specific keys that you provide. Which is kind of possible now with a GraphQLQueryWatcher I guess, but it's not the intended use case.
  2. Allowing you to apply transformations to the data before it is written into the cache. Though this opens up an entirely different can of worms.

jimisaacs (Contributor, Author) commented:

@AnthonyMDev

Believe me, I would prefer we could do that all the time as well... but every time we have decided to make minor breaking changes to provide a better API surface, we've gotten a lot of negative feedback from the community about it. It's fine to do when you're building for your own internal software, but with distributed open source packages, it becomes a pain point for a lot of people.

I understand what it means to maintain open source; the comment was about maintaining that separation for the benefit of the discussion here, not about ignoring the concern altogether. For example, the current approach can be made non-breaking with a default delegate method.

Yeah, ApolloStoreSubscriber was an internal protocol that was only being used by the watcher. We made it public because your team requested it. But it was never a fully featured API designed for public consumption in the first place.

Really? Well then, this is news to me 😬 🦶 😶. Do you know where, or who, asked for this?

I actually don't think that's necessarily true. I think that many users will only care about one or two of these cases and ignore the rest.

I am in disagreement with you here, but it's probably over a nuance of the meaning of "consideration". Either I "consider" what methods to implement or not, or I "consider" what enum cases to handle or not. The enum approach enforces that consideration; the default delegate method approach does not. This is all I meant.

I'm not a huge fan of a delegate with 10 methods either, but I think we're prioritizing the trade-offs in the direction of non-breaking changes & clarity vs. ergonomics. I'd love to come up with a more robust API here, but I'm not sure if there is a better alternative without a major overhaul of the ApolloStore. Which is something we want to do eventually, but for now at least, this might be the way to go.

In terms of "trade-offs in the direction of non-breaking changes & clarity vs. ergonomics", I still disagree that an enum approach prioritizes breaking changes, as I can add a default method just as easily to this approach as to the 10 method one. Though if you think 10 methods prioritizes clarity over ergonomics, we could probably debate that for a while; I'm just not going to die on any hill for an enum or anything here 😜. I can see what the 10 method approach looks like, though I wonder, do you prefer updating this PR or creating a new one?

Can you be more specific about what you think is lacking here? I'm not clear on what's missing. There are a couple of improvements I can think of:

I think the naming is confusing and doesn't convey what it does. "Did change keys" intuitively means to me that something could have been added, removed, or updated, i.e. "change" is an ambiguous term for CRUD. When I was originally trying to understand it, and assuming that the reasons for a change included more than that, it felt like the method call was missing the "reason". So I considered adding the "reason" to the call, and realized when looking at the code that it only covered "merge" (i.e. create/update/write) and wasn't called when something is removed (via pattern or not), cleared, or simply read from the cache.

jimisaacs (Contributor, Author) commented:

Regarding the question of updating this PR or creating a new one, I'm leaning toward just creating a new one. PRs can get pretty confusing with multiple passes of large, sweeping changes plus discussion over them.

AnthonyMDev (Contributor) commented:

Thanks for the discussion @jimisaacs. I didn't mean to imply you don't understand our open source concerns! My apologies.

I was mistaken, it wasn't your team who requested this. I misremembered that, but it was a recent request, and not something we created with the intention of it being public. See #300.

I still disagree that an enum approach prioritizes breaking changes, as I can add a default method just as easily to this approach as to the 10 method one.

Maybe I'm not understanding what you mean here. What would a default method do for this? I know we can use a default method to prevent a breaking change from occurring in this PR. My concern is that future changes would introduce breaking changes if a new Action was added. Users' switch statements would no longer be exhaustive (unless they used a default case already, which is not required).
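
A minimal, self-contained illustration of that failure mode (hypothetical enum, not Apollo's):

enum Action {              // v1 of a library enum
    case merge
    case clear
}

func handle(_ action: Action) {
    switch action {        // exhaustive today; no default case required
    case .merge: print("merge")
    case .clear: print("clear")
    }
}

// If v2 adds `case removeRecord`, this switch is no longer exhaustive and any
// client that omitted a `default:` (or `@unknown default:`) stops compiling.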

I definitely am not super excited about a delegate with 10 different methods, but it is the common pattern Apple has used for a long time, and I do think it is less likely that future changes are breaking.

"Did change keys" intuitively means to me that something could have been added, removed, or updated, i.e. "change" is an ambiguous term for CRUD.

Ahhh I see. That makes sense. I'm open to deprecating that method and changing the name to didUpdateKeys. But I think that with the current implementation that still would cover insertions, so I'm not sure if "update" is the appropriate term either? I think it's great that you are adding methods for removals too. Adding separate methods for insertions might be tricky though, because insertions really only happen as part of a "merge".

do you prefer updating this PR or creating a new one?

Whichever is easier for you is fine with me. If you prefer creating a new PR, that works for me. Thanks so much for contributing!

@Iron-Ham is going to be working on another feature that is going to be heavily dependent on having a robust store subscriber in the next couple weeks. It would be nice to wait until he's got something functioning to merge this in, just to see if his use case surfaces any additional needs that should be added to this protocol. But I don't want to hold you up waiting for that either.

Iron-Ham (Contributor) commented May 7, 2024

@jimisaacs happy to chat through your use-case (via email, Slack, GitHub, etc.: whatever your preference – my personal email is directly available on my GitHub profile) and see where GitHub and Netflix alignments (and differences) can help inform the design of this API.

For us, the pain points I'm currently feeling are around optimistic state and query watchers. A user may take a number of actions in rapid succession that trigger both an optimistic update of the cache (or in-memory model) as well as a network mutation. The network mutation often returns some fragment which includes the updated keys (and often other related keys that are part of a given fragment). If these mutations are being executed on a single model (or a list of models), we end up in an inconsistent state given that our UI is driven by cache updates.

My assumption is that we can handle this generically with any of the following options:

  • Given a version of your proposed changes (or something like it), a custom ApolloStoreSubscriber type which has hooks for whether we want to apply a given cache data state.
  • Some changes to how data is written to the cache to keep track of optimistic state (complicated, lots of pitfalls)
  • Some layer that tracks a set of mutations within a given context to defer updates to our models on a field-level for a given set of cache-key changes (possible today, perhaps not generically applicable? I'm going to explore this to some extent next week).

jimisaacs (Contributor, Author) commented May 8, 2024

@Iron-Ham before going offline, can you clarify

If these mutations are being executed on a single model (or a list of models), we end up in an inconsistent state given that our UI is driven by cache updates.

I'm not sure how the single model or list of models inherently causes an inconsistent state?

Are your mutation responses not expected to have the same values as your optimistic updates?

Iron-Ham (Contributor) commented May 8, 2024

I'm not sure how the single model or list of models inherently causes an inconsistent state?

I'm happy to clarify: the mutation responses return the expected values of a single optimistic update, but not a chain of optimistic updates that may have fired before the response returns.

To illustrate:

  • I rapidly execute five actions (ActionA, ActionB, etc), each with an optimistic update and a mutation (which in the case of a successful mutation, returns a fragment representing the model that changed).
  • At this point, I have five stacked optimistic mutations in the UI.
  • ActionB returns first, with a model state that does not include the optimistic updates of actions A, C, D, or E. This data is written to the cache, which wipes away the optimistic state of the four other actions. At this point, the user has gone from a state of all five actions being committed to the UI, to four of them wiped away.
  • As each mutation returns, one of the user's optimistic states returns to the UI (and to the data state).
  • After all mutations return, the UI is now "correct" again, and the data now matches the optimistic state.

This isn't dissimilar to the challenges of an online multiplayer game: The client is inputting actions faster than the server can confirm them. Unlike a multiplayer game, we are committing the intermediate states to the application's data state as opposed to performing a periodic sync/confirmation.

It's an interesting problem to solve for, and one that doesn't have a great built-in solution currently. I'll be dedicating some time to it next week. Very narrowly scoped mutation return values could mitigate this (at the cost of not updating the rest of the model) – but the concept of auditing >1,000 mutations across six different deployments of GitHub (Dotcom, various GHES versions) is just one that I'm not sure I can entertain.
