Add Signal.subtle.requestSettledCallback #185

Open
robbiespeed opened this issue Apr 18, 2024 · 14 comments

robbiespeed commented Apr 18, 2024

Proposal

Add requestSettledCallback to the subtle namespace. This function would schedule a callback to run the next time graph propagation completes (everything has been marked dirty), i.e. after a signal.set() call and before control yields back to the program. It functions similarly to requestIdleCallback in that calling requestSettledCallback inside the executing settled queue adds the callback to the next queue, not the current one. This is unlike queueMicrotask, which appends to the currently running queue if there is one.

namespace Signal {
  namespace subtle {
    function requestSettledCallback(callback: () => void): void;
  }
}

Open to other suggestions for naming too, though I think the requestXCallback pattern is probably a good one to follow.
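For illustration, here is a minimal sketch of the intended timing, assuming the proposed requestSettledCallback exists (the count and double signals are just example names):

const count = new Signal.State(0);
const double = new Signal.Computed(() => count.get() * 2);

Signal.subtle.requestSettledCallback(() => {
  // Runs once count.set() below has finished dirtying the graph,
  // but before set() returns control to the caller. Reads are safe here.
  console.log("settled:", double.get()); // logs "settled: 2"
});

count.set(1);
// the settled callback above has already run by the time execution reaches here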

Motivation / Use Case

There are a few open issues I believe this would solve:

Synchronous Effects

Userland synchronous effects could be implemented with a userland batch and requestSettledCallback.

let pending = false;
let isInBatch = false;
const batchQueue: (() => void)[] = [];

let w = new Signal.subtle.Watcher(() => {
  if (!pending) {
    pending = true;
    const cb = () => {
      pending = false;
      for (let s of w.getPending()) s.get();
      w.watch();
    };
    if (isInBatch) {
      // At the end of batch `cb` will run
      batchQueue.push(cb);
    } else {
      // At the end of state.set() `cb` will run
      Signal.subtle.requestSettledCallback(cb);
    }
  }
});

export function batch (cb: () => void) {
  isInBatch = true;
  cb();
  isInBatch = false;
  let next = batchQueue.pop();
  while (next) {
    next();
    next = batchQueue.pop();
  }
}

export function synchronousEffect(cb: () => void) {
  let destructor;
  let c = new Signal.Computed(() => { destructor?.(); destructor = cb(); });
  w.watch(c);
  c.get();
  return () => { destructor?.(); w.unwatch(c) };
}

Batching isn't strictly necessary here, but for environments where effects (or subscriptions, like in Preact) run synchronously it's a helpful mechanism to avoid unnecessary runs.
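For example, usage of the sketch above might look like this (a hypothetical session, assuming requestSettledCallback behaves as proposed; a and b are illustrative names):

const a = new Signal.State(1);
const b = new Signal.Computed(() => a.get() * 2);

synchronousEffect(() => console.log(a.get(), b.get()));
// logs: 1 2 (initial run)

a.set(2);
// logs: 2 4 -- the effect re-runs before set() returns

batch(() => {
  a.set(3);
  a.set(4);
});
// logs: 4 8 -- a single run at the end of the batch instead of one per set()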

Selector

Selectors which avoid double re-renders could be implemented.

A double re-render happens if the synchronization to child signals happens asynchronously, because the view may render the input (or something derived from it) as well as the child signals. Having a synchronous mechanism for setting the child signals prevents this.

export function createSelector<TIn, TOut>(
  input: Signal.State<TIn> | Signal.Computed<TIn>,
  mapper: (isSelected: boolean) => TOut
): (value: TIn) => Signal.State<TOut> {
  // Child signals are held weakly so unused entries can be collected.
  const childSignals = new Map<TIn, WeakRef<Signal.State<TOut>>>();
  const registry = new FinalizationRegistry((value: TIn) => {
    childSignals.delete(value);
  });

  let store = input.get();

  const handleChange = () => {
    const nextValue = input.get();
    if (store === nextValue) {
      return;
    }

    // Synchronously flip only the two affected child signals.
    childSignals.get(store)?.deref()?.set(mapper(false));
    childSignals.get(nextValue)?.deref()?.set(mapper(true));
    store = nextValue;
  };

  const w = new Signal.subtle.Watcher(() =>
    Signal.subtle.requestSettledCallback(handleChange)
  );
  w.watch(input);

  return function selector(value: TIn): Signal.State<TOut> {
    let selectedSignal = childSignals.get(value)?.deref();
    if (selectedSignal !== undefined) {
      return selectedSignal;
    }

    selectedSignal = new Signal.State(mapper(store === value));
    childSignals.set(value, new WeakRef(selectedSignal));
    registry.register(selectedSignal, value);

    return selectedSignal;
  };
}
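A hypothetical usage of the selector, where each row of a list watches only its own selection signal (selected, isRowSelected, and the row indices are illustrative):

const selected = new Signal.State(0);
const isRowSelected = createSelector(selected, (on) => on);

const row0 = isRowSelected(0); // true while selected.get() === 0
const row3 = isRowSelected(3); // false until row 3 is selected

selected.set(3);
// row0 flips to false and row3 flips to true before set() returns,
// so only the rows whose selection state actually changed re-render.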

Alternatives

A. Allow untracked get inside notify

If we allow this using the existing algorithm (notify runs as its producers are dirtied, during the dirtying phase), we get instability in the graph. The notify callback could mark something clean which was just marked dirty; it can then get marked dirty again, triggering the same notify callback, and so on. As an example you can check out this JSBin which allows get in notify. What you see is not only that notify is called multiple times, but that the underlying computed being watched also recomputes multiple times with incorrect state, then finally computes with correct state (after all its producers have been dirtied).

B. Schedule notify to run after graph has finished dirtying

Instead of executing notify callbacks as producers are dirtied, push them to a queue and run that queue when dirtying is done (the same place callbacks queued via requestSettledCallback would have run).

This is safe and won't have the same downsides as alternative A.

There are minor drawbacks however:

  • Callbacks that were going to be queued via requestAnimationFrame or queueMicrotask end up doubly queued: once via a wrapper callback that gets queued to run after the dirtying phase, and again when the callback goes into the final queue. Ex:
let pending = false;
const w = new Signal.subtle.Watcher(
  // this is queued to run at a safe time after all nodes are dirtied (1st Queue)
  () => {
    if (pending) return;
    pending = true;
    // This is the thing we actually want to run, but we need to append it to a queue (2nd Queue)
    const cb = () => {
      pending = false;
      for (let s of w.getPending()) s.get();
      w.watch();
    };
    queueMicrotask(cb);
  }
);
  • Only notify callbacks can be scheduled directly into the Watcher queue. This makes it difficult (but still possible) to schedule polling-like mechanisms that run after any state change in the graph.
  • Lack of symmetry. With requestSettledCallback, changing a reaction from asynchronous (queueMicrotask) to synchronous, or vice versa, is as simple as replacing the scheduling function being called.
@NullVoxPopuli (Collaborator)

the next time graph propagation completes

when does this happen? / what is this process? 😅

when you .get() a value, you synchronously calculate the value, no propagation 😅

when you .set() a value, watchers would know (if they consumed anything that depends on what was set) and would re-run (not 100% on timing here -- I should read the code again).

@robbiespeed (Author) commented Apr 20, 2024

@NullVoxPopuli propagation along the graph is not the same as recomputation. It's the stage that dirties consumers. In the case of this polyfill, that happens first by incrementing a global clock, then by explicitly dirtying live consumers.

You'd want to flush the queue right after this line in the polyfill:

producerNotifyConsumers(node);

@dead-claudia (Contributor)

Could you walk through the order of operations in .set() with this proposal? I'm struggling to see how it differs from just doing stuff at the end of the watcher.

@robbiespeed (Author)

@dead-claudia consider the following graph:
(Screenshot of an example dependency graph: state S with consumers B–F.)
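The screenshot isn't reproduced here, but a graph consistent with the walkthrough below might look like this (the names match the walkthrough; the exact computations are assumptions made for illustration):

const S = new Signal.State(0);

const B = new Signal.Computed(() => S.get() + 1);
const C = new Signal.Computed(() => S.get() * 2);

const D = new Signal.Computed(() => B.get() - 1); // watched
const E = new Signal.Computed(() => B.get() + 1); // watched
const F = new Signal.Computed(() => C.get() + 1); // watched

const w = new Signal.subtle.Watcher(() => { /* notify */ });
w.watch(D, E, F);
D.get(); E.get(); F.get(); // reading while watched makes the subgraph live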

Calling S.set(value), assuming value is different, will do the following:

  • store the new value in S
  • increment internal version on S
  • increment global clock (used for dirty checking of non-live consumers)
  • recursively walk the live consumer graph and mark non-dirty nodes as dirty (running the notify callback of watchers):
    • B dirty
    • D dirty and run notify
    • E dirty and run notify
    • C dirty
    • F dirty and run notify
  • run queued callbacks from requestSettledCallback

The problem with doing signal.get() inside a watcher notify callback is that there's no guarantee everything will be in a stable state, which is why it is not allowed during this stage. Technically set() could safely be allowed at this stage, since it would only mark previously not-dirty things as dirty.

Adding a queue that runs after dirtying/notify phase allows for making safe synchronous effects.

Another possibility is to have all watchers delay running notify callbacks till the end (the same place I'm proposing the settled queue runs). However, that adds unnecessary overhead for watchers that are going to schedule something asynchronously using queueMicrotask or something like it.

@dead-claudia (Contributor) commented Apr 21, 2024

The problem with doing signal.get() inside a watcher notify callback is that there's no guarantee everything will be in a stable state, which is why it is not allowed during this stage.

@robbiespeed To clarify, what do you mean by (not) "stable"? Trying to tease out the underlying assumptions here.

@robbiespeed (Author) commented Apr 21, 2024

@dead-claudia "stable" in this context means all consumers are marked dirty.

Watcher notify runs during a non-stable phase; if reads were allowed during this phase, a callback might read something that was just marked dirty before all of that node's other sources have been marked dirty. Notify disables reads for this reason, but for illustration let's say it doesn't, and walk through an example of a synchronous effect using a watcher.

let w = new Signal.subtle.Watcher(() => {
  for (let e of w.getPending()) e.get();
  w.watch();
});

function effect(cb) {
  let destructor;
  let e = new Signal.Computed(() => { destructor?.(); destructor = cb(); });
  w.watch(e);
  e.get();
  return () => { destructor?.(); w.unwatch(e) };
}

const a = new Signal.State(1);

const b = new Signal.Computed(() => a.get() + 1);
const c = new Signal.Computed(() => a.get() * 2);
const d = new Signal.Computed(() => c.get() - 1);

effect(() => {
  console.log(a.get(), b.get(), c.get(), d.get());
});
// logs: 1, 2, 2, 1

a.set(2);
// logs: 2, 3, 2, 1  -- Glitch: c and d still have old values
// logs: 2, 3, 3, 2

Why does this happen? Because the watcher is notified first, before c or d get dirtied, and then again after each node used in the effect is dirtied.

  • b dirty
    • e dirty
      • watcher of e notified
      • e.get() called, marking it not dirty
  • c dirty
    • d dirty
      • e dirty
        • watcher of e notified
        • e.get() called, marking it not dirty
    • e dirty check, but none of its sources are dirty, so it is marked not dirty
  • e dirty check, but none of its sources are dirty, so it is marked not dirty

First off, one problem is that notify is going to get called multiple times.

Secondly, the glitches. Whether you do a depth-first walk (currently what's in the polyfill) or a breadth-first walk, there will be scenarios where nodes on the graph can be accessed before being in a stable state, unless all notification is queued until after dirtying, which (as stated in my previous comment) adds unnecessary overhead for notifiers that were going to append a callback to an async queue anyway. There are ways you could stop the effect from running multiple times without queueing till the end, but then you still risk ending up with a glitchy run.

Edited to remove re-runs of the effect after all sources are clean, since as far as I understand the polyfill implementation does additional dirty checks. That's not guaranteed for all implementations though, and there's overhead in doing that check.

@dead-claudia (Contributor) commented Apr 23, 2024

@robbiespeed I thought watchers were only called on the immediately-set signal? And it makes little sense to call watchers that aren't attached to either the state or any of its parents. Where are the other watcher calls coming from?

Edit: narrowed the question down to make more sense.

@robbiespeed (Author)

@dead-claudia the watcher is attached to the e Computed; e is fluctuating from dirty to not dirty. The first time it's marked as dirty it notifies the watcher, which internally calls e.get() to run the effect, which marks e clean. But e had other sources which hadn't been marked dirty yet, and once they are, e gets marked dirty again (notifying the watcher), and so on. All of this happens "immediately", before set returns. I must preface again that this is to illustrate what would happen if notify allowed reads.

@dead-claudia (Contributor) commented Apr 23, 2024

@robbiespeed The watcher addition is supposed to be deduplicated. The polyfill in this repo has an outstanding bug where it's currently not. #157

If you're observing multiple watcher notify calls, either that or something similar is why. It's likely propagating or otherwise linking up sources without properly deduplicating them before it calls them, and that would be the ultimate source of the problem.

@robbiespeed (Author) commented Apr 23, 2024

@dead-claudia that is not what I'm referring to. In this case the watcher only has a single source, e, but that source will get repeatedly dirtied if reads were allowed during the notification phase. I suggest you look here to see how dirtiness propagates to live nodes:

export function producerNotifyConsumers(node: ReactiveNode): void {

And here to see what happens to dirty values that get accessed:

export function producerUpdateValueVersion(node: ReactiveNode): void {

I've also created a JSBin which removes the notification-phase check in the producerAccessed function (allowing reads during notify). The repeated glitchy recomputation of e is actually worse, since it seems b, c, d are dirtied in order after each access of e.
https://jsbin.com/powuqokitu/edit?js,console

@sorvell commented Apr 24, 2024

Is it clear why the Watcher notification callback doesn't just run at the graph-safe time discussed here?

I haven't been able to get a good answer to that.

For reference, polyfill-implementation-wise, this "spot" seems to be exposed here.

@robbiespeed (Author)

@sorvell do you mean queue all notifications until dirtying is finished? I mention that above, but I'll update the first post to include it under an Alternatives section.

Another possibility is to have all watchers delay running notify callbacks till the end (the same place I'm proposing the settled queue runs). However, that adds unnecessary overhead for watchers that are going to schedule something asynchronously using queueMicrotask or something like it.

I'll expand on that a bit. You'd essentially be double-queuing logic in the majority of cases. Some wrapper callback would first go into the "safe notify" queue, then once executed there it would schedule the wrapped callback into the user's desired async queue like requestAnimationFrame, queueMicrotask, etc. Ex:

let pending = false;
const w = new Signal.subtle.Watcher(
  // this is queued to run at a safe time after all nodes are dirtied (1st Queue)
  () => {
    if (pending) return;
    pending = true;
    // This is the thing we actually want to run, but we need to append it to a queue (2nd Queue)
    const cb = () => {
      pending = false;
      for (let s of w.getPending()) s.get();
      w.watch();
    };
    queueMicrotask(cb);
  }
);

Whereas adding requestSettledCallback means there's no queue by default, and you can achieve either synchronous or asynchronous scheduling with as few as one queue. There's also a nice symmetry: the code you'd write to schedule a reaction asynchronously matches what you'd write for a synchronous reaction.
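For contrast, here is a sketch of the same watcher using the proposed requestSettledCallback, where the reaction goes through a single queue (swap in queueMicrotask to make it asynchronous; nothing else changes):

let pending = false;
const w = new Signal.subtle.Watcher(() => {
  if (pending) return;
  pending = true;
  // Single queue: the real work is scheduled directly for the settled phase.
  Signal.subtle.requestSettledCallback(() => {
    pending = false;
    for (const s of w.getPending()) s.get();
    w.watch();
  });
});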

@dead-claudia (Contributor)

@robbiespeed That feels like a bug waiting to happen, and I was going to try to see if something similar happens with this sequence, but I ran into #192:

const WA = new Signal.subtle.Watcher(() => {
  console.log("watch WA")
  WC.watch(C)
})

const WC = new Signal.subtle.Watcher(() => {
  console.log("watch WC")
  B.get() // not watched by `WC`
})

const A = new Signal.Computed(() => {
  console.log("get A")
  B.get()
  C.get()
  console.log("get A end")
})

const B = new Signal.State(1)
const C = new Signal.State(1)

A.get()
WA.watch(A)

B.set(2)
C.set(2)

@robbiespeed (Author)

@dead-claudia it would be a bug waiting to happen if it were allowed, but thankfully it's not allowed. Though I do think the Watcher API could probably use an overhaul, as it seems too easy to produce user error. Ex: it makes sense internally why watcher.watch() works inside notify but watcher.watch(signal) doesn't; watcher.watch() should probably be renamed watcher.reset().
