Live on-demand wave rendering #345

Open
olofson opened this issue Oct 6, 2018 · 0 comments
Labels: feature (Entirely new features), multitdreading (Issues related to distributing work across CPU threads)

Comments

olofson (Owner) commented Oct 6, 2018

TL;DR: Zero-latency direct-from-disk sample playback, but for rendered waves. Essentially #131 and #152 combined.

Add support for waves that are rendered on demand when referred to at compile/load time, but where only the first moments of these waves are actually rendered and cached, and the rest is rendered by worker threads and streamed to oscillators as the waves are actually played.
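To make the mechanics concrete, below is a minimal C sketch of what such a streaming wave instance could look like: a prerendered head cached at compile/load time, plus a lock-free single-producer/single-consumer ring buffer that a worker thread fills with the rest of the wave while it plays. The ODW_* names, the int16_t sample format and the buffer sizes are illustrative assumptions, not actual Audiality 2 data structures.

```c
/*
 * Sketch of an on-demand wave instance: a prerendered "head" cached at
 * compile/load time, plus a lock-free SPSC ring buffer that a worker thread
 * fills with the rest of the wave during playback. Names and layout are
 * assumptions for illustration only.
 */
#include <stdint.h>
#include <stdatomic.h>

#define ODW_RING_FRAMES 4096    /* Tail ring size; must be a power of two */

typedef struct ODW_wave
{
    /* Cached head, rendered up front */
    int16_t     *head;
    unsigned    head_frames;
    unsigned    play_pos;       /* Playback position (realtime side only) */

    /* Streamed tail, rendered by a worker thread during playback */
    int16_t     ring[ODW_RING_FRAMES];
    atomic_uint write_pos;      /* Advanced by the worker thread */
    atomic_uint read_pos;       /* Advanced by the realtime oscillator */

    unsigned    total_frames;   /* Full length of the wave */
} ODW_wave;

/* Realtime side: deliver up to 'frames' frames; returns frames delivered */
static unsigned odw_read(ODW_wave *w, int16_t *out, unsigned frames)
{
    unsigned done = 0;

    /* 1. Serve from the cached head while still inside it */
    while((done < frames) && (w->play_pos < w->head_frames))
        out[done++] = w->head[w->play_pos++];

    /* 2. Then drain whatever the worker has streamed into the ring */
    unsigned rp = atomic_load_explicit(&w->read_pos, memory_order_relaxed);
    unsigned wp = atomic_load_explicit(&w->write_pos, memory_order_acquire);
    while((done < frames) && (rp != wp))
    {
        out[done++] = w->ring[rp & (ODW_RING_FRAMES - 1)];
        ++rp;
    }
    atomic_store_explicit(&w->read_pos, rp, memory_order_release);

    /* Anything not delivered here is an underrun; the worker fell behind */
    return done;
}
```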

Note that for straight caching and sharing of rendering streams to work properly, these waves need to be entirely deterministic. (Random seeds still won't need to be locked down, though, as each cached wave would come with a frozen VM state that is cloned by each worker thread that renders the remaining part of the wave.)
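Continuing the sketch above, the worker side could look something like the following. A2_vmstate, a2_VMClone(), a2_VMRender() and a2_VMFree() are hypothetical stand-ins for whatever the real VM snapshot API ends up being; the point is that cloning the frozen state lets any worker thread produce a bit-exact tail for a deterministic wave.

```c
/* Hypothetical VM snapshot API; not the actual Audiality 2 interface */
typedef struct A2_vmstate A2_vmstate;

A2_vmstate *a2_VMClone(const A2_vmstate *frozen);                       /* Assumed */
unsigned a2_VMRender(A2_vmstate *vm, int16_t *out, unsigned frames);    /* Assumed */
void a2_VMFree(A2_vmstate *vm);                                         /* Assumed */

/* Worker thread: stream the remaining part of 'w' into its ring buffer */
static void odw_worker_render(ODW_wave *w, const A2_vmstate *frozen)
{
    A2_vmstate *vm = a2_VMClone(frozen);    /* Resume where the head ended */
    unsigned rendered = w->head_frames;
    while(rendered < w->total_frames)
    {
        /* How much free space is there in the ring right now? */
        unsigned rp = atomic_load_explicit(&w->read_pos,
                memory_order_acquire);
        unsigned wp = atomic_load_explicit(&w->write_pos,
                memory_order_relaxed);
        unsigned space = ODW_RING_FRAMES - (wp - rp);
        if(!space)
            continue;   /* Real code would sleep or yield here */

        /* Render a chunk and copy it into the ring */
        int16_t chunk[256];
        unsigned n = space < 256 ? space : 256;
        if(n > w->total_frames - rendered)
            n = w->total_frames - rendered;
        n = a2_VMRender(vm, chunk, n);
        if(!n)
            break;      /* VM program ended early */
        for(unsigned i = 0; i < n; ++i)
            w->ring[(wp + i) & (ODW_RING_FRAMES - 1)] = chunk[i];
        atomic_store_explicit(&w->write_pos, wp + n,
                memory_order_release);
        rendered += n;
    }
    a2_VMFree(vm);
}
```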

"Fake" non-determinism can be implemented in the same way as for samples; round-robins, neighbor borrowing, layering of loops of different lengths, and various kinds of realtime processing. The only difference here is basically that the "samples" are generated on the fly, rather than being streamed from disk.

For non-deterministic waves, there are still a few options:

  1. Keep a queue of cached/streaming wave instances, where a new instance is grabbed for each use on the realtime side, and new instances are added by worker threads as needed. (See the sketch after this list.)
    • Pro: Rendering CPU load still never impacts the realtime context, as all wave rendering is done by worker threads.
    • Con: There is a risk of running out of wave instances due to worker thread latency or overload.
  2. Start rendering waves in the realtime context while notifying the worker threads, and then switch to streaming as the worker threads are ready.
    • Pro: No prerendering or caching needed.
    • Pro: No running out of instances.
    • Con: Initial rendering CPU load hits the realtime context for non-deterministic durations.
    • Con: Either the realtime rendering duration has to be set long enough to guarantee that the worker threads are ready for streaming before it expires, or the same buffers have to be rendered on both sides until the worker threads have buffered enough for reliable streaming.
  3. Hybrid; do 1. normally, but fall back to 2. when running out of cached wave instances. The number of cached instances can be increased dynamically whenever bumping into the limit.
    • Pro: Avoids the problems of 2. "most of the time."
    • Con: Still hits 2. hard when large numbers of instances are needed, which is likely in exactly the situations where CPU load is already high on both the realtime and worker threads.
    • Con: Dynamically increasing the number of cached wave instances is difficult to do in a manner that serves all use cases well.
  4. Wave instance round-robin. (Basically just using an array of cached wave instances from the same wave script, so that the instances can have different random seeds.)
    • Pro: Business as usual; just more waves in the cache.
    • Con: There are only so many different variants of each wave.
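As referenced under option 1, here is a rough sketch of the cached-instance queue, continuing the ODW_* sketches above: a per-wave SPSC queue of prerendered instances, where the realtime side grabs one instance per use and a worker thread renders replacements. With several worker threads refilling the same pool, the producer side would need an MPSC queue or a lock instead. odw_worker_new_instance() is an assumed helper that renders a new head and freezes a VM state for the tail.

```c
#define ODW_POOL_SIZE 8     /* Ready instances per wave; power of two */

ODW_wave *odw_worker_new_instance(void);    /* Assumed helper */

typedef struct ODW_pool
{
    ODW_wave    *slots[ODW_POOL_SIZE];
    atomic_uint head;   /* Advanced by the worker thread (producer) */
    atomic_uint tail;   /* Advanced by the realtime context (consumer) */
} ODW_pool;

/* Realtime side: grab a ready instance, or NULL if the pool has run dry */
static ODW_wave *odw_pool_grab(ODW_pool *p)
{
    unsigned t = atomic_load_explicit(&p->tail, memory_order_relaxed);
    unsigned h = atomic_load_explicit(&p->head, memory_order_acquire);
    if(t == h)
        return NULL;    /* Option 3 fallback: render in the realtime context */
    ODW_wave *w = p->slots[t & (ODW_POOL_SIZE - 1)];
    atomic_store_explicit(&p->tail, t + 1, memory_order_release);
    return w;
}

/* Worker side: top the pool up whenever there is room */
static void odw_pool_refill(ODW_pool *p)
{
    unsigned h = atomic_load_explicit(&p->head, memory_order_relaxed);
    unsigned t = atomic_load_explicit(&p->tail, memory_order_acquire);
    while(h - t < ODW_POOL_SIZE)
    {
        p->slots[h & (ODW_POOL_SIZE - 1)] = odw_worker_new_instance();
        atomic_store_explicit(&p->head, ++h, memory_order_release);
        t = atomic_load_explicit(&p->tail, memory_order_acquire);
    }
}
```

Dynamically growing ODW_POOL_SIZE when odw_pool_grab() keeps returning NULL is the option 3 tuning problem mentioned above, and is deliberately left out of the sketch.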
@olofson olofson added the feature Entirely new features label Oct 6, 2018
@olofson olofson added the multitdreading Issues related to distributing work across CPU threads label Oct 12, 2022