
Proper music and sound design showcase #352

Open

olofson opened this issue Oct 10, 2022 · 0 comments

Labels: documentation · enhancement (minor features and improvements on existing functionality) · feature (entirely new features)

Comments


olofson commented Oct 10, 2022

However "interesting," the current demos are kind of rubbish, and at best vaguely hint at the capabilities of Audiality 2. This needs to be corrected.

Plan:

  1. Basic VST plugin, or at least a decent virtual MIDI solution, so one can wire A2 to a DAW for some proper music composition.
  2. Rudimentary live A2S editor for quick, interactive editing.
  3. Live monitor tool, with graph visualization and performance metering.
  4. Create a bunch of sound effects/objects, showcasing the parametric capabilities, and infinite variations made possible by real time synthesis. Ideas:
    • Explosions.
    • Engines, with a "lively" nature, and proper throttle and load response.
    • Weapons.
    • Structured, parametric ambiences.
    • Modeled, parametric footsteps.
    • Semi-structured music, combining traditional audio tracks and samples with live synthesis.
    • Interactive music.
  5. Tech demos, showcasing unique key features of A2:
    • User defined mixer/bus/track/voice/event... structure. "Build Your Own Engine."
    • Lightweight voices, with sub-sample accurate timestamped events. Demonstrate how approaches that will bring other middlewares to their knees (extreme event rates, timestamping, high voice counts, ...) could be perfectly viable for prototyping, or even production, with A2.
    • Lightweight sub-sample accurate scripting, allowing the implementation of complex interactive sound objects, and even custom synthesis algorithms, without having to resort to custom plugins.
    • Worker threads to distribute music, ambiences, and other "high latency" audio over multiple CPU cores.
    • Offline rendering, using the same assets as for real time playback, to easily create infinitely complex audio without the need for manual bouncing.
    • Using offline rendering to automate the creation of LOD levels for complex sound designs - like SpeedTree™ for audio.
  6. Wrap it all into some sort of interactive "game," using some 3D engine. UE springs to mind, but if we're using Godot for the authoring tool, just using that for the demo(s) as well might make more sense.

The tests already cover some of these concepts, but they're very minimal, audio-only (no GUIs or anything), and IIRC, the only "documentation" is brief explanations in the form of code comments.
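For the "sub-sample accurate timestamped events" point, the core concept can be sketched in a few lines of plain C: timestamps in 24.8 fixed point (frames : 256ths of a frame), with each render block dispatching every event that falls inside it. The `Event` struct and `run_block()` below are invented for this sketch and are not A2's actual internals - they only show how sub-frame timing survives block-based rendering.

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of sub-sample-accurate event dispatch; not Audiality 2 code.
   Timestamps are 24.8 fixed point: integer frame in the high bits,
   1/256th-frame fraction in the low byte. */
typedef struct {
    uint32_t when;   /* 24.8 fixed point frame time */
    int pitch;       /* example payload */
} Event;

/* Dispatch all events in [block_start, block_start + frames) and return
   how many were handled. 'handle' gets the integer frame offset within
   the block and the sub-frame fraction in 1/256ths. */
static size_t run_block(const Event *ev, size_t nev,
                        uint32_t block_start, uint32_t frames,
                        void (*handle)(uint32_t offset, uint32_t frac,
                                       int pitch))
{
    uint32_t block_end = block_start + (frames << 8);
    size_t handled = 0;
    for (size_t i = 0; i < nev; ++i)
        if (ev[i].when >= block_start && ev[i].when < block_end) {
            uint32_t rel = ev[i].when - block_start;
            handle(rel >> 8, rel & 0xff, ev[i].pitch);
            ++handled;
        }
    return handled;
}
```

Because an event is just a timestamp and a small payload, queues like this stay cheap even at the extreme event rates mentioned above.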

olofson added the enhancement, feature and documentation labels on Oct 10, 2022