How to build LabSound projects with Emscripten? #145

Open
DanieleCapuano opened this issue Nov 13, 2020 · 18 comments

@DanieleCapuano

DanieleCapuano commented Nov 13, 2020

I succeeded in building the examples using
emcmake cmake -DCMAKE_INSTALL_PREFIX='../labsound-distro-emcc' ..
cmake --build . --target install

but when I try to run the output .js file I get a console error:
"RtApiDummy: This class provides no functionality".

@meshula
Member

meshula commented Nov 13, 2020

A WebAudio back end would need to be written for that. If miniaudio works with Emscripten, then the miniaudio back end might work. The logic at the top of this file: https://github.com/LabSound/LabSound/blob/master/cmake/LabSound.cmake would need to be modified to know how to build for Emscripten. If this works, a PR would be welcome!
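For illustration, one way to check the "if miniaudio works with Emscripten" question independently of LabSound would be to compile a bare miniaudio playback device with emcmake and listen for output in the browser. This is only a sketch of such a probe; the test-tone callback is illustrative and not LabSound code.

#define MINIAUDIO_IMPLEMENTATION
#include "miniaudio.h"
#include <cmath>

// Fill the output with a quiet 440 Hz test tone; if this is audible in the
// browser, miniaudio's Web Audio backend is working under Emscripten.
static void data_callback(ma_device* device, void* output, const void* input, ma_uint32 frameCount)
{
    static double phase = 0.0;
    float* out = (float*)output;
    for (ma_uint32 i = 0; i < frameCount; ++i) {
        float s = (float)(0.2 * sin(phase));
        phase += 2.0 * 3.141592653589793 * 440.0 / device->sampleRate;
        out[i * 2 + 0] = s;   // left
        out[i * 2 + 1] = s;   // right
    }
    (void)input;
}

int main()
{
    ma_device_config config = ma_device_config_init(ma_device_type_playback);
    config.playback.format   = ma_format_f32;
    config.playback.channels = 2;
    config.sampleRate        = 44100;
    config.dataCallback      = data_callback;

    ma_device device;
    if (ma_device_init(NULL, &config, &device) != MA_SUCCESS)
        return 1;
    ma_device_start(&device);
    // With Emscripten's default EXIT_RUNTIME=0 the runtime stays alive after
    // main() returns, so the audio callback keeps firing.
    return 0;
}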

@imiskolee

I am thinking of a direct port to the native Web Audio API.

@meshula
Member

meshula commented Jul 2, 2021

Yes, that makes sense.

@Palmer-JC

I am very interested in this. Microsoft has a WebGL framework, BabylonJS, that also has community contributors. (I write / support a Blender exporter.) They have also been developing something called BabylonNative, https://github.com/BabylonJS/BabylonNative. It allows the JavaScript that comprises a 3D scene to run as a native application on Windows, Android, Linux, macOS & iOS devices / XR headsets.

Web audio is a glaring missing piece. I have started trying to wrap LabSound into its add-in facility, but did not get very far. Since there is a JavaScript VM inside the app, the Emscripten route sounds even cleaner.

@meshula
Member

meshula commented Jul 2, 2021

Integration into BabylonJS is an interesting idea. There are already some js bindings for LabSound, perhaps the node3d bindings by @raub could provide some insight into how to do it: https://github.com/node-3d/webaudio-raub

@Palmer-JC

Thanks, pointing out that N-API implementation was really helpful. I mentioned it to Microsoft, and they really liked it. Their preliminary take was that the src & js directories are just what is needed. The person wondered how this would fit into their cmake / sub-module build framework.

I think the project lead, who will need to look at it, is out this week on holiday plus vacation. It sounds like they might do everything themselves, which is fine by me.

thanks, again.

@meshula
Member

meshula commented Jul 8, 2021

Sounds good! I'm more than happy to accept PRs or Issues from them here.

@meshula
Member

meshula commented Dec 3, 2021

This issue can move forward once WebAudio/web-audio-api#2442 is resolved. If the Emscripten patches by @juj noted there land, an emsc backend for LabSound will be straightforward.

@Avataren
Contributor

It seems like it should be possible to create a ScriptProcessorNode or AudioWorkletNode, pull the graph and render the buffer in a call exposed through WebAssembly, and copy the result during the onaudioprocess callback on the web node.
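For illustration, a minimal sketch of what the WebAssembly side of that idea could look like. RenderLabSoundGraph() is a hypothetical helper standing in for pulling the LabSound graph, not an existing API; the exported function fills a float buffer that the JavaScript onaudioprocess callback would copy out of the WASM heap.

#include <emscripten/emscripten.h>
#include <cstring>

// Hypothetical helper that pulls the LabSound graph and writes interleaved
// float samples into 'out'. Not part of the real LabSound API.
extern void RenderLabSoundGraph(float* out, int frames, int channels);

extern "C" {

// Exported so JS can call Module._render_quantum(ptr, frames, channels)
// from a ScriptProcessorNode/AudioWorkletNode callback and copy the
// rendered samples into the AudioBuffer output channels.
EMSCRIPTEN_KEEPALIVE
void render_quantum(float* out, int frames, int channels)
{
    std::memset(out, 0, sizeof(float) * frames * channels);
    RenderLabSoundGraph(out, frames, channels);
}

} // extern "C"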

@meshula
Member

meshula commented Sep 11, 2023

That's a very interesting idea. I see some things around WASM and AudioWorkletNode, e.g. https://emscripten.org/docs/api_reference/wasm_audio_worklets.html, and I'm very curious how that's implemented. It looks like it should be possible to use LabSound in conjunction with the methods they describe there, although at first glance I can't tell how much work is involved.
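For reference, a sketch adapted from the example on that Emscripten docs page, showing where a LabSound destination pull could plug in. The "labsound-out" processor name and the zero-fill placeholder are illustrative only, not an existing backend.

#include <emscripten/webaudio.h>
#include <cstdint>

// Stack memory for the Wasm Audio Worklet thread, as in the Emscripten docs.
uint8_t audioThreadStack[4096];

// Runs on the audio worklet thread once per 128-frame render quantum.
// Placeholder: this is where LabSound-rendered samples would be written.
EM_BOOL ProcessAudio(int numInputs, const AudioSampleFrame* inputs,
                     int numOutputs, AudioSampleFrame* outputs,
                     int numParams, const AudioParamFrame* params,
                     void* userData)
{
    for (int i = 0; i < numOutputs; ++i)
        for (int j = 0; j < 128 * outputs[i].numberOfChannels; ++j)
            outputs[i].data[j] = 0.0f; // replace with LabSound output
    return EM_TRUE; // keep the node alive
}

void ProcessorCreated(EMSCRIPTEN_WEBAUDIO_T context, EM_BOOL success, void* userData)
{
    if (!success) return;
    int outputChannelCounts[1] = { 2 };
    EmscriptenAudioWorkletNodeCreateOptions options = {};
    options.numberOfInputs = 0;
    options.numberOfOutputs = 1;
    options.outputChannelCounts = outputChannelCounts;
    EMSCRIPTEN_AUDIO_WORKLET_NODE_T node = emscripten_create_wasm_audio_worklet_node(
        context, "labsound-out", &options, &ProcessAudio, nullptr);
    emscripten_audio_node_connect(node, context, 0, 0); // to the destination
}

void WorkletThreadInitialized(EMSCRIPTEN_WEBAUDIO_T context, EM_BOOL success, void* userData)
{
    if (!success) return;
    WebAudioWorkletProcessorCreateOptions opts = {};
    opts.name = "labsound-out";
    emscripten_create_wasm_audio_worklet_processor_async(context, &opts, &ProcessorCreated, nullptr);
}

int main()
{
    EMSCRIPTEN_WEBAUDIO_T context = emscripten_create_audio_context(nullptr);
    emscripten_start_wasm_audio_worklet_thread_async(context, audioThreadStack,
                                                     sizeof(audioThreadStack),
                                                     &WorkletThreadInitialized, nullptr);
}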

@Avataren
Contributor

Another option might be to simply make an OpenAL backend, as Emscripten has its own OpenAL port that works with Web Audio. Using double buffering and streaming buffers is probably the way to go here; it would increase the latency, but might be worth looking into.
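For illustration, a minimal sketch of that double-buffered streaming pattern with OpenAL. FillFromLabSound() is a hypothetical stand-in for pulling samples from the LabSound graph; source and buffer creation (alGenSources/alGenBuffers) are assumed to happen elsewhere.

#include <AL/al.h>
#include <cstdint>
#include <vector>

// Hypothetical stand-in: pulls 'frames' interleaved 16-bit stereo frames
// out of the LabSound graph. Not part of the real LabSound API.
extern void FillFromLabSound(int16_t* dst, int frames);

constexpr int kNumBuffers = 2;      // double buffering
constexpr int kFrames     = 1024;   // frames per buffer
constexpr int kRate       = 44100;

static void fillAndQueue(ALuint source, ALuint buffer)
{
    std::vector<int16_t> scratch(kFrames * 2);        // stereo, interleaved
    FillFromLabSound(scratch.data(), kFrames);
    alBufferData(buffer, AL_FORMAT_STEREO16, scratch.data(),
                 (ALsizei)(scratch.size() * sizeof(int16_t)), kRate);
    alSourceQueueBuffers(source, 1, &buffer);
}

// Prime both buffers once, then start playback.
void primeOpenAL(ALuint source, const ALuint* buffers)
{
    for (int i = 0; i < kNumBuffers; ++i)
        fillAndQueue(source, buffers[i]);
    alSourcePlay(source);
}

// Call regularly (e.g. from the main loop): refill whatever buffers the
// source has finished playing and queue them again.
void pumpOpenAL(ALuint source)
{
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buf = 0;
        alSourceUnqueueBuffers(source, 1, &buf);
        fillAndQueue(source, buf);
    }
}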

@Avataren
Contributor

SDL_Audio might also be a viable backend for web.

@juj

juj commented Sep 14, 2023

The bug reported to Web Audio in WebAudio/web-audio-api#2442 is not necessary for using Audio Worklets with Emscripten. Support for Audio Worklets with Emscripten has already landed, and the Web Audio API issue 2442 is more of a performance optimization, simplification and code size improvement to the existing integration.

@Avataren
Contributor

Avataren commented Sep 14, 2023

As I recall, an audio worklet will require the whole process to run in that "worker context", making any communication with the main thread a bit complicated.

Ideally the Emscripten code runs in the main thread and exposes the audio graph through a SharedArrayBuffer that the worklet can access to render the graph.

This way you can make a whole application using OpenGL/WebGL and LabSound, simply pick a native target or Emscripten at compile time, and everything should just work, even C++ threading.

Some glue will need to be added on the JS side to facilitate this, of course, but it could be added with Emscripten.
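As a rough sketch of the hand-off that idea implies, assuming the build uses Emscripten pthreads (so the WASM heap is backed by a SharedArrayBuffer): the main thread renders the LabSound graph into a single-producer/single-consumer ring buffer, and the worklet side drains it each render quantum. The names here are illustrative, not an existing LabSound API.

#include <atomic>
#include <cstddef>

// Single-producer / single-consumer float ring buffer living in the WASM
// heap (a SharedArrayBuffer when built with -pthread), so the main thread
// and the audio worklet thread can both touch it.
struct SampleRing {
    static constexpr size_t kCapacity = 8192;         // samples, power of two
    float data[kCapacity];
    std::atomic<size_t> writePos{0};
    std::atomic<size_t> readPos{0};

    // Main thread: push samples rendered from the LabSound graph.
    size_t push(const float* src, size_t n) {
        size_t w = writePos.load(std::memory_order_relaxed);
        size_t r = readPos.load(std::memory_order_acquire);
        size_t free = kCapacity - (w - r);
        if (n > free) n = free;                        // drop on overflow
        for (size_t i = 0; i < n; ++i)
            data[(w + i) & (kCapacity - 1)] = src[i];
        writePos.store(w + n, std::memory_order_release);
        return n;
    }

    // Worklet thread: pop one render quantum's worth of samples.
    size_t pop(float* dst, size_t n) {
        size_t r = readPos.load(std::memory_order_relaxed);
        size_t w = writePos.load(std::memory_order_acquire);
        size_t avail = w - r;
        if (n > avail) n = avail;                      // underrun: caller zero-fills the rest
        for (size_t i = 0; i < n; ++i)
            dst[i] = data[(r + i) & (kCapacity - 1)];
        readPos.store(r + n, std::memory_order_release);
        return n;
    }
};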

@Q-Qian
Contributor

Q-Qian commented Apr 10, 2024

Is there any way to compile LabSound to WASM now? I'm waiting for a good method.

@meshula
Member

meshula commented Apr 12, 2024

Reviewing all the options, a backend based on libSDL seems like the right answer. I think emsc is well integrated with SDL2. If anyone is working on an SDL backend and has progress to share, that would be most welcome.

@meshula
Member

meshula commented Apr 14, 2024

Looks like a backend won't be too difficult. I'm not able to start on this at the moment, but if someone wants to have a look, I think the miniaudio back end could be modified easily. A trivial SDL setup looks like this:

#include "sdlkit.h"

static void SDLAudioCallback(void *userdata, Uint8 *stream, int len)
{
	if (playing_sample && !mute_stream)
	{
		unsigned int l = len/2;
		float fbuf[l];
		memset(fbuf, 0, sizeof(fbuf));
		FetchSamples(l, fbuf, NULL);
		while (l--)
		{
			float f = fbuf[l];
			if (f < -1.0) f = -1.0;
			if (f > 1.0) f = 1.0;
			((Sint16*)stream)[l] = (Sint16)(f * 32767);
		}
	}
	else 
		memset(stream, 0, len);
}

void initSDLAudio() {
	SDL_AudioSpec des;
	des.freq = 44100;
	des.format = AUDIO_S16SYS;
	des.channels = 1;
	des.samples = 512;
	des.callback = SDLAudioCallback;
	des.userdata = NULL;
	VERIFY(!SDL_OpenAudio(&des, NULL));
	SDL_PauseAudio(0);
}
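As a follow-up note, under SDL2 (which Emscripten ships) the device-based API is the usual entry point, and float output matches what an audio graph renders internally. A minimal sketch under those assumptions; this is not existing LabSound backend code.

#include <SDL.h>
#include <cstring>

// Open a 2-channel float playback device and start the callback.
static SDL_AudioDeviceID openFloatDevice(SDL_AudioCallback cb, void* user)
{
    SDL_InitSubSystem(SDL_INIT_AUDIO);

    SDL_AudioSpec want, have;
    std::memset(&want, 0, sizeof(want));
    want.freq = 44100;
    want.format = AUDIO_F32SYS;   // floats straight from the render graph
    want.channels = 2;
    want.samples = 512;
    want.callback = cb;
    want.userdata = user;

    SDL_AudioDeviceID dev = SDL_OpenAudioDevice(nullptr, 0, &want, &have, 0);
    if (dev != 0)
        SDL_PauseAudioDevice(dev, 0);  // unpause to start pulling audio
    return dev;
}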

@Q-Qian
Contributor

Q-Qian commented Apr 15, 2024

I'll try it soon, thanks.
