
WebRTC instead of WebSockets #274

Open
satcom-uhf opened this issue Oct 17, 2021 · 9 comments

@satcom-uhf

Feature description
I think we can transfer waterfall data using WebRTC. Theoretically, it should be faster and, probably, more efficient. I'm not an expert though. Thank you.

Target audience
Wide-range SDRs (for example, 20 MHz)

@satcom-uhf satcom-uhf added the feature feature requests label Oct 17, 2021
@jketterl
Owner

Is there any information available on why, or in which way, this would be better than WebSockets?

@sbridger

sbridger commented Apr 18, 2022

I don't know anything about this, but that's not going to stop me commenting, as I am a StackOverflow Certified Professional.

My interest is the buffering delay that makes remote SDRs unusable for holding a conversation, or for trying to listen to both local and remote receivers at the same time. #293 (There is also a knock-on effect for all users: the UI feels very sluggish, because changes are only heard after the audio buffer delay.)

WebRTC claims two advantages:

  • P2P, not via a server: this doesn't help us, as we are connecting to a server.
  • WebRTC is mainly UDP, so the main reason for using WebRTC instead of WebSocket is latency. With WebSocket streaming you get either high latency or choppy playback at low latency; with WebRTC you may achieve low latency and smooth playback. https://stackoverflow.com/a/32507788

@jketterl
Owner

I don't see why everybody immediately thinks that UDP offers lower latency than TCP.

To stream data over UDP, you need to implement a custom system to number the transmitted packets so that they can be reassembled in the correct sequence at the receiving end, and you need custom handling for missing packets and the corresponding retransmits. As soon as you've done all that, you've lost all the advantages of UDP... and you've reinvented the wheel, because those things are already part of TCP.
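The bookkeeping described above can be made concrete with a minimal sketch. All names and the 4-byte header format are hypothetical; the point is only that streaming over raw UDP forces you to number and reorder packets yourself, which is exactly what TCP already does:

```python
import struct

def pack_packet(seq: int, payload: bytes) -> bytes:
    """Prepend a big-endian sequence number so the receiver can reorder datagrams."""
    return struct.pack("!I", seq) + payload

def reassemble(datagrams: list) -> bytes:
    """Sort received datagrams by sequence number and concatenate the payloads.
    Real code would also have to detect gaps and request retransmits -- at
    which point a large part of TCP has been rebuilt by hand."""
    parsed = [(struct.unpack("!I", d[:4])[0], d[4:]) for d in datagrams]
    parsed.sort(key=lambda p: p[0])
    return b"".join(payload for _, payload in parsed)

# Datagrams may arrive in any order; reassembly restores the original stream.
out_of_order = [pack_packet(2, b"lo"), pack_packet(0, b"he"), pack_packet(1, b"l")]
assert reassemble(out_of_order) == b"hello"
```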

@sbridger

To be clear, I'm not knowledgeable about WebRTC or the best way to do this; I'm just interested in getting a usable amount of delay. I am only scraping Stack Overflow for suggestions.

To stream data over UDP, you need to implement a custom system to number the transmitted packets so that they can be reassembled in the correct sequence at the receiving end, and you need custom handling for missing packets and the corresponding retransmits. As soon as you've done all that, you've lost all the advantages of UDP... and you've reinvented the wheel, because those things are already part of TCP.

Perhaps, in practice, you mostly get packets in sequence, and if you don't have a packet when the audio needs it, you just play a packet of silence. Late or out-of-sequence packets are dumped. Yes, if you then wait 5 seconds for a missing packet, it will take 5 seconds either way. For my use case, dropouts are better than long delay.
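The "play silence, dump late packets" policy suggested above can be sketched as follows. The frame size and function names are hypothetical; a real implementation would use a proper jitter buffer keyed on RTP-style timestamps:

```python
SILENCE = b"\x00" * 4  # one hypothetical 4-byte audio frame of silence

def next_frame(buffer: dict, play_seq: int) -> bytes:
    """Return the frame for play_seq, or silence if it has not arrived yet.
    Frames older than play_seq are discarded rather than waited for, so a
    lost packet costs one short dropout instead of stalling the stream."""
    for seq in [s for s in buffer if s < play_seq]:
        del buffer[seq]          # late / out-of-sequence packet: dump it
    return buffer.pop(play_seq, SILENCE)

buf = {0: b"AAAA", 2: b"CCCC"}   # frame 1 was lost in transit
assert next_frame(buf, 0) == b"AAAA"
assert next_frame(buf, 1) == SILENCE   # missing frame -> one frame of silence
assert next_frame(buf, 2) == b"CCCC"
```

The trade-off is exactly the one described: latency stays bounded, at the price of occasional audible gaps.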

I don't see why everybody immediately thinks that UDP offers lower latency than TCP.

As I understand it (and I might be wrong), TCP will request retransmission of missing packets automatically (and therefore must wait for them to come back), and UDP won't. This would mean that TCP must keep a large buffer while waiting for retransmissions. Thus you don't have the option of ignoring missing packets with TCP/WebSockets; it will always be slower in the presence of missing or delayed packets.

Key here is that, however they do it, VoIP and Skype-style web apps are now actually able to deliver low-latency audio without much in the way of dropouts. They certainly never end up with multi-second delay, and they do just drop out when the Wi-Fi signal is bad.

This says

  • WebSockets: only support reliable, in-order transport because they are built on TCP. This means a packet drop can delay all subsequent packets.
  • WebRTC: the transport layer is configurable, with the application able to choose whether the connection is in-order and/or reliable.

If TCP's in-order, no-missing-packets behaviour is the root of the delay and UDP is the solution, this says WebRTC is the only game in town for the browser.

There seem to be a lot of VoIP and audio things built on WebRTC, e.g. GStreamer. Perhaps it is not WebRTC itself that should be used, but a higher-level VoIP or streaming protocol built on top of it.

@jketterl
Owner

I think you're focusing too much on your use case. Switching to a VOIP approach will have negative effects on users that listen to radio stations, for example. As such, I don't think that's a good solution.

@sbridger

sbridger commented Apr 19, 2022

I think you're focusing too much on your use case.

True, but you challenged me to join this thread, and the broadcast use case works fine (except for the perceived sluggish UI issue).

There is nothing that says the same solution is needed for both realtime and broadcast. They are distinct use cases, and could use different transports. It is no problem if the user has to choose the mode or the amount of buffering that suits them. The broadcast case could keep the current system if that works best.

Using it as a remote receiver alongside a local transmitter and receiver is a major use case, and I would imagine that a lot of the WebSDRs are run by hams for whom it is a significant use case. The fairly small (and, here at least, falling) number of SDRs might be a clear indication that they don't in fact work well for the people who would deploy them.

Of course, TCP itself might not be the underlying cause of the excessive delays; it might well be fixable in other ways.
Delays of 0.5-1 s would greatly improve usability over the present 2-4 s. I can't see why TCP shouldn't be able to do that.

That said, when the delay gets sufficiently small, you might be able to do what I currently do with an analog remote: I wear stereo headphones with the local receiver in one ear and the remote receiver in the other. It is great for fading. Doing that with two SDRs would be excellent.

Voice quality is measured according to the ITU model; measurable quality of a call degrades rapidly where the mouth-to-ear delay latency exceeds 200 milliseconds.

Ping time to the remote SDR down country is currently 26 ms from here.
Getting under 200 ms antenna-to-speaker latency might be best done via UDP/WebRTC.
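The latency budget implied by those two figures can be worked through in a few lines. Only the 26 ms ping and the 200 ms ITU guideline come from the thread; the codec allowance is an illustrative guess:

```python
PING_MS = 26            # measured round trip to the remote SDR
ONE_WAY_MS = PING_MS / 2
CODEC_MS = 20           # hypothetical encode + decode time
TARGET_MS = 200         # ITU mouth-to-ear guideline quoted above

jitter_budget_ms = TARGET_MS - ONE_WAY_MS - CODEC_MS
# roughly 167 ms left for buffering -- far below the 2-4 s currently
# observed, so the network path itself is not the bottleneck here.
```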

WebRTC's transport layer is configurable, with the application able to choose whether the connection is in-order and/or reliable.

So perhaps WebRTC is able to do both jobs, and might be the correct long term solution. (I don't know enough about it)

@fabianfrz

Hi, I would like to add my knowledge here. WebRTC is there to connect two clients, and it requires infrastructure to do that. This means you need an additional server that tells client A that client B is reachable via X.

WebSocket, in contrast, is designed for communication between a client and a web server.
If you really think that UDP brings a lot (with a large window size, TCP is not hugely different from UDP, by the way), you should have a look at HTTP/3, which is based on QUIC. It also includes a technology that acts like a UDP version of WebSockets. However, it is questionable how long it will take to be widely supported.

@sbridger

you need an additional server that tells client a that client b is reachable via x.

Presumably, in this case, A is told by B that it is reachable at B. No third server required.


BTW, further experiments suggest the underlying issue is that TCP does not lose packets. Any lost or delayed packets stretch the buffering out to a maximum of about 4-5 s, and thereafter it remains at that maximum. I am not sure whether that is a function of the IP stack in Windows, or simply of the fact that the WebRX is pushing out audio data at exactly the rate the browser consumes it, so the buffer can never shrink back.
Perhaps there is an interim fix of a flush button in the UI?
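The flush idea can be sketched as a simple client-side policy: when the queued audio has stretched past a threshold, drop the oldest frames so latency snaps back. The names, the 20 ms frame duration, and the 0.5 s cap are all hypothetical:

```python
from collections import deque

FRAME_MS = 20          # hypothetical audio frame duration
MAX_BUFFER_MS = 500    # flush target: keep at most 0.5 s queued

def flush_excess(queue: deque) -> int:
    """Discard frames from the front until the queue is back under the cap.
    Returns the number of frames dropped: one audible skip, traded for
    latency that would otherwise stay pinned at its historical maximum."""
    dropped = 0
    while len(queue) * FRAME_MS > MAX_BUFFER_MS:
        queue.popleft()
        dropped += 1
    return dropped

q = deque(range(100))           # 100 frames = 2 s of backlog
assert flush_excess(q) == 75    # trimmed back to 25 frames = 0.5 s
```

Run automatically whenever the backlog grows, this would behave like the adaptive buffers in VoIP clients; wired to a button, it is the interim fix suggested above.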

@fabianfrz

presumably, in this case A is told by B that it is reachable at B. No third server required.

Nope, that's a different protocol (STUN), which is not HTTP as far as I know.
You can read more about WebRTC on MDN: https://developer.mozilla.org/en-US/docs/Web/API/WebRTC_API

@jketterl jketterl added idea and removed feature feature requests labels Mar 17, 2023