
update to pion v3 #21

Merged · 17 commits · merged into m1k1o:dev on Feb 15, 2021

Conversation

@mbattista (Contributor)

Hi,

another pull request, this time an update to pion v3.

While the update is complete and could be merged already, I want to integrate some kind of "NACK" handling, hoping this will fix nurdism/neko#48.
Let me know if I should do this in a different pull request or if it can be done in this one.

Nonetheless, setting a bitrate for h264 made the WebRTC stream less 'jumpy', which led to less loss of UDP packets and therefore a less laggy stream over the internet (on a local network everything is smooth).
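For illustration, the cap can be expressed directly in the pipeline string. A minimal sketch in Go, where the element names and the bitrate value are assumptions, not neko's exact pipeline:

```go
// Sketch of a GStreamer pipeline string with an explicit H264 bitrate cap.
// "ximagesrc" and the x264enc settings are illustrative assumptions; the
// x264enc bitrate property is given in kbit/s.
package main

import "fmt"

func h264Pipeline(bitrateKbps int) string {
	// A capped, steadier bitrate produces a smoother packet rate, which in
	// practice means fewer dropped UDP packets on WAN links.
	return fmt.Sprintf(
		"ximagesrc use-damage=false show-pointer=true "+
			"! videoconvert ! queue "+
			"! x264enc bitrate=%d tune=zerolatency speed-preset=veryfast "+
			"! video/x-h264,stream-format=byte-stream",
		bitrateKbps,
	)
}
```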

I didn't bump the version in this pull request. If you think this should be a higher version, let me know.

[resolved review thread on client/src/neko/base.ts (outdated)]
@m1k1o (Owner) commented Feb 14, 2021

Hey, thanks! This looks very good. Let me try it out, do a light review, and then I guess we can merge it. For the "NACK" handling, I guess a new PR would be good.

@m1k1o (Owner) commented Feb 14, 2021

Why is another data channel being created here, if right below an event is triggered that fetches the data channel from the user? In that case, the user doesn't need to create their own channel here. Right now two data channels are created but only one is used? Do I understand it right?

I'd suggest adding an ondatachannel event for the client, where it listens for the server's data channel, and removing the server's OnDataChannel.

This could then be simplified to this:

	dataChannel, err := connection.CreateDataChannel("data", &webrtc.DataChannelInit{
		Negotiated: &negotiated,
	})

	if err != nil {
		return "", manager.config.ICELite, manager.config.ICEServers, viderr
	}

	dataChannel.OnMessage(func(msg webrtc.DataChannelMessage) {
		if err = manager.handle(id, msg); err != nil {
			manager.logger.Warn().Err(err).Msg("data handle failed")
		}
	})

@mbattista (Contributor, Author)

https://github.com/pion/webrtc/wiki/Release-WebRTC@v3.0.0
"A data channel is no longer implicitly created with a PeerConnection"

Without creating the standard DataChannel, Remote Control did not work for me.

[resolved review threads on server/internal/gst/gst.go (outdated)]
@m1k1o (Owner) commented Feb 14, 2021

> https://github.com/pion/webrtc/wiki/Release-WebRTC@v3.0.0
> "A data channel is no longer implicitly created with a PeerConnection"
>
> Without creating the standard DataChannel, Remote Control did not work for me.

That should be fine on the server side, where the data channel would be created. The client would just wait for a channel instead of creating one. I'm going to try to implement it that way.

@m1k1o (Owner) commented Feb 14, 2021

This would be my proposal:

diff --git a/client/src/neko/base.ts b/client/src/neko/base.ts
index 9128af4..ec5ea26 100644
--- a/client/src/neko/base.ts
+++ b/client/src/neko/base.ts
@@ -142,8 +142,8 @@ export abstract class BaseClient extends EventEmitter<BaseEvents> {
     }
 
     // @ts-ignore
-    if (typeof buffer !== 'undefined') {
-      this._channel!.send(buffer)
+    if (typeof buffer !== 'undefined' && typeof this._channel != 'undefined') {
+      this._channel.send(buffer)
     }
   }
 
@@ -211,14 +211,10 @@ export abstract class BaseClient extends EventEmitter<BaseEvents> {
     }
 
     this._peer.ontrack = this.onTrack.bind(this)
+    this._peer.ondatachannel = this.onDataChannel.bind(this)
     this._peer.addTransceiver('audio', { direction: 'recvonly' })
     this._peer.addTransceiver('video', { direction: 'recvonly' })
 
-    this._channel = this._peer.createDataChannel('data')
-    this._channel.onerror = this.onError.bind(this)
-    this._channel.onmessage = this.onData.bind(this)
-    this._channel.onclose = this.onDisconnected.bind(this, new Error('peer data channel closed'))
-
     this._peer.setRemoteDescription({ type: 'offer', sdp })
     this._peer
       .createAnswer()
@@ -279,6 +275,15 @@ export abstract class BaseClient extends EventEmitter<BaseEvents> {
     this[EVENT.TRACK](event)
   }
 
+  private onDataChannel(event: RTCDataChannelEvent) {
+    this.emit('debug', `received datachannel from peer`, event)
+
+    this._channel = event.channel
+    this._channel.onerror = this.onError.bind(this)
+    this._channel.onmessage = this.onData.bind(this)
+    this._channel.onclose = this.onDisconnected.bind(this, new Error('peer data channel closed'))
+  }
+
   private onError(event: Event) {
     this.emit('error', (event as ErrorEvent).error)
   }
diff --git a/server/internal/webrtc/webrtc.go b/server/internal/webrtc/webrtc.go
index 2fd4e7f..10b1b45 100644
--- a/server/internal/webrtc/webrtc.go
+++ b/server/internal/webrtc/webrtc.go
@@ -120,16 +120,16 @@ func (manager *WebRTCManager) CreatePeer(id string, session types.Session) (stri
 	if err != nil {
 		return "", manager.config.ICELite, manager.config.ICEServers, err
 	}
-	negotiated := true
-	connection.CreateDataChannel("data", &webrtc.DataChannelInit{
-		Negotiated: &negotiated,
-	})
-	connection.OnDataChannel(func(d *webrtc.DataChannel) {
-		d.OnMessage(func(msg webrtc.DataChannelMessage) {
-			if err = manager.handle(id, msg); err != nil {
-				manager.logger.Warn().Err(err).Msg("data handle failed")
-			}
-		})
+
+	dataChannel, err := connection.CreateDataChannel("data", nil)
+	if err != nil {
+		return "", manager.config.ICELite, manager.config.ICEServers, err
+	}
+
+	dataChannel.OnMessage(func(msg webrtc.DataChannelMessage) {
+		if err = manager.handle(id, msg); err != nil {
+			manager.logger.Warn().Err(err).Msg("data handle failed")
+		}
 	})
 
 	// Set the handler for ICE connection state

But I just found out that your solution with negotiated: true uses just that one data channel and doesn't create multiple ones. So there is no reason to modify it.
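For reference, a sketch of how such a pre-negotiated channel pairs up in pion v3: with Negotiated set, each peer constructs the channel locally and no in-band open message is sent, so only one channel ever exists. The explicit ID is an assumption here, since both sides have to agree on one:

```go
// Sketch: pre-negotiated data channel in pion v3 (github.com/pion/webrtc/v3).
// Both sides create the channel themselves under an agreed ID, so no second,
// in-band negotiated channel ever appears. The ID value 0 is illustrative.
package main

import "github.com/pion/webrtc/v3"

func negotiatedChannel(pc *webrtc.PeerConnection) (*webrtc.DataChannel, error) {
	negotiated := true
	id := uint16(0)
	// The browser side mirrors this with:
	//   peer.createDataChannel('data', { negotiated: true, id: 0 })
	return pc.CreateDataChannel("data", &webrtc.DataChannelInit{
		Negotiated: &negotiated,
		ID:         &id,
	})
}
```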

@m1k1o (Owner) commented Feb 14, 2021

After an unexpected reconnect, I get this spam in the logs (many times a second). I think it has something to do with the new ICE Restart feature. It does not close the connection on failure, so I guess we need to do it ourselves now.

[screenshot of the repeated log output]
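If we have to close it ourselves, a sketch of what that could look like in pion v3, reusing the connection and manager variables from CreatePeer:

```go
// Sketch: tear the peer down ourselves once ICE reaches "failed", since
// pion v3 no longer closes the connection for us. Assumes the connection
// and manager variables from CreatePeer.
connection.OnICEConnectionStateChange(func(state webrtc.ICEConnectionState) {
	if state == webrtc.ICEConnectionStateFailed {
		if err := connection.Close(); err != nil {
			manager.logger.Warn().Err(err).Msg("peer connection close failed")
		}
	}
})
```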

@mbattista (Contributor, Author)

> After an unexpected reconnect, I get this spam in the logs (many times a second). I think it has something to do with the new ICE Restart feature. It does not close the connection on failure, so I guess we need to do it ourselves now.
>
> [screenshot of the repeated log output]

It is continuing, I guess on

Ugh, good find.
I did not run any reconnection tests.

I will look into this later tonight.

@m1k1o (Owner) commented Feb 14, 2021

I'll make just a slight modification, e.g. clockRate is not needed anymore, so I'll purge that. And also add a bitrate to VP8. Just some small tweaks.

@mbattista (Contributor, Author)

After going through the tests of pion/interceptor#4 again, I saw that it already has the buffer and should theoretically resend the packets on NACK (https://github.com/pion/interceptor/blob/master/pkg/nack/responder_interceptor.go#L67). So I removed all code which had to do with NACKs.
Since I do not know where in Chrome / Firefox I can see how many NACKs have been answered, I am not sure whether this has any effect at all. I do not see any logging within "resendPackets", so I really don't know how many packets are resent, if any.
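For reference, this is roughly how that responder gets wired into pion v3. Hedged, since the ConfigureNack helper's availability depends on the exact v3 minor version in use:

```go
// Sketch: enabling NACK handling in pion v3. ConfigureNack registers both
// the NACK generator (which requests retransmissions) and the responder
// interceptor (which buffers outgoing packets and resends them on NACK).
package main

import (
	"github.com/pion/interceptor"
	"github.com/pion/webrtc/v3"
)

func newAPIWithNack() (*webrtc.API, error) {
	mediaEngine := &webrtc.MediaEngine{}
	if err := mediaEngine.RegisterDefaultCodecs(); err != nil {
		return nil, err
	}

	registry := &interceptor.Registry{}
	if err := webrtc.ConfigureNack(mediaEngine, registry); err != nil {
		return nil, err
	}

	return webrtc.NewAPI(
		webrtc.WithMediaEngine(mediaEngine),
		webrtc.WithInterceptorRegistry(registry),
	), nil
}
```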

But I found another bug which should be addressed, and perhaps you have some idea how to change it.
Without a specific framerate, GStreamer seems to default to 25fps while encoding raw video. Since the user can change the framerate through the frontend, it would be nice if this had real consequences.
Do you have any ideas how we could change the GStreamer parameters without stopping and starting the pipeline?

The lags (which are the main bug I tried to fix) still persist even with the NACK interceptor. Perhaps the bug is somewhere in pion webrtc, since I can reproduce it in https://github.com/GRVYDEV/Project-Lightspeed.
Another fun fact is that EVERYONE who is connected over WAN seems to lag at the same positions in the video, while on LAN there is no lag at all.
I will try tomorrow whether a switch to NewTrackLocalStaticRTP has any effect.
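For context, the difference between the two local track flavours in pion v3, as I understand it (track and stream IDs are placeholders):

```go
// Sketch: the two local track flavours in pion v3. Track ID and stream ID
// ("video", "stream") are placeholder values.
package main

import "github.com/pion/webrtc/v3"

func newVideoTrack() (*webrtc.TrackLocalStaticRTP, error) {
	// TrackLocalStaticRTP: we write ready-made packets via WriteRTP and they
	// are sent as-is. TrackLocalStaticSample would instead take raw samples
	// via WriteSample and let pion packetize them.
	return webrtc.NewTrackLocalStaticRTP(
		webrtc.RTPCodecCapability{MimeType: webrtc.MimeTypeVP8},
		"video", "stream",
	)
}
```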

Can you check whether the last commit fixed the ICERestart problem, or whether changes are also needed in the frontend?

@m1k1o (Owner) commented Feb 14, 2021

Great. Less code == less bugs. Maybe chrome://webrtc-internals/ in Chrome could help you get more insight into what is happening in the background.

Without restarting the pipeline, I don't think it is possible. But it could be restarted in the same way as it is restarted for a screen size change, taking the fps from the screen configuration.

With that ICERestart it is not working at all. I guess it is meant for creating a new response, but only when one already exists. I get this error, and that's it.

[screenshot of the error]

I will look into how this problem can be addressed tomorrow.

I observed those minor lags too, but I thought it was just my internet connection. But when you say those lags are the same for everyone, the cause may be rooted somewhere deep in the pion implementation.

This reverts commit 646e8af.
@m1k1o (Owner) commented Feb 15, 2021

In fact, it is even simpler. Because those pipelines already restart on screen size change, we can just pass the fps directly to the pipeline and it should work.
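A sketch of what that could look like, assuming the pipeline is rebuilt from a format string on each restart (element names are illustrative, not neko's exact pipeline):

```go
// Sketch: injecting the configured framerate into the pipeline string, so a
// framerate change is handled exactly like a screen-size change: by
// rebuilding the pipeline. Element names are illustrative.
package main

import "fmt"

func videoPipeline(width, height, fps int) string {
	return fmt.Sprintf(
		"ximagesrc use-damage=false show-pointer=true "+
			"! video/x-raw,width=%d,height=%d "+
			"! videorate ! video/x-raw,framerate=%d/1 "+
			"! videoconvert ! queue",
		width, height, fps,
	)
}
```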

@m1k1o (Owner) commented Feb 15, 2021

I was not able to replicate that reconnection failure again. Maybe removing that piece of code helped. But there is something new:

[screenshot of the new error]

After many reconnection attempts, there is eventually some race.

@m1k1o (Owner) commented Feb 15, 2021

Hopefully the WebRTC timeouts fixed that problem. It is no longer reproducible for me. Merging this PR.
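Presumably those timeouts go through pion's SettingEngine; a sketch with placeholder durations:

```go
// Sketch: ICE timeouts via pion v3's SettingEngine, so dead peers are marked
// disconnected/failed and cleaned up instead of lingering after a botched
// reconnect. All durations here are placeholder values.
package main

import (
	"time"

	"github.com/pion/webrtc/v3"
)

func newAPIWithTimeouts() *webrtc.API {
	settings := webrtc.SettingEngine{}
	// Arguments: disconnectedTimeout, failedTimeout, keepAliveInterval.
	settings.SetICETimeouts(6*time.Second, 30*time.Second, 2*time.Second)
	return webrtc.NewAPI(webrtc.WithSettingEngine(settings))
}
```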

m1k1o merged commit 1834186 into m1k1o:dev on Feb 15, 2021
@mbattista (Contributor, Author)

Thanks for cleaning up and improving the PR this much.

I will try to tackle the lags again and make another PR if I find a solution.

May close: [ENHANCEMENT] Better/Smoother playback from rtc stream