Possible explanation for difference in regularity of packets received from a small latency vs large latency buffer? #2911

Open · oviano opened this issue Mar 9, 2024 · 38 comments
Labels: Type: Question (Questions or things that require clarification)

@oviano (Contributor) commented Mar 9, 2024

I am streaming from a custom server application to a custom player, and the following scenario is over the LAN:

Latency set to 2000ms - no packets lost, video and audio packets decoded and playback is smooth
Latency set to 100ms - no packets lost, video and audio packets decoded but playback is not smooth
Latency set to 200ms - no packets lost, video and audio packets decoded but playback is better than with a 100ms buffer but still occasional stutter

The reason it is not smooth seems to be that, from time to time, some packets are received from SRT either earlier or later than expected (I'm not sure which yet) when using the small buffer, so my video player has to adjust and the result is stutter.

This probably isn't a new thing; lately I've been trying to see how small I can get the end-to-end server-to-player latency. Previously I had a 100ms "jitter" buffer in my video player code, which likely concealed this oddity. Having removed this buffer, and trying to feed packets more directly into my video player code in order to reduce latency, I now see this issue - but only with a small buffer, and that's the bit I don't understand.

This was observed on an Apple TV. The server is Linux Jetson (ARM).

oviano added the Type: Question label Mar 9, 2024
@ethouris (Collaborator) commented:

The only way to see what is happening here is to grab the time at the moment of scheduling a packet (let's say the automatic time pickup is in use) and the corresponding time at the receiver side, when the packet is given up to the application (there's the PLAYING phrase in the debug log). This could then be put into a spreadsheet, comparing the times of subsequent packets. These differences between the same packets (which can be identified by sequence numbers) should be very much the same on both sides. If there are any visible differences, then the logs around this value should be found and checked for what ingredients were used to calculate it (these details should be in the debug logs).

Anyway, the whole idea of TSBPD and the live mode is that the time difference between two subsequent packets should be exactly the same on the sender (taking the application-send time) and on the receiver (taking the time when the receiving application gets the packet). If it's not, then something is wrong.

If you could help a bit more, you might also try the same with some older version, like the last release of the 1.3 line.
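
A minimal sketch of that spreadsheet-style comparison, done offline in code instead (the log-parsing step is omitted and all names here are assumptions):

    // Hypothetical helper: given (seqno -> time in us) maps collected from
    // the sender's scheduling logs and the receiver's PLAYING logs, print
    // the inter-packet interval on each side and their per-packet difference.
    #include <cstdint>
    #include <cstdio>
    #include <map>

    int main()
    {
        std::map<uint32_t, int64_t> sendTimeUs, playTimeUs; // fill from parsed logs

        int64_t prevSend = -1, prevPlay = -1;
        for (const auto& [seq, sendTs] : sendTimeUs)
        {
            const auto it = playTimeUs.find(seq);
            if (it == playTimeUs.end())
                continue; // dropped packet or missing log line
            if (prevSend >= 0)
            {
                const long long dSend = sendTs - prevSend;
                const long long dPlay = it->second - prevPlay;
                // In live/TSBPD mode dSend and dPlay should be nearly equal.
                std::printf("%u: send=%lldus play=%lldus diff=%lldus\n",
                            seq, dSend, dPlay, dPlay - dSend);
            }
            prevSend = sendTs;
            prevPlay = it->second;
        }
        return 0;
    }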

@oviano (Contributor, Author) commented Mar 10, 2024

Thanks! I'm busy investigating this at the moment in a similar manner to how you describe, although using the PTS of my custom payload representing audio/video packets to identify them.

When my latency is 100ms, SRT seems to be occasionally giving up packets to the application that have arrived more than 100ms later than expected, rather than dropping them. The typical pattern is a long gap (>100ms) followed by a quick succession of packets as it catches up. Is there some element of tolerance here? Or could it be that I'm not reading packets from SRT fast enough, and that this leads to it giving up packets late that ought to have been dropped?

The fact that this issue reduces and then goes away as I increase the latency suggests these are packets that were meant to be dropped, but for some reason aren't being.

I still need to verify the timings on the server so I'm still figuring this out.

@oviano (Contributor, Author) commented Mar 10, 2024

So this is the high-level overview. As you can see, the intervals between calls to SRT send on the server are fairly constant, which I would expect from the encoder output.

Yet the intervals between the player receives are far more variable. This particular sample doesn't break the player, but the variations can be more extreme, such that the player is starved of packets and then gets a load in one go and has to adjust to this (= stutter).

Interestingly (or not), I get the same pattern as below whether the server is Windows or Linux (Jetson ARM), and whether the player is tvOS or macOS.

Next stage is to investigate the SRT logs, I suppose.

INTERVALS BETWEEN SERVER SEND

*** VIDEO DTS 23433519574 INT 20366us ***
*** VIDEO DTS 23433539574 INT 19659us ***
*** VIDEO DTS 23433559574 INT 20206us ***
*** VIDEO DTS 23433579574 INT 19610us ***
*** VIDEO DTS 23433599574 INT 20301us ***
*** VIDEO DTS 23433619574 INT 19763us ***
*** VIDEO DTS 23433639574 INT 20202us ***
*** VIDEO DTS 23433659574 INT 19836us ***
*** VIDEO DTS 23433679574 INT 20221us ***
*** VIDEO DTS 23433699574 INT 19608us ***
*** VIDEO DTS 23433719574 INT 20623us ***
*** VIDEO DTS 23433739574 INT 19553us ***
*** VIDEO DTS 23433759574 INT 20323us ***
*** VIDEO DTS 23433779574 INT 19653us ***
*** VIDEO DTS 23433799574 INT 20177us ***
*** VIDEO DTS 23433819574 INT 19674us ***
*** VIDEO DTS 23433839574 INT 20332us ***
*** VIDEO DTS 23433859574 INT 19856us ***
*** VIDEO DTS 23433879574 INT 20496us ***
*** VIDEO DTS 23433899574 INT 19802us ***

INTERVALS BETWEEN PLAYER RECEIVE

*** VIDEO DTS 23433519574 INT 24765us ***
*** VIDEO DTS 23433539574 INT 22273us ***
*** VIDEO DTS 23433559574 INT 25358us ***
*** VIDEO DTS 23433579574 INT 12789us ***
*** VIDEO DTS 23433599574 INT 23753us ***
*** VIDEO DTS 23433619574 INT 10677us ***
*** VIDEO DTS 23433639574 INT 23835us ***
*** VIDEO DTS 23433659574 INT 25037us ***
*** VIDEO DTS 23433679574 INT 23166us ***
*** VIDEO DTS 23433699574 INT 8323us ***
*** VIDEO DTS 23433719574 INT 21857us ***
*** VIDEO DTS 23433739574 INT 22790us ***
*** VIDEO DTS 23433759574 INT 25595us ***
*** VIDEO DTS 23433779574 INT 13046us ***
*** VIDEO DTS 23433799574 INT 23851us ***
*** VIDEO DTS 23433819574 INT 12400us ***
*** VIDEO DTS 23433839574 INT 25321us ***
*** VIDEO DTS 23433859574 INT 24835us ***
*** VIDEO DTS 23433879574 INT 9566us ***
*** VIDEO DTS 23433899574 INT 19878us ***

maxsharabayko added this to the v1.5.4 milestone Mar 11, 2024
@ethouris (Collaborator) commented Mar 11, 2024

Yeah. Just note that it is crucial to set the latency to a value that doesn't conflict with the current RTT. The RTT measurement is also visible in the logs, but is reported in the stats.

SRT only tries to add a delay to a received data packet if it has arrived EARLIER than its play time; if the packet arrived later than this time, it's still delivered. SRT doesn't drop packets that have arrived. It drops only those packets that haven't arrived by the time the NEXT packet is ready to play.

Note that RTT should give you an idea of how big the STT (single trip time) is, or in general the "sending delay" from one machine to another. This document explains the details.

The latency can just as well be set to 0, because the delay of the first packet is included in the formula. But if you set it to 0, you must somehow be certain that every next packet will be delayed in the sending process over the network exactly as long as the very first packet, or at least not longer (and obviously that you don't lose packets, as with latency 0 every lost packet will be dropped). It's then not exactly the overall value of the RTT that causes the problem, but its variance. The recommendation to set the latency as a function of the measured RTT is not due to the RTT value itself, but because the higher the RTT, the more it can potentially vary.
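
As an illustration of that delivery/drop rule (a simplified model, not SRT's actual code):

    #include <chrono>

    using steady_clock = std::chrono::steady_clock;

    // An arrived packet is always delivered, even if late; a missing
    // packet is dropped only once the NEXT packet is ready to play.
    struct Slot { bool arrived; steady_clock::time_point playTime; };

    enum class Action { Wait, Deliver, DropGapAndAdvance };

    Action tsbpdDecision(const Slot& head, const Slot& next,
                         steady_clock::time_point now)
    {
        if (head.arrived)
            return now >= head.playTime ? Action::Deliver : Action::Wait;

        // Head missing: give retransmission a chance until the next
        // packet is due, then drop the gap so playback can continue.
        if (next.arrived && now >= next.playTime)
            return Action::DropGapAndAdvance;

        return Action::Wait;
    }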

@oviano (Contributor, Author) commented Mar 11, 2024

Hmm thanks for the explanation.

In this case, this is over the LAN, so the latency of 100ms shouldn't be an issue.

I'm working on producing some (heavy) logs.

@ethouris (Collaborator) commented:

Well, I have been trying this out in a LAN too (that's my first-hand testing environment), and I can only tell you that the variance of the packet delivery time can be surprisingly high. I learned this the hard way when developing the backup-type groups: I counted on the ACK, which is sent every 10ms, being delayed by up to 20ms, or at worst 50ms - it's a LAN, after all. After the first tests I learned that I couldn't have been more wrong.

@oviano (Contributor, Author) commented Mar 11, 2024

A slight tangent - the way my application works is it initially opens up 4 x SRT connections to the server, each of which is intended for a different profile/latency. This is to facilitate rapid switching between them. Only one is active at a time, the other 3 remain idle.

As an experiment/attempt to strip things down to the minimum, I just disabled this and made it only open one connection.

Take a look at the difference. I suppose this could be overhead from the additional threads and context switching (although this is on a pretty powerful M3 Pro MacBook Pro) - or is there some sort of contention between SRT connections that I'm overlooking?

1 x ACTIVE SRT CONNECTION, 3 x IDLE

*** VIDEO DTS 64243639662 INT 25330us ***
*** VIDEO DTS 64243659662 INT 13042us ***
*** VIDEO DTS 64243679662 INT 25196us ***
*** VIDEO DTS 64243699662 INT 23867us ***
*** VIDEO DTS 64243719662 INT 7573us ***
*** VIDEO DTS 64243739662 INT 25727us ***
*** VIDEO DTS 64243759662 INT 23692us ***
*** VIDEO DTS 64243779662 INT 11921us ***
*** VIDEO DTS 64243799662 INT 21866us ***
*** VIDEO DTS 64243819662 INT 25963us ***
*** VIDEO DTS 64243839662 INT 12243us ***
*** VIDEO DTS 64243859662 INT 21750us ***
*** VIDEO DTS 64243879662 INT 25030us ***
*** VIDEO DTS 64243899662 INT 12939us ***
*** VIDEO DTS 64243919662 INT 21995us ***
*** VIDEO DTS 64243939662 INT 25022us ***
*** VIDEO DTS 64243959662 INT 25715us ***
*** VIDEO DTS 64243979662 INT 13418us ***
*** VIDEO DTS 64243999662 INT 25696us ***
*** VIDEO DTS 64244019662 INT 12148us ***
*** VIDEO DTS 64244039662 INT 13882us ***
*** VIDEO DTS 64244059662 INT 36185us ***
*** VIDEO DTS 64244079662 INT 13159us ***
*** VIDEO DTS 64244099662 INT 12594us ***
*** VIDEO DTS 64244119662 INT 27388us ***
*** VIDEO DTS 64244139662 INT 12517us ***
*** VIDEO DTS 64244159662 INT 21777us ***
*** VIDEO DTS 64244179662 INT 19401us ***
*** VIDEO DTS 64244199662 INT 26891us ***
*** VIDEO DTS 64244219662 INT 13353us ***
*** VIDEO DTS 64244239662 INT 20014us ***
*** VIDEO DTS 64244259662 INT 27137us ***
*** VIDEO DTS 64244279662 INT 11276us ***
*** VIDEO DTS 64244299662 INT 27346us ***
*** VIDEO DTS 64244319662 INT 13045us ***
*** VIDEO DTS 64244339662 INT 24614us ***
*** VIDEO DTS 64244359662 INT 26833us ***
*** VIDEO DTS 64244379662 INT 6270us ***
*** VIDEO DTS 64244399662 INT 23543us ***
*** VIDEO DTS 64244419662 INT 27231us ***
*** VIDEO DTS 64244439662 INT 11248us ***
*** VIDEO DTS 64244459662 INT 23888us ***
*** VIDEO DTS 64244479662 INT 27049us ***
*** VIDEO DTS 64244499662 INT 13245us ***
*** VIDEO DTS 64244519662 INT 25286us ***
*** VIDEO DTS 64244539662 INT 12450us ***
*** VIDEO DTS 64244559662 INT 26950us ***
*** VIDEO DTS 64244579662 INT 10782us ***
*** VIDEO DTS 64244599662 INT 26583us ***
*** VIDEO DTS 64244619662 INT 26074us ***

1 x ACTIVE SRT CONNECTION

*** VIDEO DTS 64383595620 INT 20196us ***
*** VIDEO DTS 64383615620 INT 19904us ***
*** VIDEO DTS 64383635620 INT 20100us ***
*** VIDEO DTS 64383655620 INT 19420us ***
*** VIDEO DTS 64383675620 INT 20210us ***
*** VIDEO DTS 64383695620 INT 19812us ***
*** VIDEO DTS 64383715620 INT 20780us ***
*** VIDEO DTS 64383735620 INT 18784us ***
*** VIDEO DTS 64383755620 INT 20828us ***
*** VIDEO DTS 64383775620 INT 18769us ***
*** VIDEO DTS 64383795620 INT 17670us ***
*** VIDEO DTS 64383815620 INT 24457us ***
*** VIDEO DTS 64383835620 INT 18926us ***
*** VIDEO DTS 64383855620 INT 20926us ***
*** VIDEO DTS 64383875620 INT 19779us ***
*** VIDEO DTS 64383895620 INT 19746us ***
*** VIDEO DTS 64383915620 INT 20457us ***
*** VIDEO DTS 64383935620 INT 16381us ***
*** VIDEO DTS 64383955620 INT 24041us ***
*** VIDEO DTS 64383975620 INT 18499us ***
*** VIDEO DTS 64383995620 INT 20594us ***
*** VIDEO DTS 64384015620 INT 19158us ***
*** VIDEO DTS 64384035620 INT 16826us ***
*** VIDEO DTS 64384055620 INT 24187us ***
*** VIDEO DTS 64384075620 INT 17506us ***
*** VIDEO DTS 64384095620 INT 22636us ***
*** VIDEO DTS 64384115620 INT 19008us ***
*** VIDEO DTS 64384135620 INT 16566us ***
*** VIDEO DTS 64384155620 INT 21345us ***
*** VIDEO DTS 64384175620 INT 18339us ***
*** VIDEO DTS 64384195620 INT 24857us ***
*** VIDEO DTS 64384215620 INT 18459us ***
*** VIDEO DTS 64384235620 INT 17002us ***
*** VIDEO DTS 64384255620 INT 24576us ***
*** VIDEO DTS 64384275620 INT 19082us ***
*** VIDEO DTS 64384295620 INT 19338us ***
*** VIDEO DTS 64384315620 INT 20673us ***
*** VIDEO DTS 64384335620 INT 19598us ***
*** VIDEO DTS 64384355620 INT 20191us ***
*** VIDEO DTS 64384375620 INT 19894us ***
*** VIDEO DTS 64384395620 INT 20089us ***
*** VIDEO DTS 64384415620 INT 20125us ***
*** VIDEO DTS 64384435620 INT 20875us ***
*** VIDEO DTS 64384455620 INT 19231us ***
*** VIDEO DTS 64384475620 INT 20335us ***
*** VIDEO DTS 64384495620 INT 19450us ***
*** VIDEO DTS 64384515620 INT 16689us ***
*** VIDEO DTS 64384535620 INT 24439us ***
*** VIDEO DTS 64384555620 INT 18961us ***

@oviano (Contributor, Author) commented Mar 11, 2024

So this player-side log has some examples of PLAYING PACKET lines with very high "belated" values.

By the way, I cannot enable heavy logging on the server as it slows down everything too much (it's relatively low-powered, Jetson ARM). However, I know that I am submitting packets regularly to SRT on the server.

For example

11/03/24 11:12:11.999637 [srt] DEBUG : 11:12:11.999625/T0x171797000 D:SRT.ts: @143498908: tsbpd: PLAYING PACKET seq=90071056 (belated 305ms)

srt.log

@ethouris (Collaborator) commented:

Yeah, so this can explain things. But then, please find the line that shows this same packet's times at the moment of reception. This is the log line containing the RECEIVED phrase, and it should show the ETS value. The value is in a different clock space than the log clock, but it can at least show you whether the packet arrived after its expected arrival time or not.

@ethouris (Collaborator) commented:

This is the log in question:

       HLOGC(qrlog.Debug, log << CONID() << "processData: RECEIVED DATA: size=" << packet.getLength()
           << " seq=" << packet.getSeqNo()
           // XXX FIX IT. OTS should represent the original sending time, but it's relative.
           //<< " OTS=" << FormatTime(packet.getMsgTimeStamp())
           << " ETS=" << FormatTime(ets)
           << " PTS=" << FormatTime(pts));

(And it might be that you can restore this OTS; I'm just not sure whether it's the right time that should be displayed.)

@oviano (Contributor, Author) commented Mar 11, 2024

So I'm seeing two things.

  1. irregularity of all received packets when there are multiple SRT connections - reducing to one connection makes things a lot more consistent;
  2. an occasional crazy-late PLAYING PACKET, e.g. 305ms in the log above - this still happens even when there is only one SRT connection.

@ethouris (Collaborator) commented:

Yeah, and you should check all other logs that mention the packet with the same sequence number. This should shed some light on when exactly this packet has arrived over the network and what is the cause of it being so delayed.

NOTE: AFAIK the current code still uses the old manner of ACK-eclipsing, that is, packets stay unavailable to the reader, even if they are ready to play, if the ACK position hasn't moved past them. I changed that behavior in the #2527 PR. It's highly experimental and Max has already found that it suffers from performance problems (so there's still a lot to be fixed there), but you might try it out to see whether it does a better job.
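
For illustration, the gating described in that NOTE amounts to something like this (a simplified sketch, not SRT's real internals; sequence-number wrap-around is ignored):

    #include <cstdint>

    // Pre-#2527 behavior as described above: a packet becomes readable
    // only when it is past its play time AND the ACK position has moved
    // beyond it. The PR removes the second gate.
    bool isReadable(int64_t nowUs, int64_t playTimeUs,
                    uint32_t seqno, uint32_t ackPos)
    {
        const bool timeReady = nowUs >= playTimeUs;
        const bool ackReady  = seqno < ackPos; // wrap-around ignored
        return timeReady && ackReady;
    }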

@oviano (Contributor, Author) commented Mar 11, 2024

So taking the packet that was 305ms late, sequence number 90071056, it appears in the log in the places below.

So is the bottom line basically that a range of packets just didn't arrive, and then all arrived in quick succession?

11/03/24 11:12:11.635826 [srt] DEBUG : 11:12:11.635823/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.635837 [srt] DEBUG : 11:12:11.635833/T0x17170b000 D:SRT.xt: @143498908: sendCtrlAck: %90071054 -> %90071056 (2 packets)
11/03/24 11:12:11.635845 [srt] DEBUG : 11:12:11.635842/T0x17170b000 D:SRT.xt: @143498908: ACK: clip %90071056-%90071056, REVOKED 0 from RCV buffer
11/03/24 11:12:11.635859 [srt] DEBUG : 11:12:11.635853/T0x17170b000 D:SRT.in: pack: type=ack ARG=791 [ 90071056 2534 222 25594 41 2 3586 ]
11/03/24 11:12:11.635877 [srt] DEBUG : 11:12:11.635865/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15824530 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x317 791 [ 90071056 2534 222 25594 41 2 3586 ]
11/03/24 11:12:11.635950 [srt] DEBUG : 11:12:11.635945/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (3): 90071054 90071055 90071056
11/03/24 11:12:11.647081 [srt] DEBUG : 11:12:11.647078/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.647093 [srt] DEBUG : 11:12:11.647088/T0x17170b000 D:SRT.in: pack: type=ack ARG=792 [ 90071056 2534 222 25596 41 2 3586 ]
11/03/24 11:12:11.647112 [srt] DEBUG : 11:12:11.647099/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15835764 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x318 792 [ 90071056 2534 222 25596 41 2 3586 ]
11/03/24 11:12:11.647200 [srt] DEBUG : 11:12:11.647194/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.658320 [srt] DEBUG : 11:12:11.658317/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.658333 [srt] DEBUG : 11:12:11.658327/T0x17170b000 D:SRT.in: pack: type=ack ARG=793 [ 90071056 2534 222 25597 41 2 3586 ]
11/03/24 11:12:11.658351 [srt] DEBUG : 11:12:11.658339/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15847004 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x319 793 [ 90071056 2534 222 25597 41 2 3586 ]
11/03/24 11:12:11.658446 [srt] DEBUG : 11:12:11.658442/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.669582 [srt] DEBUG : 11:12:11.669579/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.669598 [srt] DEBUG : 11:12:11.669590/T0x17170b000 D:SRT.in: pack: type=ack ARG=794 [ 90071056 2534 222 25597 41 2 3586 ]
11/03/24 11:12:11.669621 [srt] DEBUG : 11:12:11.669606/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15858271 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x31a 794 [ 90071056 2534 222 25597 41 2 3586 ]
11/03/24 11:12:11.669733 [srt] DEBUG : 11:12:11.669728/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.680874 [srt] DEBUG : 11:12:11.680871/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.680890 [srt] DEBUG : 11:12:11.680884/T0x17170b000 D:SRT.in: pack: type=ack ARG=795 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.680909 [srt] DEBUG : 11:12:11.680896/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15869561 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x31b 795 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.681011 [srt] DEBUG : 11:12:11.681005/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.692203 [srt] DEBUG : 11:12:11.692199/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.692218 [srt] DEBUG : 11:12:11.692211/T0x17170b000 D:SRT.in: pack: type=ack ARG=796 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.692246 [srt] DEBUG : 11:12:11.692228/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15880892 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x31c 796 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.692388 [srt] DEBUG : 11:12:11.692381/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.703649 [srt] DEBUG : 11:12:11.703645/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.703670 [srt] DEBUG : 11:12:11.703661/T0x17170b000 D:SRT.in: pack: type=ack ARG=797 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.703701 [srt] DEBUG : 11:12:11.703679/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15892344 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x31d 797 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.703865 [srt] DEBUG : 11:12:11.703858/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.715136 [srt] DEBUG : 11:12:11.715131/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.715176 [srt] DEBUG : 11:12:11.715164/T0x17170b000 D:SRT.in: pack: type=ack ARG=798 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.715220 [srt] DEBUG : 11:12:11.715186/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15903851 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x31e 798 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.715346 [srt] DEBUG : 11:12:11.715339/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.726542 [srt] DEBUG : 11:12:11.726539/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.726561 [srt] DEBUG : 11:12:11.726554/T0x17170b000 D:SRT.in: pack: type=ack ARG=799 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.726589 [srt] DEBUG : 11:12:11.726569/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15915234 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x31f 799 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.726741 [srt] DEBUG : 11:12:11.726734/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.737971 [srt] DEBUG : 11:12:11.737967/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.737998 [srt] DEBUG : 11:12:11.737989/T0x17170b000 D:SRT.in: pack: type=ack ARG=800 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.738031 [srt] DEBUG : 11:12:11.738008/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15926673 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x320 800 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.738267 [srt] DEBUG : 11:12:11.738253/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.749473 [srt] DEBUG : 11:12:11.749470/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.749488 [srt] DEBUG : 11:12:11.749482/T0x17170b000 D:SRT.in: pack: type=ack ARG=801 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.749506 [srt] DEBUG : 11:12:11.749494/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15938158 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x321 801 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.749595 [srt] DEBUG : 11:12:11.749589/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.760837 [srt] DEBUG : 11:12:11.760834/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.760856 [srt] DEBUG : 11:12:11.760848/T0x17170b000 D:SRT.in: pack: type=ack ARG=802 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.760891 [srt] DEBUG : 11:12:11.760864/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15949529 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x322 802 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.761014 [srt] DEBUG : 11:12:11.761010/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.772216 [srt] DEBUG : 11:12:11.772213/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.772235 [srt] DEBUG : 11:12:11.772228/T0x17170b000 D:SRT.in: pack: type=ack ARG=803 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.772263 [srt] DEBUG : 11:12:11.772243/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15960908 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x323 803 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.772429 [srt] DEBUG : 11:12:11.772408/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.783581 [srt] DEBUG : 11:12:11.783579/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.783596 [srt] DEBUG : 11:12:11.783590/T0x17170b000 D:SRT.in: pack: type=ack ARG=804 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.783614 [srt] DEBUG : 11:12:11.783601/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15972267 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x324 804 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.783696 [srt] DEBUG : 11:12:11.783691/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.794904 [srt] DEBUG : 11:12:11.794901/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.794921 [srt] DEBUG : 11:12:11.794914/T0x17170b000 D:SRT.in: pack: type=ack ARG=805 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.794946 [srt] DEBUG : 11:12:11.794929/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15983594 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x325 805 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.795083 [srt] DEBUG : 11:12:11.795078/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.806334 [srt] DEBUG : 11:12:11.806330/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.806353 [srt] DEBUG : 11:12:11.806346/T0x17170b000 D:SRT.in: pack: type=ack ARG=806 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.806374 [srt] DEBUG : 11:12:11.806360/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=15995025 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x326 806 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.806511 [srt] DEBUG : 11:12:11.806495/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.817728 [srt] DEBUG : 11:12:11.817725/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.817745 [srt] DEBUG : 11:12:11.817738/T0x17170b000 D:SRT.in: pack: type=ack ARG=807 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.817787 [srt] DEBUG : 11:12:11.817757/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=16006422 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x327 807 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.817915 [srt] DEBUG : 11:12:11.817907/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.829121 [srt] DEBUG : 11:12:11.829118/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.829155 [srt] DEBUG : 11:12:11.829140/T0x17170b000 D:SRT.in: pack: type=ack ARG=808 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.829179 [srt] DEBUG : 11:12:11.829165/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=16017830 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x328 808 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.829317 [srt] DEBUG : 11:12:11.829307/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.840482 [srt] DEBUG : 11:12:11.840480/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.840494 [srt] DEBUG : 11:12:11.840489/T0x17170b000 D:SRT.in: pack: type=ack ARG=809 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.840513 [srt] DEBUG : 11:12:11.840499/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=16029164 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x329 809 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.840584 [srt] DEBUG : 11:12:11.840580/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.850803 [srt] DEBUG : 11:12:11.850800/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.850820 [srt] DEBUG : 11:12:11.850814/T0x17170b000 D:SRT.in: pack: type=ack ARG=810 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.850844 [srt] DEBUG : 11:12:11.850827/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=16039492 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x32a 810 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.850913 [srt] DEBUG : 11:12:11.850909/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.862250 [srt] DEBUG : 11:12:11.862244/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.862281 [srt] DEBUG : 11:12:11.862268/T0x17170b000 D:SRT.in: pack: type=ack ARG=811 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.862322 [srt] DEBUG : 11:12:11.862294/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=16050960 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x32b 811 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.862474 [srt] DEBUG : 11:12:11.862465/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.873696 [srt] DEBUG : 11:12:11.873693/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.873719 [srt] DEBUG : 11:12:11.873705/T0x17170b000 D:SRT.in: pack: type=ack ARG=812 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.873741 [srt] DEBUG : 11:12:11.873726/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=16062391 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x32c 812 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.873847 [srt] DEBUG : 11:12:11.873842/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.885082 [srt] DEBUG : 11:12:11.885078/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: past-recv %90071056
11/03/24 11:12:11.885100 [srt] DEBUG : 11:12:11.885093/T0x17170b000 D:SRT.in: pack: type=ack ARG=813 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.885130 [srt] DEBUG : 11:12:11.885108/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=16073773 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x32d 813 [ 90071056 2534 222 25599 41 2 3586 ]
11/03/24 11:12:11.885290 [srt] DEBUG : 11:12:11.885284/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.894622 [srt] DEBUG : 11:12:11.894612/T0x17170b000 D:SRT.qr: @143498908: LOSS STATS: n=24 SEQ: [90071056 90071079]
11/03/24 11:12:11.894712 [srt] DEBUG : 11:12:11.894698/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071080 BUF.s=25599 avail=25599 buffer=(%90071056:%90071055+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.894768 [srt] DEBUG : 11:12:11.894760/T0x17170b000 D:SRT.qr: pkt/LOSS DETECTED: %90071056 - %90071079
11/03/24 11:12:11.894798 [srt] DEBUG : 11:12:11.894786/T0x17170b000 D:SRT.qr: @143498908: processData: RECORDING LOSS: [ {90071056 90071079} ] tolerance=0
11/03/24 11:12:11.894830 [srt] DEBUG : 11:12:11.894818/T0x17170b000 D:SRT.qr: @143498908: WILL REPORT LOSSES (SRT): [ {90071056 90071079} ]
11/03/24 11:12:11.894861 [srt] DEBUG : 11:12:11.894850/T0x17170b000 D:SRT.qr: lost packets 90071056-90071079 (24 packets): sending LOSSREPORT
11/03/24 11:12:11.894893 [srt] DEBUG : 11:12:11.894878/T0x17170b000 D:SRT.in: pack: type=lossreport ARG=none [ <90071056> 90071079 ]
11/03/24 11:12:11.894936 [srt] DEBUG : 11:12:11.894912/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=8 pkt.ts=16083542 TARGET=@390323467 CONTROL: size=8 type=lossreport ARG: 0x0 0 [ <90071056> 90071079 ]
11/03/24 11:12:11.895488 [srt] DEBUG : 11:12:11.895478/T0x17170b000 D:SRT.xt: NONCONT-SEQUENCE: first loss %90071056 (loss len=24)
11/03/24 11:12:11.895529 [srt] DEBUG : 11:12:11.895515/T0x17170b000 D:SRT.in: pack: type=ack ARG=814 [ 90071056 2534 222 25599 40 2 3503 ]
11/03/24 11:12:11.895580 [srt] DEBUG : 11:12:11.895550/T0x17170b000 D:SRT.ks: CChannel::sendto: SENDING NOW DST=89.36.67.100:8093 target=@390323467 size=28 pkt.ts=16084215 TARGET=@390323467 CONTROL: size=28 type=ack ARG: 0x32e 814 [ 90071056 2534 222 25599 40 2 3503 ]
11/03/24 11:12:11.895706 [srt] DEBUG : 11:12:11.895700/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (1): 90071056
11/03/24 11:12:11.895875 [srt] DEBUG : 11:12:11.895870/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071081 BUF.s=25599 avail=25599 buffer=(%90071056:%90071080+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.899762 [srt] DEBUG : 11:12:11.899704/T0x17170b000 D:SRT.qr: INCOMING PACKET: FROM=89.36.67.100:8093 BOUND=0.0.0.0:0 TARGET=@143498908 DATA: size=45 0D2C3B44 #1256 %90071056 PB_SOLO ORD_RELAXED EK_EVEN SN_REXMIT
11/03/24 11:12:11.899829 [srt] DEBUG : 11:12:11.899807/T0x17170b000 D:SRT.qr: @143498908: processData: RECEIVED DATA: size=45 seq=90071056 ETS=01:37:38.876321625 [STDY] PTS=01:37:38.976321625 [STDY]
11/03/24 11:12:11.899929 [srt] DEBUG : 11:12:11.899922/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071056 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.899958 [srt] DEBUG : 11:12:11.899953/T0x17170b000 D:SRT.qr: @143498908: received reXmitted packet seq=90071056
11/03/24 11:12:11.900227 [srt] DEBUG : 11:12:11.900220/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071057 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.900492 [srt] DEBUG : 11:12:11.900486/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071058 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.900950 [srt] DEBUG : 11:12:11.900936/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071059 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.901487 [srt] DEBUG : 11:12:11.901475/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071060 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.902006 [srt] DEBUG : 11:12:11.901993/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071061 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.902580 [srt] DEBUG : 11:12:11.902567/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071062 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.903092 [srt] DEBUG : 11:12:11.903080/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071063 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.903491 [srt] DEBUG : 11:12:11.903485/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071064 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.903750 [srt] DEBUG : 11:12:11.903745/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071065 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.904016 [srt] DEBUG : 11:12:11.904010/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071066 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.904311 [srt] DEBUG : 11:12:11.904305/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071067 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.904592 [srt] DEBUG : 11:12:11.904586/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071068 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.904856 [srt] DEBUG : 11:12:11.904850/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071069 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.905152 [srt] DEBUG : 11:12:11.905138/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071070 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.905467 [srt] DEBUG : 11:12:11.905461/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071071 BUF.s=25599 avail=25599 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.905589 [srt] DEBUG : 11:12:11.905582/T0x17170b000 D:SRT.xt: @143498908: sendCtrlAck: %90071056 -> %90071072 (16 packets)
11/03/24 11:12:11.905788 [srt] DEBUG : 11:12:11.905779/T0x17170b000 D:SRT.xt: @143498908: sendCtrl(UMSG_ACK): ACK (17): 90071056 90071057 90071058 90071059 90071060 90071061 90071062 90071063 90071064 90071065 90071066 90071067 90071068 90071069 90071070 90071071 90071072
11/03/24 11:12:11.905984 [srt] DEBUG : 11:12:11.905976/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071072 BUF.s=25599 avail=25583 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.906334 [srt] DEBUG : 11:12:11.906328/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071073 BUF.s=25599 avail=25583 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.906624 [srt] DEBUG : 11:12:11.906610/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071074 BUF.s=25599 avail=25583 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.906952 [srt] DEBUG : 11:12:11.906946/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071075 BUF.s=25599 avail=25583 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.907280 [srt] DEBUG : 11:12:11.907274/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071076 BUF.s=25599 avail=25583 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.907563 [srt] DEBUG : 11:12:11.907556/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071077 BUF.s=25599 avail=25583 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.907826 [srt] DEBUG : 11:12:11.907820/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071078 BUF.s=25599 avail=25583 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.908089 [srt] DEBUG : 11:12:11.908082/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071079 BUF.s=25599 avail=25583 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=REXMITTED FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_REXMIT
11/03/24 11:12:11.914281 [srt] DEBUG : 11:12:11.914274/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071082 BUF.s=25599 avail=25583 buffer=(%90071056:%90071081+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.914609 [srt] DEBUG : 11:12:11.914601/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071083 BUF.s=25599 avail=25583 buffer=(%90071056:%90071082+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.934585 [srt] DEBUG : 11:12:11.934571/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071084 BUF.s=25599 avail=25571 buffer=(%90071056:%90071083+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.954461 [srt] DEBUG : 11:12:11.954448/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071085 BUF.s=25599 avail=25570 buffer=(%90071056:%90071084+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.954864 [srt] DEBUG : 11:12:11.954855/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071086 BUF.s=25599 avail=25570 buffer=(%90071056:%90071085+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.974565 [srt] DEBUG : 11:12:11.974560/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071087 BUF.s=25599 avail=25568 buffer=(%90071056:%90071086+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.974773 [srt] DEBUG : 11:12:11.974769/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071088 BUF.s=25599 avail=25568 buffer=(%90071056:%90071087+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.994579 [srt] DEBUG : 11:12:11.994565/T0x17170b000 D:SRT.qr: @143498908: RECEIVED: %90071089 BUF.s=25599 avail=25566 buffer=(%90071056:%90071088+%90096654) RSL=ACCEPTED SN=ORIGINAL FLAGS: PB_SOLO ORD_RELAXED EK_NOENC SN_ORIGINAL
11/03/24 11:12:11.999637 [srt] DEBUG : 11:12:11.999625/T0x171797000 D:SRT.ts: @143498908: tsbpd: PLAYING PACKET seq=90071056 (belated 305ms)

@ethouris (Collaborator) commented:

Ok, most of the lines that appear in this log mention this packet's sequence number only because it was at times the sequence of the first lost packet, and hence the first "no packet" cell, and so the ACK number, as well as the first sequence in the receiver buffer.

The packet was physically received around the moment reported by the "INCOMING PACKET" log. The distance between that report and the "PLAYING PACKET" one is about 100ms. This means that if it was already belated 305ms at the last report, it must have arrived a good 205ms after its play time (which could have happened in theory, as it was retransmitted). It was fortunate not to get dropped only because the first readable packet in the buffer (sequence %90071080) likely had a later play time than this one; otherwise this packet - and all up to %90071079 - would already have been dropped to allow that one to play. And on top of that there is the delay required to properly switch threads over to TSBPD so that it picks up the play time of the next packet to play.

An unusual thing I can see here is that the TSBPD thread wasn't woken up earlier anyway. It might be that the receiver thread was somehow treated as high priority while getting updates from the system with the newly received packets, or was even kept frozen for too long because the system decided to collect and burst the freshly received packets - they came in as retransmitted and therefore in a burst.

You may still try out the version in that PR I showed you above. Its performance is poor for now, but it is designed to pass updates from newly received packets directly to TSBPD and avoid most of the spurious wakeups of this thread.

@oviano (Contributor, Author) commented Mar 11, 2024

Ok, thank you for that, it all makes sense.

I think it might be the state of my Ethernet wiring; I just connected the Apple TV to the router using Powerline (G.hn) technology instead, and it seems much more solid so far.

I still need to work around the problems I'm having with multiple SRT connections, though. Are there global locks or anything else that might mean separate SRT connections affect one another, or is it more likely just a problem of having too many threads and context switches?

@ethouris (Collaborator) commented:

Well, that depends on many conditions, such as:

  • whether you have multiple SRT connections spread across the machine, or all inside one single application
  • whether you host callers with the default auto-assigned port, or a listener (listeners always use a common multiplexer)

Locks are taken for various shared objects inside, but the only global lock is taken when dispatching a socket by its ID, and that one can only be a bit more contended when you are processing a connection or configuring a socket at that moment. However, don't underestimate that ACK-signoff problem either. I don't know exactly how much it disturbs things, but preferring the ACK state over the play time is at least quite a big theoretical possibility.

@oviano (Contributor, Author) commented Mar 11, 2024

Multiple SRT connections within one application, and no other SRT connections in any other application.

The connections connect out on the default auto-assigned port. They are made once at the start and then held, with one receiving data from the server and the others doing nothing (the server not sending anything, the player just sitting on a poll).

What do you mean re: "ACK-signoff problem"?

@ethouris (Collaborator) commented:

Then this ACK-signoff problem is the only thing that comes to my mind. That's why you should really try it.

@oviano (Contributor, Author) commented Mar 11, 2024

Ah, ok, you mean the PR - so avoiding spurious wake-ups might help then.

@ethouris (Collaborator) commented:

Yeah. I don't know if it will help much, but if you have a problem like that it's worth a try.

@oviano (Contributor, Author) commented Mar 12, 2024

The other thing I'm pondering is the way a complete frame packet, likely comprising multiple SRT packets, is sent and received - specifically the timings of each packet.

Really one wants to consider a frame packet as a discrete entity that is sent at time X and is available to play at time Y.

In reality, each SRT packet will have a timestamp of X, X+something, X+something more etc.

On the receive side, the complete assembled packet will only be available once all of its SRT packets have been received, thus at time X+whatever.

I'm curious as to what would happen if all sub-packets were given the same timestamp X, and whether that would result in less fragmented sleep/wake/sleep on the player side. Obviously that would mean sending some SRT packets at a time that has already passed, so I'm not sure of the effect, or whether it's a good or bad idea.

Alternatively, one could pass a completed video packet on to the player only when the first SRT packet of the following video frame packet has been received, thus aligning to frame boundaries (see the sketch below). That wouldn't help with any excessive sleep/wake/sleep activity, though - only with "smoothing" packet delivery to the player.
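
A minimal sketch of what that boundary-aligned release could look like on the player side (all names are illustrative, not from the actual project):

    #include <cstdint>
    #include <utility>
    #include <vector>

    struct Packet { int64_t framePts; std::vector<uint8_t> payload; };
    struct Frame  { int64_t pts = -1; std::vector<uint8_t> data; };

    // Packets are appended to the current frame; the completed frame is
    // handed over only when the first packet of the NEXT frame shows up.
    class FrameAssembler
    {
    public:
        // Returns true and fills 'out' when a complete frame is released.
        bool push(const Packet& pkt, Frame& out)
        {
            bool released = false;
            if (m_cur.pts != -1 && pkt.framePts != m_cur.pts)
            {
                out = std::move(m_cur); // previous frame is complete
                m_cur = Frame();
                released = true;
            }
            m_cur.pts = pkt.framePts;
            m_cur.data.insert(m_cur.data.end(),
                              pkt.payload.begin(), pkt.payload.end());
            return released;
        }
    private:
        Frame m_cur;
    };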

@oviano (Contributor, Author) commented Mar 12, 2024

I tried the above and it didn't make any noticeable difference.

Here's something interesting though; I'm testing on iOS today, just a single SRT connection.

I get these intervals between video packets:

STREAM_CONNECTION: Video packet interval is 23039us
STREAM_CONNECTION: Video packet interval is 18263us
STREAM_CONNECTION: Video packet interval is 21791us
STREAM_CONNECTION: Video packet interval is 19993us
STREAM_CONNECTION: Video packet interval is 16865us
STREAM_CONNECTION: Video packet interval is 22627us
STREAM_CONNECTION: Video packet interval is 17469us
STREAM_CONNECTION: Video packet interval is 19373us
STREAM_CONNECTION: Video packet interval is 23959us
STREAM_CONNECTION: Video packet interval is 15443us
STREAM_CONNECTION: Video packet interval is 24810us
STREAM_CONNECTION: Video packet interval is 17466us
STREAM_CONNECTION: Video packet interval is 18148us
STREAM_CONNECTION: Video packet interval is 23851us
STREAM_CONNECTION: Video packet interval is 16186us
STREAM_CONNECTION: Video packet interval is 21334us
STREAM_CONNECTION: Video packet interval is 19002us
STREAM_CONNECTION: Video packet interval is 20563us
STREAM_CONNECTION: Video packet interval is 22636us
STREAM_CONNECTION: Video packet interval is 19924us

So some variance, but not too bad.

But if I replace this line in SRT...

tsbpd_cc.wait_until(tsNextDelivery);

...with this instead...

while (steady_clock::now() < tsNextDelivery) { }

I get these results:

STREAM_CONNECTION: Video packet interval is 19312us
STREAM_CONNECTION: Video packet interval is 19915us
STREAM_CONNECTION: Video packet interval is 20979us
STREAM_CONNECTION: Video packet interval is 18990us
STREAM_CONNECTION: Video packet interval is 20540us
STREAM_CONNECTION: Video packet interval is 20453us
STREAM_CONNECTION: Video packet interval is 18966us
STREAM_CONNECTION: Video packet interval is 20630us
STREAM_CONNECTION: Video packet interval is 20329us
STREAM_CONNECTION: Video packet interval is 19721us
STREAM_CONNECTION: Video packet interval is 19937us
STREAM_CONNECTION: Video packet interval is 20247us
STREAM_CONNECTION: Video packet interval is 19087us
STREAM_CONNECTION: Video packet interval is 20019us
STREAM_CONNECTION: Video packet interval is 20756us
STREAM_CONNECTION: Video packet interval is 19744us
STREAM_CONNECTION: Video packet interval is 19450us
STREAM_CONNECTION: Video packet interval is 20573us
STREAM_CONNECTION: Video packet interval is 19789us
STREAM_CONNECTION: Video packet interval is 19525us

These show noticeably less variance.

Obviously there are CPU considerations to running a tight loop like that, but it's interesting to see that there might be potential for a more timely release of packets from SRT in some situations - perhaps something along the lines of the USE_BUSY_WAITING option that already exists on the send side.

@oviano (Contributor, Author) commented Mar 13, 2024

I found that simply looping and sleeping for half the remaining time produces the same improved regularity of packet release, while preserving the same CPU usage.

            // Instead of a single wait_until(tsNextDelivery), iteratively
            // sleep for half of the remaining time; each shorter request
            // over-sleeps by less in absolute terms.
            steady_clock::time_point cur_tp = steady_clock::now();
            while (cur_tp < tsNextDelivery)
            {
                tsbpd_cc.wait_for((tsNextDelivery - cur_tp) / 2);
                cur_tp = steady_clock::now();
            }

Has any analysis been done on this before, i.e. on whether packets are in fact released in a timely manner?

@ethouris (Collaborator) commented:

Well, sounds reasonable. @maxsharabayko?

@maxsharabayko (Collaborator) commented:

@oviano wrote:

> Really one wants to consider a frame packet as a discrete entity that is sent at time X and is available to play at time Y.
> In reality, each SRT packet will have a timestamp of X, X+something, X+something more etc.
> On the receive, the complete assembled packet will only be available when all SRT packets have been received, thus at time X+whatever.

This is where the Source Time feature (SRT_MSGCTRL::srctime) should be used. You should consider submitting all the packets that belong to a frame to SRT as soon as possible, all with the same timestamp. If you want to avoid spikes in sending, the SRTO_MAXBW option controls the pace of sending over UDP.
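
For example, something along these lines (srt_sendmsg2, SRT_MSGCTRL::srctime and srt_time_now are the public SRT API; the chunking loop and names are illustrative):

    #include <srt/srt.h>

    // Stamp every chunk of one frame with the same source time, so the
    // receiver's TSBPD schedule treats them as simultaneous. 1316 bytes
    // is the usual live-mode payload size.
    void sendFrame(SRTSOCKET sock, const char* frame, int frameLen)
    {
        SRT_MSGCTRL mc = srt_msgctrl_default;
        mc.srctime = srt_time_now(); // one timestamp for the whole frame

        const int chunk = 1316;
        for (int off = 0; off < frameLen; off += chunk)
        {
            const int len = frameLen - off < chunk ? frameLen - off : chunk;
            if (srt_sendmsg2(sock, frame + off, len, &mc) == SRT_ERROR)
                break; // check srt_getlasterror() in real code
        }
    }

SRTO_MAXBW would then be set on the socket (via srt_setsockflag) to keep such a burst from hitting the wire all at once.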

> I found that simply looping and sleeping for half the remaining time produces the same improved regularity of packet release, while preserving the same CPU usage.

This is in a way OS- and hardware-related, as well as stream-bitrate-related. See the tests below for the BUSY_WAITING feature in PR #678. See also #936. Waiting for half the time would not be a universal solution.

CTimer::sleepto() accuracy in CV-based waiting mode. Each column shows the actual sleeping time in microseconds for each platform. Measured using unit test CTimer.SleeptoAccuracy.

| Time, μs | Win 10 | CentOS 7 | MacBook Pro | iOS | Android |
|---------:|-------:|---------:|------------:|----:|--------:|
|        1 |     64 |       49 |           1 |     |         |
|        5 |    209 |       58 |          10 |     |         |
|       10 |    354 |       62 |          17 |     |         |
|       50 |   1181 |       99 |          68 |     |         |
|      100 |   1536 |      150 |         140 |     |         |
|      250 |   1559 |      315 |         389 |     |         |
|      500 |   1469 |      582 |         738 |     |         |
|     1000 |   1593 |     1080 |        1346 |     |         |
|     5000 |   5506 |     5110 |        6118 |     |         |
|    10000 |  10717 |    10112 |       11963 |     |         |

@oviano You can measure timer accuracy on your playback device using SyncEvent.WaitFor unit test.
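(If the library is built with unit tests enabled, that test can presumably be selected with GoogleTest's usual filter, i.e. by passing --gtest_filter=SyncEvent.WaitFor to the test binary; the binary name depends on the build setup.)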

@oviano (Contributor, Author) commented Mar 13, 2024

Thanks for that.

Actually, my first thought was to try the same (or similar) code used for USE_BUSY_WAITING on the send side here too, so I more or less copied and pasted that code - but, somewhat counterintuitively, it did not have an effect; packets were still delivered irregularly, on macOS at least.

I’ll continue to investigate given the info in your post.

@oviano (Contributor, Author) commented Mar 13, 2024

By the way, that chart shows exactly why I saw a benefit from the crude "sleep in a loop for half the time remaining" approach on Apple platforms: as it gets closer to the wake time it sleeps for shorter and shorter durations, and thus over-sleeps by less each time.
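
For example, with 8ms remaining: sleep 4ms and wake at, say, 4.5ms; then sleep half of the remaining ~3.5ms, and so on. The only request that can meaningfully over-sleep is the final sub-millisecond one, and per the MacBook column above, the shorter the request, the smaller the absolute over-sleep.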

@ethouris (Collaborator) commented:

It might be that we could add such a feature: making the live-mode sleep configurable, with a configurable number of time slices and the length of the first slice. Sleep inaccuracy is not an easy problem to solve, and it's even more of a problem on Windows - very short sleeps in particular are hard to achieve there; probably even a spin lock with yield would yield (XD) a better result. It may not be the most important thing to do in general, but it would allow further development and research.

@oviano (Contributor, Author) commented Mar 13, 2024

The thing I can't quite figure out at the moment is why the equivalent of the USE_BUSY_WAITING code, when placed here, does not have the desired effect of smoothing the release of packets, and is less effective (while using more CPU) than my rudimentary "iteratively sleep for half of what is remaining" hack.

I'm testing on macOS right now, so the "td_threshold" is 1ms, but I think there might be situations where it still oversleeps. Say we want it to sleep for 8ms: the USE_BUSY_WAITING path would tell it to sleep for 7ms and then spin for the last 1ms, but I think that 7ms sleep still oversleeps by enough to make a difference.

There may also be other factors at play; the unit tests were probably run in isolation, whereas when the machine is under load, with lots of other threads actually doing other work, we may not get the same results in reality as in the unit tests.

The answer might be a combination of both concepts: instead of sleeping for the 7ms above, we would sleep for 3.5ms and loop again, but still run the tight loop when we get down to the last 1ms (see the sketch below).
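
Something like this, say (an illustrative sketch, not SRT code; in the real tsbpd loop the waits would have to stay on the condition variable so newly received packets can still interrupt the sleep):

    #include <chrono>
    #include <thread>

    using namespace std::chrono;

    // Hybrid wait: halve the remaining time while it is long, then spin
    // only for the final stretch (the last ~1ms, matching td_threshold).
    void preciseWaitUntil(steady_clock::time_point deadline)
    {
        const auto spinThreshold = milliseconds(1);
        for (;;)
        {
            const auto remaining = deadline - steady_clock::now();
            if (remaining <= steady_clock::duration::zero())
                return;
            if (remaining <= spinThreshold)
                break;                                  // close enough to spin
            std::this_thread::sleep_for(remaining / 2); // halve the long sleeps
        }
        while (steady_clock::now() < deadline)
            ;                                           // tight loop at the end
    }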

@ethouris (Collaborator) commented:

Not sure if this helps, but note that the conditions for the decoding to run smoothly are:

  1. For the very first video I-frame - after the PMT has been seen and all the TS packets completing the PES packet of the video frame have been received - the LAST UDP PACKET that delivers the remaining data of the frame latches the base time. Everything received prior to this is unimportant.
  2. From then on, all other packets may be delivered even with some jitter; it doesn't matter. What matters is that when the next I-frame comes, the last UDP packet delivering its remaining data must be delivered to the application exactly at (or at least not later than) the time of delivering the end of the previous I-frame plus the difference between the timestamps of this frame and the previous I-frame.

Of course, SRT can't do anything to ensure this, because it's content-agnostic, but there are several things that could be done, I think. First, it might be possible for SRT to be switched during transmission to an "aggressive delivery" mode (which shouldn't be enabled until at least one I-frame has been delivered). This "aggressive delivery" would mean that only 8/10 of the computed sleep time is actually slept, and then, if the remaining time is evaluated as less than some "delta" (a value probably correlated somehow with the bitrate), the packet is delivered to the application even if it is slightly early (that is, you sleep for 7ms and then simply deliver the packet without spin-waiting for the last 1ms).

There's also something the sender application can do: the timeslots assigned to subsequent fragments of the MPEG-TS stream can be adjusted to the absolute accuracy required by the distance between the last packets of two subsequent I-Frames, with the time explicitly specified at the sending call, without relying on the library to pick it up for you.
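
For instance, with srt_sendmsg2() and the srctime field of SRT_MSGCTRL (both real APIs in reasonably recent libsrt versions) the sender can stamp every chunk of one frame with a single source time; the frame-splitting logic below is made up for illustration:

```cpp
#include <srt/srt.h>

// Send one complete frame, stamping all of its chunks with the same
// source time instead of relying on the automatic per-call time pickup.
int send_frame(SRTSOCKET sock, const char* data, int len)
{
    SRT_MSGCTRL mc = srt_msgctrl_default;
    mc.srctime = srt_time_now();   // one timestamp for the whole frame

    const int chunk = 1316;        // typical live-mode payload size
    for (int off = 0; off < len; off += chunk)
    {
        const int n = (len - off < chunk) ? (len - off) : chunk;
        if (srt_sendmsg2(sock, data + off, n, &mc) == SRT_ERROR)
            return SRT_ERROR;
    }
    return 0;
}
```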

@oviano
Contributor Author

oviano commented Mar 14, 2024

Thanks for your thoughts!

Actually it's not using MPEG-TS; it's a custom, slimmed-down container format of my own - essentially just the raw video and audio streams with as little overhead as possible.

I-frames are only sent once at the start of the stream, and again on request in the event of a discontinuity in the video stream.

I've gone through quite a few variations, but I had ended up with around a 100ms buffer after SRT, specifically to smooth out the packets coming out of SRT so they could be played smoothly by my player. Then I found that, due to drift and so forth, this buffer could change size. So I ended up adding all sorts of complicated code to track the buffer size and adjust playback speed etc. to keep it under control. This itself required additional audio buffering (for time-stretching the audio samples), and it wasn't 100% reliable either - it could sometimes break under extreme network conditions.

Since my project is about remotely controlling a remote source, latency is important - when the player requests a channel change or navigates the remote source's UI, it needs to be responsive. All this added latency was making it feel sluggish: the 100ms jitter buffer plus another 100ms needed for time-stretching the audio samples, so 200ms on top of the SRT buffer and any other latency (the small time taken to initiate a control request, and the server's end-to-end capture-to-encode latency, although that is only around 20ms).

So lately, I've sort of gone back to basics: I shouldn't really need another buffer after the SRT one - it's 100ms+ after all. That should be my only real buffer, apart from a small audio buffer plus the 2-3 video frames I maintain just so I can nicely lock playback to the refresh rate of the device.

So my challenge was to see if I could get the audio and video packets delivered from SRT in as rock-solid a manner as possible, so I can feed them directly into the playback engine.

I've more or less succeeded with two key changes:

  1. not having multiple SRT connections open at once (I had 4); even 3 idle connections screw things up.
  2. my hack to reduce "over sleeping" in the tsbpd release code

Both of these have *dramatically improved the consistency of the output, as shown in some of my posts further up.

You mention releasing packets early - but actually, I wouldn't want that, because if that happens with a few video packets in a row, for example, then my player is going to overflow its small video frame buffer (I mentioned 2-3 frames), and then when packets revert to being on time, it risks stutter. What I want is for complete video frames to be released as consistently as possible with respect to one another. Adjustments due to drift would be OK; it would maybe jump a frame occasionally to "re-align".

In terms of scheduling the packet sending on the server, I assume you mean delaying sending a complete frame according to its PTS/DTS. I have experimented with this before, but again you end up having to maintain a small buffer on the server to "schedule" these packets - a buffer which can drift one way or the other over time and introduces another headache.

Giving all sub-packets of a video frame the same SRT send time does make sense though, for sure, as there is no benefit to waiting slightly longer for the last packet to arrive before releasing it to the player.

*on Apple platforms. This approach won't work on Windows, as the default timer granularity seems to be 15ms, which is way too high.

@ethouris
Collaborator

Wow, you stole one of my ideas, buddy! XD

But then, if it's not MPEG-TS, and you are using the idea of sending difference frames only, except for the first-after-discont I-Frame, then you should have even fewer problems with splitter synchronization. All difference frames should be able to be sent long ahead of their play time, so there's quite a lot of jitter buffer. For this type of transfer I think SRT should better support a little different working mode: Apply a delay as it normally happens, as long as you are waiting for the first I-Frame (or simply you send it as a first data portion anyway), and AFTER the receiver has confirmed its reception, the application should set the "immediate mode", that is, data are delivered to the application from SRT without any delay, just as soon as they are available. Waiting in the buffer for the "play time" should only happen in case of lost packets, that is, you apply a drop only when the play time comes for the packet following the drop gap, but when the packet is available in the buffer's first cell, the application gets it, no matter if its play time has come or not.

This mode would still require cooperation with the application, so that the application can restore the "normal live mode" as soon as it detects that the next I-Frame is coming - but that's not a problem, because the application will receive this packet anyway, and since it's the first of the bunch, it can set the normal mode again before the last packet of the I-Frame is delivered.

This can just as well be done a little bit differently, that is, the immediate mode is set for the whole time, but the application always receives a packet immediately, except that it also gets the delivery time information. Then the application can decide for itself whether this packet should be sent to the splitter right now (that is, when exactly it will be submitted to the decoder) or up to which time it has to wait for it.

Earlier I saw no sense in doing this because it complicates application development, but maybe it could be helpful in certain use cases.

@oviano
Contributor Author

oviano commented Mar 14, 2024

Wow, you stole one of my ideas, buddy! XD

If you mean the idea of sending an I-frame only at the start or on a discontinuity then it would seem an ideal enhancement to SRT, as I assume most use-cases of SRT are one-to-one transmission? (I'm not sure on that)

But then, if it's not MPEG-TS, and you are using the idea of sending difference frames only, except for the first-after-discont I-Frame, then you should have even fewer problems with splitter synchronization. All difference frames should be able to be sent long ahead of their play time, so there's quite a lot of jitter buffer.

Well, not if you want latency as low as possible through capture-encode-transmit-receive-decode-display.

For this type of transfer I think SRT should better support a little different working mode: Apply a delay as it normally happens, as long as you are waiting for the first I-Frame (or simply you send it as a first data portion anyway), and AFTER the receiver has confirmed its reception, the application should set the "immediate mode", that is, data are delivered to the application from SRT without any delay, just as soon as they are available. Waiting in the buffer for the "play time" should only happen in case of lost packets, that is, you apply a drop only when the play time comes for the packet following the drop gap, but when the packet is available in the buffer's first cell, the application gets it, no matter if its play time has come or not.

It makes sense, but perhaps rather than the application setting a mode, it could be a flag set on the packet at send time? i.e. scheduled_packet, meaning: deliver this according to its timestamp vs. deliver this to the application asap.

Besides, if all the packets for a video frame were given the same send time, does this add anything, as you'd get all the subsequent packets as soon as they are available in any case?

@ethouris
Collaborator

Wow, you stole one of my ideas, buddy! XD

If you mean the idea of sending an I-frame only at the start or on a discontinuity then it would seem an ideal enhancement to SRT, as I assume most use-cases of SRT are one-to-one transmission? (I'm not sure on that)

Yes, that one. It came to my mind some time ago that MPEG-TS was designed for use over UDP, possibly with FEC; that's why you need the I-Frame refreshed every so often and the configuration tables repeated. But with a reliable connection over SRT, you can send the most extensive information just once at the beginning, and then simply send only difference frames.

But then, if it's not MPEG-TS, and you are using the idea of sending difference frames only, except for the first-after-discont I-Frame, then you should have even fewer problems with splitter synchronization. All difference frames should be able to be sent long ahead of their play time, so there's quite a lot of jitter buffer.

Well, not if you want latency as low as possible through capture-encode-transmit-receive-decode-display.

Exactly - when you want latency as low as possible. Encoding and sending an I-Frame will take a lot of time, and you will have to spend that time anyway, including on the reception side before you submit it to the decoder. But sending subsequent difference frames will take less time, and therefore they can be submitted to the decoder much earlier than the I-Frame could be.

It makes sense, but perhaps rather than the application setting a mode, it could be a flag set on the packet at send time? i.e. scheduled_packet, meaning: deliver this according to its timestamp vs. deliver this to the application asap.

Could be, but this requires some changes, if not inventions, in the protocol, while the immediate mode can simply be implemented in the library.

Besides, if all the packets for a video frame were given the same send time, does this add anything, as you'd get all the subsequent packets as soon as they are available in any case?

Hmm, you know, I didn't think about that. But you may be right. There's just one small problem, though - an earlier delivery time may mean an earlier play time for a packet, which means that an earlier drop preference may kill all efforts of retransmission.

What I was talking about was that the ready-delivery can happen earlier (once the packet is available), but the drop preference remains the same as it was - SRT would still wait for the proper play time before delivering a packet after a drop.

Another thing that I have missed a bit is that for such a protocol you should unfortunately turn off TLPKTDROP. It was relatively harmless for MPEG-TS (a drop can spoil the video only up to the next I-Frame), but in a system where you have only one I-Frame, one dropped packet can ruin the whole tapestry. The only way to respond to a choking network would be to drop frames on the sender side and enforce a discontinuity (possibly with changed parameters).
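
Turning it off is a one-liner with the real SRTO_TLPKTDROP option (a sketch; it has to be set on both peers before connecting, and error handling is omitted):

```cpp
#include <srt/srt.h>

// Disable too-late packet drop, so SRT never discards packets that
// missed their play time - required when every frame depends on all
// previous ones.
void disable_tlpktdrop(SRTSOCKET sock)
{
    const int no = 0;
    srt_setsockflag(sock, SRTO_TLPKTDROP, &no, sizeof no);
}
```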

@oviano
Contributor Author

oviano commented Mar 14, 2024

Again though, if you want latency to be at the absolute minimum, then you want to deliver complete frames on time. Not early, not late - just on time.

If you deliver early, then what do you do? Well, if you play them early, what happens when the next one is on time? There is potentially a gap in playback. So maybe you buffer the early frame - but then what have you gained? SRT could have just held onto it until the play time...

Any difference in decoding time between I-frames and non-I-frames is negligible with hardware decoding.

Maybe we're talking a little bit at cross-purposes here, I'm not sure :)

Either way though, at some point a more accurate release of packets would be helpful. Something like USE_BUSY_WAITING applied to tsbpd release would be a start - but as mentioned above, it still doesn't seem to stop "over sleep" on Apple platforms anyway. I found that a hybrid approach - iterative half-sleep + busy wait - does work, but it seems annoying to need any element of busy-waiting on Apple platforms if the iterative half-sleep alone is sufficient. So maybe the busy wait is a Windows-only thing (I'd be interested to see whether applying the iterative half-sleep approach would improve send-time accuracy on non-Windows platforms and negate the need for a busy loop).

@ethouris
Collaborator

Whatever is done with the frames that your application receives is up to your application.

I'm talking about SRT and whether it can deliver the packets for a particular frame earlier, so that your application is given some time advantage that it can just as well waste itself, if it wishes, or use for something else. If SRT delivers these packets exactly on time, it doesn't leave the application any choice.

Playback is a completely different topic - this should always be done in the rhythm of the frame timestamps, not at the time frames come out of the decoder, should they come out any earlier.

@oviano
Contributor Author

oviano commented Mar 14, 2024

Whatever is done with the frames that your application receives is up to your application.

I'm talking about SRT and whether it can deliver the packets for a particular frame earlier, so that your application is given some time advantage that it can just as well waste itself, if it wishes, or use for something else. If SRT delivers these packets exactly on time, it doesn't leave the application any choice.

Yep. I think the point I am making is that I don't want the application to have to make a choice. A choice means to play or not, to decode or not, and likely involves another buffer.

Playback is a completely different topic - this should always be done in the rhythm of the frame timestamps, not at the time frames come out of the decoder, should they come out any earlier.

Interestingly, in my use-case, with my current test, I am not even using the timestamps. I know that sounds peculiar, but with the latency now so low, hardware encoding and decoding on each side, CBR as the encode mode, and more accurate SRT tsbpd release, all I need is a three-frame video buffer: I populate this buffer as frames come in, and for each screen refresh I play back the next frame by deciding which frame number should be shown at the current refresh (taking into account any mismatch between refresh rate and frame rate).

Any variation between decode-time, or whatever else, is "absorbed" by the small video frame buffer. Audio works similarly.
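
The selection logic is roughly this (a toy sketch with made-up names, not my actual player code):

```cpp
#include <cstdint>

// Map a display refresh tick to the source frame that should be on
// screen, absorbing any mismatch between frame rate and refresh rate.
struct FramePacer
{
    double frame_rate;    // e.g. a 50fps source
    double refresh_rate;  // e.g. a 60Hz display

    int64_t frame_for_refresh(int64_t refresh_index) const
    {
        return static_cast<int64_t>(refresh_index * frame_rate / refresh_rate);
    }
};
```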

I previously had exactly that - waiting until a frame's play time according to its PTS - but that is a buffer, and that buffer gets upset by drift between SRT's packet release time and the play time according to PTS. So then we're back to managing that buffer as it fluctuates.

@oviano
Contributor Author

oviano commented Apr 14, 2024

Wow, you stole one of my ideas, buddy! XD

This can just as well be done a little bit differently, that is, the immediate mode is set for the whole time, but the application always receives a packet immediately, except that it also gets the delivery time information. Then the application can decide for itself whether this packet should be sent to the splitter right now (that is, when exactly it will be submitted to the decoder) or up to which time it has to wait for it.

Just picking up on this, as it's an interesting concept.

Some brief background info: my application allows control and streaming of a remote TV source. For example, I have an Apple TV set up in my brother's house in Stockholm (I am in London; latency is around 40ms), and this allows me to stream and control that Apple TV and watch things I can't necessarily watch here in the UK. I have another similar setup in Istanbul (latency is around 80ms). The app needs to allow responsive control of the remote UI, so low latency is critical. At the same time, when the user is not navigating the UI, one wants a stable, glitch-free stream. At the moment I make a minor adjustment to my final video frame "jitter buffer" depending on which mode we are in: if the user has controlled the remote source in the last 5s I reduce the jitter buffer to the minimum; otherwise it's larger, so that any fluctuations in decode time or packet reception are smoothed out.

However, I'm still currently limited by whatever SRT latency the stream was set up with (and it's messy to restart the stream with a different latency, etc.), so in the case of Istanbul it's around 320ms. What I'm wondering is whether I could use the sort of setting you describe above to receive packets asap when in low-latency mode and present them asap; it would lead to a more jittery picture, but only while controlling the remote source. Once the user stops controlling the source it would revert to normal mode, where packets are released on time - or, as you suggest, they are always released asap but with timing information, so that my application can sleep until the play time instead. This would actually be preferable, as I would probably only want to present the video packets asap in this mode and keep the audio stream stable, since audio breaking up and inserting silence is more annoying than a jittery picture.
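
As a rough sketch of how the application side might look - assuming a HYPOTHETICAL immediate-delivery mode that stock SRT doesn't currently offer (with TSBPD on, packets are already withheld until their play time); srt_recvmsg2() and SRT_MSGCTRL::srctime are real APIs, the rest is illustrative:

```cpp
#include <srt/srt.h>
#include <chrono>
#include <thread>

// Receive packets and either present them immediately (low-latency mode)
// or hold each one until its intended delivery time.
void receive_loop(SRTSOCKET sock, bool low_latency_mode)
{
    char buf[1500];
    SRT_MSGCTRL mc = srt_msgctrl_default;

    for (;;)
    {
        const int n = srt_recvmsg2(sock, buf, sizeof buf, &mc);
        if (n == SRT_ERROR)
            break;

        if (!low_latency_mode)
        {
            // Normal mode: sleep until the packet's intended play time.
            const int64_t now_us = srt_time_now();
            if (mc.srctime > now_us)
                std::this_thread::sleep_for(
                    std::chrono::microseconds(mc.srctime - now_us));
        }
        // present_packet(buf, n);  // hypothetical player entry point
    }
}
```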
