
How to upload as fast and soon as possible #489

Open
piranna opened this issue Oct 17, 2022 · 5 comments


piranna commented Oct 17, 2022

Question

When using a Readable as input data, a chunkSize must be provided. This seems to buffer the data and send it in a batch once that number of bytes is reached. It's mostly intended for platforms that require data to be uploaded in fixed-size chunks, but it also adds delays for real-time use, especially when no such size limitation exists. How can I remove it and upload the content the way I receive it? I have tried using a small chunkSize, but the content is being split and sent in multiple requests. How can I send the original Buffers at the same rate the Readable provides them?
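For reference, the setup in question looks roughly like this (a sketch only: the endpoint is a placeholder, the chunkSize value is illustrative, and option names follow the tus-js-client API docs):

```javascript
// Sketch of the relevant tus-js-client upload options (placeholder values).
// tus-js-client requires `chunkSize` when the input is a Readable stream:
const options = {
  endpoint: 'https://tus.example.com/files/', // placeholder upload URL
  chunkSize: 64 * 1024,        // bytes buffered before each request
  uploadLengthDeferred: true,  // total stream length unknown up front
};
```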

Setup details
Please provide the following details, if applicable to your situation:

  • Runtime environment: [e.g. Browser, Node.js, React Native]
  • Used tus-js-client version: [can be obtained by running npm ls]
  • Used tus server software: [e.g. tusd, tus-node-server etc]
Acconut (Member) commented Oct 18, 2022

This seems to buffer the data and send it in a batch once the number of bytes is reached,

That is correct.

It's mostly intended for platforms that require data to be uploaded in fixed-size chunks

That is not correct. A chunk size is required for readable streams because it specifies the amount of data that might have to be retransmitted in the case of a network error. Read more about it at https://github.com/tus/tus-js-client/blob/master/docs/api.md#chunksize.
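A minimal sketch of that retransmission point (hypothetical names, not tus-js-client internals): a Readable cannot be rewound, so the client must keep the current chunk in memory until it is acknowledged, and a failed request is retried from that retained buffer rather than by re-reading the stream.

```javascript
// Retry a single chunk from the in-memory buffer on failure.
// `send` stands in for the actual network request (e.g. a PATCH in tus).
function uploadChunk(chunk, send, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return send(chunk); // success: bytes acknowledged
    } catch (err) {
      if (attempt === maxRetries) throw err;
      // chunk is still held in memory, so we can simply try again
    }
  }
}

// Simulated flaky transport: first attempt fails, second succeeds.
let calls = 0;
const flakySend = (chunk) => {
  if (++calls === 1) throw new Error('network error');
  return chunk.length; // bytes acknowledged by the "server"
};
const acked = uploadChunk(Buffer.from('abcd'), flakySend);
// acked === 4 after two attempts
```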

How can I send the original Buffers at the same rate the Readable provides them?

In previous versions, we had a system where data from a readable stream was uploaded while the buffer was still being filled. This was basically what you are talking about. However, it was very buggy and messy, and we could not get it working reliably, so we replaced it with the current approach, where we first fill the buffer and then send the request. It is better to have something simple that fully works than something more complex but broken.
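The current "fill the buffer, then send" approach can be sketched like this (illustrative code, not tus-js-client's actual implementation): incoming Buffers are accumulated until chunkSize bytes are available, and each full chunk becomes one request payload, which is why the original Buffer boundaries are lost.

```javascript
// Coalesce/split incoming Buffers into fixed-size request payloads.
function chunkify(buffers, chunkSize) {
  const out = [];
  let pending = Buffer.alloc(0);
  for (const buf of buffers) {
    pending = Buffer.concat([pending, buf]);
    while (pending.length >= chunkSize) {
      out.push(pending.subarray(0, chunkSize)); // payload of one request
      pending = pending.subarray(chunkSize);
    }
  }
  if (pending.length > 0) out.push(pending); // final partial chunk at EOF
  return out;
}

// Small Buffers are coalesced; large ones are split across requests:
const payloads = chunkify([Buffer.from('abc'), Buffer.from('defgh')], 4);
// payloads hold 'abcd' and 'efgh'
```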

So, I guess tus-js-client might not be able to do what you are looking for. I am not sure what data you are transmitting, but tus is not intended for real-time data transfer in the sense of video calls. Hope this helps.

piranna (Author) commented Oct 18, 2022

it specifies the amount of data that might have to be retransmitted in the case of a network error.

So, it's just the size of the client-side buffer? The docs led me to think otherwise, since they talk about adjusting it to the preferred chunk size for the actual platform being used.

I am not sure what data you are transmitting, but tus is not intended for real-time data transfer in the sense of video calls.

Exactly like that: uploading real-time video for LL-HLS, so I need as little delay as possible. I wanted to use tus in my demo as a showcase of a standard protocol, especially since it provides feedback on how much data has been acknowledged, so I can clean up sender buffers. WebTransport could be another alternative (maybe better for my use case?), but it's not widely adopted and there seems to be no implementation for Node.js yet.

Acconut (Member) commented Oct 21, 2022

So, it's just the size of the client-side buffer? The docs led me to think otherwise, since they talk about adjusting it to the preferred chunk size for the actual platform being used.

The chunkSize option in tus-js-client is both: the client buffer size (if a buffer is needed) and the maximum request payload size.

upload of real-time videos for LL-HLS

I see and understand the requirements of your situation. Real-time video streaming is not really my area of expertise, so I cannot help you a lot here. However, tus and tus-js-client have not really been developed for live video streaming. For file uploads, we require that the resource is transferred as-is without losing a single byte. However, AFAIK, for video streaming it is acceptable if single frames are lost, as the bitrate is always adapted to the connection throughput. So I can imagine that tus has a different design goal than what you are looking for right now.

Let me know what you think!

piranna (Author) commented Oct 21, 2022

However, AFAIK, for video streaming it is acceptable if single frames are lost as the bitrate is always adapted to the connection throughput.

It depends! :-) For sending it to final clients, having some losses is acceptable because it's real-time and low latency is the most important metric. But in this case, we are uploading the content "as is" so that it can later be referenced with byte ranges in an HLS playlist. I also plan to use the uploaded content for LL-HLS (Low Latency HLS) streaming, but since I keep the data buffered on my server and don't delete it until I have confirmation it has been properly uploaded to the CDN, it's safe to have some extra delay. For regular HLS / VoD streaming, there's no problem at all with having some delay, and video streams can be treated as regular data files :-)

Acconut (Member) commented Oct 22, 2022

OK, that makes sense. From that perspective, tus is a reasonable fit for the video upload. But tus-js-client may not be the best option for you because of its current buffering behavior.
