Compressed data & chunk size fails fetch #895
Comments
Hmm, I'm somewhat sure that I had received compressed chunks when experimenting with streaming downloads, otherwise I wouldn't have gone to such lengths describing that scenario here: Lines 600 to 638 in 751fc4c
If the server's answer to a HEAD request says that the data will be sent compressed, but it then doesn't send compressed chunks, then currently sokol_fetch.h indeed cannot know when the download has finished. The streaming sample here doesn't seem to use compression (e.g. the HEAD request returns the actual uncompressed data size, probably because compression is deactivated for MPEG files): https://floooh.github.io/sokol-html5/plmpeg-sapp.html

If it's only about detecting when the streamed download is complete, then I can probably look at the Content-Range response header, since the part after the slash is the overall size; it's possible to just look at the chunk's Content-Range to tell when the last chunk has arrived. That sounds like a plan. I need to look into sokol_fetch.h again soonish anyway because of #882.
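The completion check sketched above can be illustrated with a small, self-contained parser. This is a hypothetical sketch, not sokol_fetch code: it parses a `Content-Range` header like `bytes 0-1023/52000` (the part after the slash is the overall size) and reports whether the chunk is the last one.

```c
#include <stdio.h>
#include <stdbool.h>

/* Hypothetical sketch (not sokol_fetch code): parse a 206 response's
   Content-Range header, e.g. "bytes 0-1023/52000", and decide whether
   this chunk is the final one. Because the total after the slash is the
   overall size, completion can be detected even when the HEAD request
   reported a compressed Content-Length. */
static bool range_is_final_chunk(const char* content_range) {
    unsigned long first = 0, last = 0, total = 0;
    if (sscanf(content_range, "bytes %lu-%lu/%lu", &first, &last, &total) != 3) {
        return false;   /* malformed header, or total size unknown */
    }
    return (last + 1) == total;
}
```

A server that doesn't know the total size sends `bytes 0-1023/*`; the `sscanf` fails on that form, so the sketch conservatively reports "not final".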
Thanks for looking at this. It appears that when a HEAD request is issued, the response will contain the compressed Content-Length, while the response to a range GET will contain uncompressed data with no Content-Encoding field. In such cases the reported length doesn't match the data actually delivered. For ranges, I'm thinking the server can opt out of compression. I tried requesting compression, but it still returned uncompressed data.
BTW, if you're going to be looking at fetch sometime, can you have a quick look at the case where a buffer is not pre-assigned? I tried the method of allocating the buffer in the response callback, but had problems with it.

BTW2: for the time being I have a workaround for the range problem. It turns out I only need chunks for streaming media, which is already compressed. Fetching small text files doesn't need chunks, as they always fit in my buffer anyhow; for now I just set chunk_size to 0 for those.

BTW3: it would be nice to know whether ranges can indeed be compressed, and whether the server can opt to compress each range separately. I read somewhere that some CDNs do this. I have had a look around and can't find anything definitive in this area. Seems to be a bit of a hole in the specifications. Thanks.
...hmm, the cgltf-sapp.c sample works like that. The sfetch_send() calls don't assign a buffer, and the buffer is assigned inside the response callback when the response is in the dispatched state, using the channel and lane indices to select a buffer. Are you using it differently? (If yes, the documentation probably needs to be improved.)
Thanks for checking this. I tried it again. Yes, the problem only occurs when you have a nonzero chunk_size.
Also, am I right in thinking that assigning the buffer in the dispatched state will cause an additional frame delay? If so, I'll probably pre-assign the buffer anyhow.
It actually shouldn't, because the dispatch callback is 'short-circuited' as soon as a lane is assigned to the request, before it is enqueued for processing, so there's no extra round trip involved. (The channel and lane indices let you pick a buffer which will only be written to by this specific request, because it's guaranteed that no other request is in flight with the same channel/lane combination.) Lines 2485 to 2491 in b803c9a
I'm having a problem with emscripten sokol_fetch and compressed data with a chunk size:

Sokol issues a HEAD request and gets the compressed content length. Sokol then issues a range GET and gets uncompressed data, and the server does not compress it (no Content-Encoding field). The range requested is interpreted as a range of the uncompressed data, so here we get the first 1K of 52K. But Sokol stops fetching after 18042 bytes of uncompressed data, and the download is incomplete.

I don't know if this is a server problem or a Sokol problem, but it would seem the server always has the option to send the data uncompressed anyway, and this is what it is doing.

Also, would it ever be the case that ranges are compressed? For example, does the server have the option to compress each range separately, and therefore report a Content-Length completely different both from the range request and from any HEAD request? And if a range within a file were requested, how would it ever be possible to receive uncompressed data in the buffer? So I don't think the fetch buffer ever needs to be bigger than the chunk size, except for chunk_size=0.
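The failure mode above can be reproduced with a small simulation. This is an illustrative sketch, not sokol_fetch's actual code; the numbers come from the report (a 52K file served as uncompressed 1K range chunks, while the HEAD request reported a compressed Content-Length of 18042): a client that trusts the HEAD size stops requesting ranges at 18042 bytes and never fetches the rest of the file.

```c
#include <stddef.h>

/* Hypothetical sketch (not sokol_fetch's actual code) of the failure mode:
   the client believes the total size reported by the HEAD request (the
   compressed size), while the server delivers uncompressed range chunks
   out of a larger file, so the client stops requesting ranges too early. */
static size_t simulate_fetch(size_t reported_total, size_t actual_total,
                             size_t chunk_size) {
    size_t fetched = 0;
    while (fetched < reported_total) {
        /* the client asks for the next range, never past its believed total */
        size_t want = reported_total - fetched;
        if (want > chunk_size) {
            want = chunk_size;
        }
        /* the server clamps the range to the bytes that actually exist */
        size_t avail = (fetched < actual_total) ? (actual_total - fetched) : 0;
        size_t got = (want < avail) ? want : avail;
        if (got == 0) {
            break;
        }
        fetched += got;
    }
    return fetched; /* bytes delivered before the client stops */
}
```

With a reported total of 18042 against an actual 53248 bytes (52K), the simulated client stops at exactly 18042 bytes; only when the reported and actual totals agree does the whole file arrive.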
.The text was updated successfully, but these errors were encountered: