
strategy for stream chunk-split structures #41

Open
jeromew opened this issue Jun 26, 2020 · 2 comments

Comments

jeromew commented Jun 26, 2020

Hello,

When working with network streams, binary protocol messages often get split into chunks at arbitrary points by routers or whatever else sits on the path between the client and the server.

I understand that restructure decoding works via DecodeStream(buffer), and that the buffer needs to be aligned with the structure boundaries (maybe my understanding is not correct).

The closest explanation/partial solution to what I mean is described at https://stackoverflow.com/questions/52267098/whats-the-fastest-way-to-parse-node-js-buffer-stream-chunks-binary-data-into-s/52333431

Is there a plan to make DecodeStream work with streams and be aware of boundary issues?
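
To illustrate, here is the kind of buffering I have in mind, assuming a fixed-size record (the Record struct, the 8-byte size, and the socket/handle names are made up for the example; only DecodeStream and Struct decoding are actual restructure API):

```js
const r = require('restructure');

// Illustrative 8-byte record: two big-endian uint32 fields.
const Record = new r.Struct({
  id: r.uint32,
  value: r.uint32
});
const RECORD_SIZE = 8;

let pending = Buffer.alloc(0);
socket.on('data', (chunk) => {
  // Keep every byte received so far, regardless of chunk boundaries.
  pending = Buffer.concat([pending, chunk]);
  // Only decode once a full record has been buffered.
  while (pending.length >= RECORD_SIZE) {
    const stream = new r.DecodeStream(pending.slice(0, RECORD_SIZE));
    handle(Record.decode(stream)); // handle() is hypothetical
    pending = pending.slice(RECORD_SIZE);
  }
});
```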

mykiimike commented Jun 28, 2020

Is there a plan to make DecodeStream work with streams and be aware of boundary issues?

To do so, a field containing the packet size must be placed at the beginning of each packet, much like what VersionedStruct does with the version number. This affects the design of the stream protocol itself, so it is not a small change :(
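
For example, something like this on the receiving side (a sketch; the 2-byte big-endian size prefix and handlePacket are assumptions about the protocol, not restructure API):

```js
let pending = Buffer.alloc(0);
socket.on('data', (chunk) => {
  pending = Buffer.concat([pending, chunk]);
  // A frame is: uint16be payload size, then the payload itself.
  while (pending.length >= 2) {
    const size = pending.readUInt16BE(0);
    if (pending.length < 2 + size) break; // incomplete frame: wait for more chunks
    const frame = pending.slice(2, 2 + size);
    pending = pending.slice(2 + size);
    handlePacket(frame); // hypothetical: decode the aligned frame with restructure
  }
});
```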

jeromew commented Jun 30, 2020

If I understand correctly, you suggest adding the packet size at the start of each packet. That would be a modification of the network protocol.

I was trying to understand how, given an existing protocol, I could use restructure to parse it, knowing that the chunks I receive are not always aligned with the structure definitions.

It seems like https://www.npmjs.com/package/binary-data handles this by waiting for additional chunks when there is not enough data.
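
If I read it right, the general pattern is something like this (a sketch of the idea only, not binary-data's actual API; tryDecode and handle are hypothetical):

```js
let pending = Buffer.alloc(0);
socket.on('data', (chunk) => {
  pending = Buffer.concat([pending, chunk]);
  let offset = 0;
  for (;;) {
    // tryDecode is hypothetical: it either decodes one structure starting
    // at offset, or returns null when the buffer ends mid-structure.
    const result = tryDecode(pending, offset);
    if (result === null) break;  // not enough data yet: keep bytes, wait for next chunk
    handle(result.value);        // handle() is hypothetical
    offset += result.bytesRead;
  }
  pending = pending.slice(offset); // drop fully consumed bytes
});
```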
