Improve expect 100-continue handling #4488

Open
kanongil wants to merge 1 commit into master

Conversation

kanongil (Contributor)

This fixes two issues in the existing handling.

The most serious is the one exposed by the "handles expect 100-continue on undefined routes" test. These routes bypass most of the normal processing, so the server never signals the client to continue. This causes an infinite stall when internals.drain() is called, since the client has not been instructed to send any data.

The other issue is that when request.payload is overridden, hapi will still ask the client to continue sending the payload, even though it will never be used.

The first test is just to verify that hapi will skip requesting the client to continue when processing is aborted before the "payload processing" stage.
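
For reference, here is a minimal sketch of the 100-continue handshake this description hinges on, written against plain node:http rather than hapi internals (the server and client below are illustrative only, not from this PR): the client withholds its payload until the server replies with 100 Continue, which is why a route that never signals continue, and never drains, leaves the exchange stalled.

```js
'use strict';

const Http = require('http');

const server = Http.createServer();

// Requests sent with "Expect: 100-continue" surface here instead of 'request'.
server.on('checkContinue', (req, res) => {

    res.writeContinue();                    // Without this, the client never sends the body
    req.resume();                           // Consume (drain) the incoming payload
    req.on('end', () => res.end('ok'));
});

server.listen(0, () => {

    const req = Http.request({
        port: server.address().port,
        method: 'POST',
        headers: { Expect: '100-continue', Connection: 'close' }
    });

    req.flushHeaders();                     // Make sure the header block (with Expect) goes out before any body

    // The client only writes the payload once the server has said "continue".
    req.on('continue', () => req.end('some payload'));

    req.on('response', (res) => {

        res.resume();
        res.on('end', () => server.close());
    });
});
```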

@kanongil added the bug (Bug or defect) label on Mar 15, 2024
@kanongil (Contributor, Author)

The Node 18 test failures are not related to this patch. It seems that Node 18 has been updated to no longer process requests after the listener has been closed.

-    if (request._isPayloadPending) {
-        await internals.drain(request);
-    }
+    await internals.drain(request);
Contributor (reviewer)

Do we have a guarantee that there's something to drain? I feel that this function is too brittle; should we use stream.finished to make sure all scenarios are covered?
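
For illustration, this is roughly what a finished()-based drain could look like. It is a sketch only; internals.drain is the name used in this thread, but the body below is not hapi's code.

```js
'use strict';

const { finished } = require('stream/promises');

// Sketch of a drain helper built on stream.finished(); not hapi's actual
// implementation, just the shape being suggested above.
const drain = async (stream) => {

    stream.resume();                  // Discard whatever payload is still pending

    try {
        await finished(stream);       // Settles on 'end'/'close'; rejects on error or premature close
    }
    catch {
        // Ignore failures while discarding an unwanted payload
    }
};
```

One property worth noting: unlike waiting for 'end' directly, finished() should also settle for a stream that has already errored or been destroyed, rather than hanging.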

kanongil (Contributor, Author)

That is not a concern introduced by this PR, right?

I tend to agree. The current drain() logic requires the stream to be active in order to work. An already closed stream would stall forever.

I would be wary of relying on stream.finished(), as I have seen streams that cause issues with it in the past.
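
To make the stall scenario concrete, here is a rough stand-in for an event-based drain (not hapi's actual internals.drain()): a drain that waits for 'end' never resolves if the stream has already ended or been destroyed.

```js
'use strict';

// Rough stand-in for an event-based drain (not hapi's actual code). If the
// stream has already emitted 'end' or has been destroyed, the 'end' listener
// below never fires and the returned promise never settles.
const drain = (stream) => {

    return new Promise((resolve) => {

        stream.once('end', resolve);
        stream.resume();              // Flow the remaining data so 'end' can fire
    });
};
```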

Contributor (reviewer)

Not from this PR, no, but removing that conditional made me look further into whether it was solid. I hope there's something to consume there; otherwise it would indeed stall.

@kanongil (Contributor, Author), Apr 10, 2024

FYI, draining might not be the best approach, but I retained the current behaviour.

The alternative is to just .destroy() the stream, which should work fine, but for HTTP/1 it will also destroy the connection. Draining is more graceful, but can needlessly use up bandwidth. It works best for small pending payloads.
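
For comparison, a sketch of the two options discussed here, applied to the raw request stream (the helper name is generic, not a hapi internal):

```js
'use strict';

// Illustrative only; not hapi code. Shows the trade-off discussed above when
// an unwanted payload is still pending on the request stream.
const abandonPayload = (req, graceful = true) => {

    if (graceful) {
        req.resume();        // Drain: keeps the HTTP/1 connection reusable, but reads bytes that are thrown away
        return;
    }

    req.destroy();           // Destroy: immediate, but tears down the underlying HTTP/1 connection
};
```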
