Why is h2o so fast? #1497
My siege benchmark results on an UpCloud 2 GB instance using a LAN connection.

Command: `siege -v -d1 -c250 -i -f siege.txt` (siege.txt has 65 URLs, all cached and served by the server itself)

h2o: 12766 transactions (hits)
nginx: 9909 transactions (hits)
Caddy: 12145 transactions (hits)
That means that when H2O receives a request, the HTTP parser itself doesn't allocate any memory beyond what is necessary to receive the request. The HTTP/1 parser that H2O uses is picohttpparser, which has its own repository: https://github.com/h2o/picohttpparser . The source is small, and it has small tests (https://github.com/h2o/picohttpparser/blob/master/test.c) that demonstrate its use.
@sudarsha Are these the results for HTTP/2?
Siege doesn't support it. h2load does, but I haven't gotten around to configuring it.
@deweerdt And does "stateless" mean it is fast? My English isn't strong; I'm sorry if my question sounded awkward.
I am trying to learn why h2o is fast.
When I read the slides or the repository's README, it says it is "fast" because: "Unlike most parsers, it is stateless and does not allocate memory by itself. All it does is accept pointer to buffer and the output structure, and setups the pointers in the latter to point at the necessary portions of the buffer."
I do not understand what this means. I would really appreciate a detailed explanation.