
slow image serving with (changed) static_files example #243

Closed
haheute opened this issue Mar 26, 2017 · 4 comments
Labels: question (A question; converts to discussion)

Comments

@haheute

haheute commented Mar 26, 2017

I'm trying to learn Rocket and I'm seeing partially slow image (.jpg) serving in my test application.
I modified the static_files example to check whether the same thing happens there.

I added this to the HTML file:

    <img src="flvr9704_tn.jpg">
    <img src="flvr9703_tn.jpg">
    <img src="flvr9702_tn.jpg">
    <img src="flvr9701_tn.jpg">
    <img src="flvr9700_tn.jpg">
    <img src="flvr9698_tn.jpg">
    <img src="flvr9697_tn.jpg">
    <img src="flvr9696_tn.jpg">
    <img src="flvr9694_tn.jpg">
    <img src="flvr9693_tn.jpg">

and copied the corresponding thumbnail images (300x200 px JPEGs) into the static folder of the example.
It takes about 5 seconds until all of the photos appear in the browser. The first 8 load quickly, then there is a pause of a few seconds, and the same delay shows up before the remaining requests are logged in the terminal:

GET /flvr9693_tn.jpg:
GET /flvr9697_tn.jpg:
GET /flvr9698_tn.jpg:
    => Matched: GET /<file..>
    => Matched: GET /<file..>
    => Outcome: Success
    => Matched: GET /<file..>
    => Outcome: Success
    => Outcome: Success
    => Matched: GET /<file..>
    => Response succeeded.
    => Outcome: Success
    => Response succeeded.
    => Response succeeded.
    => Response succeeded.
GET /flvr9694_tn.jpg:
GET /flvr9700_tn.jpg:
    => Matched: GET /<file..>
    => Matched: GET /<file..>
    => Outcome: Success
    => Outcome: Success
    => Response succeeded.
    => Response succeeded.

I don't know what could be causing this; perhaps someone can help?
(I updated to 0.2.3 with cargo and did a git pull in the rocket directory, so everything is up to date.)
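
For reference, I did not change the route itself; it is roughly the one from the static_files example (written from memory, so details may differ slightly):

    #![feature(plugin)]
    #![plugin(rocket_codegen)]

    extern crate rocket;

    use std::path::{Path, PathBuf};
    use rocket::response::NamedFile;

    // This is the route that shows up as `GET /<file..>` in the log above:
    // the requested path is looked up relative to the static/ directory.
    #[get("/<file..>")]
    fn files(file: PathBuf) -> Option<NamedFile> {
        NamedFile::open(Path::new("static/").join(file)).ok()
    }

    fn main() {
        rocket::ignite().mount("/", routes![files]).launch();
    }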

@SergioBenitez
Member

SergioBenitez commented Mar 26, 2017

What's almost certainly happening here is that your browser is opening one connection per file and using keep-alive to keep the connection open even after an image has loaded. Once there are as many connections as there are workers, the server won't accept new connections until an existing connection closes. Because the keep-alive timeout is set to five seconds, you'll get a new connection in about that time, and thus, see a pause of ~5 seconds. Your machine probably has 8 logical cores.

This issue is caused by the synchronous nature of the server. Once Rocket is fully asynchronous (#17), this won't happen. Until then, you have two options:

  1. Increase the worker count to a high number, say, 128. You can do this by setting ROCKET_WORKERS=128 or via the workers config variable in Rocket.toml (see the snippet after this list). This is a good choice during development.
  2. Place Rocket behind a reverse proxy, like NGINX. The reverse proxy will handle client connections asynchronously (and can even speak HTTP/2) while proxying each request to Rocket over an HTTP/1.1 connection that it closes each time, completely circumventing this issue. This is the recommended solution for public-facing applications.
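
For option 1, the change to Rocket.toml looks like this (a minimal sketch; the [development] section applies to the default development environment, and setting ROCKET_WORKERS=128 in the environment has the same effect):

    # Rocket.toml
    # Raise the worker count so idle keep-alive connections don't exhaust the pool.
    [development]
    workers = 128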

SergioBenitez added the question label on Mar 26, 2017
@haheute
Author

haheute commented Mar 26, 2017

Thank you. With workers = 128 the images appear immediately. With ROCKET_LOG=debug I also saw some keep-alive messages that I did not understand, like:
ioerror in keepalive loop = Error { repr: Os { code: 11, message: "Resource temporarily unavailable" } }

@Type1J

Type1J commented Jan 26, 2020

@SergioBenitez, how is the async conversion going? Is there anything that you need tested?

@jebrosen
Collaborator

#1065 is the async tracking issue. I think the checklist is a bit outdated; I can share an update in a day or two. As for this specific bug report, I have not tested it individually, but it should be resolved because the number of concurrent requests is no longer limited by workers.

ZNielsen added a commit to ZNielsen/PodRacer that referenced this issue Apr 26, 2021
Per rwf2/Rocket#243, attempting to
fix this by jacking up the number of workers. Once async is done, this
shouldn't be needed.