Documentation: Server requirements/recommendations #1211
Hey @schonert!

TL;DR: it's better to have multiple small instances than a single big one. The rule of thumb is 2 CPUs per instance and 2 workers (concurrency) per CPU. If the average download time is much shorter than the average processing time, it makes sense to lower the workers-per-CPU number and spin up more instances.

imgproxy itself and its dependencies have multiple global mutexes under the hood, and the system thread scheduler isn't completely free either, so multiple processes perform better than a single process. The best setup for a uniform load is 1 instance per CPU. However, most load balancers are pretty simple and use a round-robin algorithm. This can lead to a situation where an instance is busy processing a heavy request but the load balancer routes another request to it, so it's better to have a spare CPU.

imgproxy doesn't use the GPU, so there's no sense in spending money on one. Memory size doesn't affect performance either: just use enough memory to store the source image file, the decoded source image, and the resulting image file, and it should be fine. If you use ML options, you'll also need enough memory to fit the neural network models, though the models distributed with imgproxy Pro are pretty small. Since you have been running imgproxy for quite a while, you probably have a good idea of how much memory it uses in your case.

What really affects performance is the type of CPU:
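As a rough sketch, the rule of thumb above can be expressed as a startup script. This assumes the `IMGPROXY_WORKERS` environment variable from imgproxy's configuration docs and a Linux host with `nproc`; the 2× multiplier is just the heuristic from this thread, not a hard requirement:

```shell
#!/bin/sh
# Sizing sketch: 2 workers per CPU, per the rule of thumb above.
# Lower the multiplier (and run more instances) if downloads are
# much faster than processing.
CPUS=$(nproc)            # CPUs available to this instance
WORKERS=$((CPUS * 2))    # heuristic: 2 concurrent workers per CPU

export IMGPROXY_WORKERS=$WORKERS
echo "starting imgproxy with $IMGPROXY_WORKERS workers on $CPUS CPUs"
imgproxy
```

With the suggested 2-CPU instances this yields 4 workers per instance; scaling out then means adding instances behind the load balancer rather than growing this number.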
Brilliant reply @DarthSim - appreciated! We're slowly migrating away from AWS to Hetzner. However, after some testing, we're going to keep our imgproxy instances on AWS to keep them close to our S3 bucket. Even though the new VPSes are close to the S3 origin, they added 400-500 ms to the processing time. I assume this is due to the VPS not being on the same network as S3. I'll see if we can move over to some c7g's - great insights. Thanks!
We're moving around a bit and would like to see if there would be any gains in switching to a different server type.
It would be great to have a bit of documentation on requirements and recommendations.
With recommendations, I'm curious about: