Flask SocketIO Gunicorn Multiple Workers behind NGINX with Message Queue, correct understanding? #2045
-
Hi @miguelgrinberg, we are currently running our app with gunicorn and eventlet (`gunicorn --worker-class eventlet -w 1 module:app`) behind an nginx reverse proxy. All is well, except we feel one worker might not be enough performance-wise, and we need to increase the number. If I understand the Deployment section of the documentation (https://flask-socketio.readthedocs.io/en/latest/deployment.html) correctly, adding a message queue would allow us to run multiple workers together. Does that mean (1) running several separate `gunicorn -w 1` processes, each listening on its own port, or (2) running a single Gunicorn server with multiple workers?
If the answer is 1, then I assume the different `gunicorn -w 1` processes could also be hosted on different machines, as long as our nginx is configured correctly. Thanks for confirming our understanding, and thanks again for the great lib.
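For context, the message-queue setup described in the documentation comes down to pointing every server process at the same queue. A minimal sketch, assuming a Redis instance on localhost (the module name `module` and the Redis URL are illustrative, not from the original post):

```python
# module.py -- minimal Flask-SocketIO app wired to a shared message queue.
# Every Gunicorn process started from this module relays events through
# Redis, so a broadcast emitted by one process reaches clients connected
# to any other process.
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)

# message_queue is the documented Flask-SocketIO parameter; the Redis URL
# is an assumption -- adjust it for your deployment.
socketio = SocketIO(app, message_queue="redis://localhost:6379/0")


@socketio.on("ping_all")
def ping_all(data):
    # Broadcast to all connected clients, across every process, via the queue.
    socketio.emit("pong", data)
```

Each process would then be started on its own port, e.g. `gunicorn --worker-class eventlet -w 1 --bind 127.0.0.1:5000 module:app`, `... --bind 127.0.0.1:5001 ...`, and so on. (Deployment fragment: it needs a running Redis server and Gunicorn to exercise, so no standalone test is given.)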
-
Option 1 is preferred, because it is fully compatible with the Socket.IO protocol. You can run multiple Gunicorn processes all on the same machine, each listening on a different port. Normally you'd want at least as many server processes as you have CPUs. Option 2 can work too, but it is incompatible with the long-polling transport of Socket.IO. As long as you configure all your clients to connect directly through the WebSocket transport everything should work well, but long-polling clients will get 400 errors and will be unable to connect. In this case you will still need the message queue, but you can run a single Gunicorn server with multiple workers, probably as many as you have CPUs or more.
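For reference, the nginx side of Option 1 can be sketched roughly as below, assuming two Gunicorn processes on ports 5000 and 5001 (ports and the upstream name are illustrative). `ip_hash` gives the sticky sessions that long-polling needs, and the `Upgrade`/`Connection` headers let WebSocket connections pass through:

```nginx
upstream socketio_nodes {
    ip_hash;                  # sticky sessions: same client -> same process
    server 127.0.0.1:5000;    # e.g. gunicorn --worker-class eventlet -w 1 --bind 127.0.0.1:5000 module:app
    server 127.0.0.1:5001;    # second process, on its own port
}

server {
    listen 80;

    location /socket.io {
        proxy_pass http://socketio_nodes;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;   # WebSocket upgrade
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

All processes still share the same message queue so broadcasts reach clients on every node.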