Hardware Requirements

Jakob Ackermann edited this page May 31, 2023 · 9 revisions

When provisioning hardware to run Overleaf, the main factor to consider is how many concurrent users will be running compiles.

For example, if you have a license for 100 total users, but only expect ~5 to be working at the same time, the minimal install will be sufficient. If you expect a higher proportion to be working (and compiling) simultaneously, you should consider provisioning a higher-specced server.

Minimal install

A base of 2 CPU cores and 3GB of memory is required for basic operation with around 5 concurrent users. This minimum will also be sufficient for larger groups with less concurrent usage, or where longer compile times during heavier usage are acceptable.

If you are considering using an NFS-based filesystem for your small instance, please see the relevant section in the Troubleshooting page.

Scaling

As a rule of thumb, to provide a high and consistent level of service, 1 CPU core and 1GB of memory should be added to the minimal install for every 5-10 concurrent users.

This should only be taken as a guide, as factors such as the size of typical documents (larger documents use up more compile resources), how often users compile, and what tolerance there is for longer compile times during heavy usage, all affect the level of provisioning required.

Example: if you are running an Overleaf Server Pro installation for 300 total users, and regularly expect 30-60 of those users to be compiling documents at the same time, 7 cores and 8GB of memory (the base of 2 cores & 3GB, plus 5 cores & 5GB) should provide sufficient resources for your users to have a consistently high level of service.
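The rule of thumb above can be sketched as a small calculation. The concurrency figure and the users-per-core ratio below are assumptions to adjust for your own workload:

```shell
# Hypothetical sizing helper for the rule of thumb above:
# base of 2 cores + 3GB, plus 1 core and 1GB per 5-10 concurrent users.
concurrent=50   # expected peak number of concurrently compiling users (assumption)
per_unit=10     # users per extra core/GB; use 5 for heavier workloads
extra=$(( (concurrent + per_unit - 1) / per_unit ))   # round up
echo "suggested: $((2 + extra)) cores, $((3 + extra))GB memory"
```

With `concurrent=50` and `per_unit=10` this prints `suggested: 7 cores, 8GB memory`, matching the worked example above.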

Large deployments

Many of our customers look to deploy Overleaf Server Pro organization wide, or across large teams. In those situations, it is difficult for us to advise on specific setup requirements, because the use cases and underlying hardware available can be quite varied.

Example: a Server Pro installation for 1,000 total users has been set up successfully using a single server provisioned with two 4-core processors and 32GB of system memory. This was sufficient for the team's needs over the past year of usage.

Request for examples: If you have installed Overleaf at your organization and would like to share details of your set up to help us add to this section, please let us know.

Customers who are exceeding the limits of a single large server can take a look at Horizontal Scaling for Server Pro.

Storage

We advise against using NFS, Amazon EFS, or Amazon EBS for project/history storage in larger setups, and we explicitly do not support them for horizontal scaling. These file systems do not provide the performance and reliability that Server Pro needs when running at high scale. When the file system cannot keep up with the load, the application stalls on blocking IO operations. These stalls can lead to overrun redis-based locks, which in turn can result in corrupted project data.

We advise using S3-compatible object storage instead. Slow S3 performance only affects the upload/download of files, which leads to an elevated number of open connections to your S3 provider but does not affect the behaviour of the rest of the application. Additionally, Server Pro can set reasonable timeouts on S3 requests, which is not possible for file system IO operations at the application level.

For reference, GitLab takes a similar stance, not supporting NFS/Amazon EFS with their self-managed offering.

Nginx specific configuration for large deployments

By default, Server Pro limits the number of connections to 768. This includes persistent WebSocket connections, top-level HTML navigation, and AJAX requests. Once the limit is hit, the editor may fail to connect, the editor page may not load fully, and compile requests can fail. Nginx will return status 500 responses and log "worker_connections are not enough while connecting to upstream" to /var/log/nginx/error.log inside the sharelatex container.
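You can check for that log message directly. The sketch below matches a sample line offline; on a live install you would instead grep /var/log/nginx/error.log inside the sharelatex container (e.g. via docker exec):

```shell
# Sample of the nginx alert emitted when the connection limit is hit
# (timestamp and process ids are illustrative).
line='2023/05/31 12:00:00 [alert] 1#1: 768 worker_connections are not enough while connecting to upstream'

# The same pattern a live check would grep for in /var/log/nginx/error.log
# inside the sharelatex container.
if printf '%s\n' "$line" | grep -q 'worker_connections are not enough'; then
  echo 'limit reached: consider raising NGINX_WORKER_CONNECTIONS'
fi
```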

The worker_connections setting limits the number of concurrent connections nginx will accept per worker. The number of workers is controlled by the worker_processes setting, which defaults to 4 in our nginx configuration.

Nginx does comparatively little work next to other parts of the system, so these limits act as a safety measure, preventing too many connections from overwhelming the system. It is preferable to drop some excess connections early rather than slow down every connection.

Server Pro exposes environment variables for adjusting these nginx settings:

  • NGINX_WORKER_PROCESSES for worker_processes (default 4)

  • NGINX_WORKER_CONNECTIONS for worker_connections (default 768)

  • NGINX_KEEPALIVE_TIMEOUT for keepalive_timeout (default 65)

    Note: When running another proxy in front of the sharelatex container (e.g. for TLS termination), the NGINX_KEEPALIVE_TIMEOUT in Server Pro needs to be larger than the keepalive timeout of the proxy in front of it. E.g. with another nginx process "nginx-host" running on the docker host, here are two examples:

    • default value NGINX_KEEPALIVE_TIMEOUT (65), use keepalive_timeout 60s (the upstream nginx default) in "nginx-host"
    • custom value NGINX_KEEPALIVE_TIMEOUT=100s, use keepalive_timeout 90s in "nginx-host"
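These variables are set on the container like any other Server Pro setting. The following deployment fragment is a sketch, not a definitive invocation: the chosen values are assumptions for a larger deployment, and the image tag and other required options (ports, volumes, etc.) depend on your setup:

```shell
# Illustrative only: raise the nginx limits for a larger deployment.
# The values here are assumptions; add your usual ports/volumes/settings.
docker run -d --name sharelatex \
  -e NGINX_WORKER_PROCESSES=8 \
  -e NGINX_WORKER_CONNECTIONS=1536 \
  -e NGINX_KEEPALIVE_TIMEOUT=65 \
  quay.io/sharelatex/sharelatex:latest
```

If you use docker-compose, the same variables go in the environment section of the sharelatex service instead.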

CPU speed

LaTeX is a single-threaded program, meaning it can only utilize one CPU core at a time, and the CPU is the main limitation when compiling a document. Therefore, the faster your CPU's single-core performance, the faster a document will compile. More cores only help if you are trying to compile more documents than you have free CPU cores.
