Add default resource request/limits for SpiceDB Cluster #247

Open · jawnsy opened this issue Aug 20, 2023 · 3 comments

jawnsy (Contributor) commented Aug 20, 2023

Summary

Add CPU and memory request/limits for SpiceDB Cluster deployment

Details

The SpiceDB Deployment manifests are missing resource specs, so the resulting pods run with BestEffort quality of service. Picking a reasonable default may be non-trivial, since resource usage differs across user workloads.

For completeness: adding a request would be useful to force the cluster autoscaler to make room for the operator (guaranteeing that some forward progress will be made), and limits would be useful to make issues like memory leaks more apparent.
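
For reference, the QoS class a pod ends up with can be checked directly (the pod name here is a placeholder):

    kubectl get pod <spicedb-pod> -o jsonpath='{.status.qosClass}'

Without any resource specs, this prints BestEffort.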

Proposal

Add a small request and a reasonably high limit to the Deployment pod spec, based on current usage. Since appropriate values vary by workload, having an option in the CRD to override them seems prudent.

As a starting point, something like this seems suitable:

    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "1Gi"
        cpu: "2000m"
ecordell (Contributor) commented Aug 20, 2023

This is possible today via the patches API:

spec:
  patches:
  - kind: Deployment
    patch:
      spec:
        template:
          spec:
            containers:
            - name: spicedb
              resources:
                requests:
                  memory: "256Mi"
                  cpu: "250m"
                limits:
                  memory: "1Gi"
                  cpu: "2000m"

Although as you note, it's usually better to run SpiceDB in a guaranteed QoS class, and for production clusters we tend to use static cpu allocation as well.
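
As a rough sketch of what that looks like (the values are illustrative placeholders, not a sizing recommendation): Guaranteed QoS requires requests and limits to be equal for every container, and the kubelet's static CPU manager only grants exclusive cores to Guaranteed pods that request whole CPUs:

spec:
  patches:
  - kind: Deployment
    patch:
      spec:
        template:
          spec:
            containers:
            - name: spicedb
              resources:
                requests:
                  memory: "2Gi"
                  cpu: "2"
                limits:
                  memory: "2Gi"
                  cpu: "2"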

I have considered that it could be useful to have something like this instead, just to remove some nesting:

spec:
  patches:
  - container: spicedb
    patch:
      resources:
        requests:
          memory: "256Mi"
          cpu: "250m"
        limits:
          memory: "1Gi"
          cpu: "2000m"

but the current example is only a couple of nesting levels deeper than the Deployment API itself, which folks generally have no problem writing.

n0rthernstar commented Oct 5, 2023

Hello @ecordell, could you please clarify whether the example above needs to go in the YAML of kind: SpiceDBCluster:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: spicedb-cluster
  namespace: spicedb
spec:
  patches:
    - container: spicedb
      patch:
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "1Gi"
            cpu: "1000m"
  config:
    replicas: 1
    datastoreEngine: "postgres"
    telemetryEndpoint: ""
  secretName: spicedb-config

or in a different manifest?

ecordell (Contributor) commented Oct 5, 2023

Yep, it's in the SpiceDBCluster API.

Full example:

apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: spicedb-cluster
  namespace: spicedb
spec:
  config:
    replicas: 1
    datastoreEngine: "postgres"
    telemetryEndpoint: ""
  secretName: spicedb-config
  patches:
  - kind: Deployment
    patch:
      spec:
        template:
          spec:
            containers:
            - name: spicedb
              resources:
                requests:
                  memory: "256Mi"
                  cpu: "250m"
                limits:
                  memory: "1Gi"
                  cpu: "2000m"
