
Bug: Take Database Count Into Consideration #286

Open
alexlo03 opened this issue May 8, 2024 · 2 comments · May be fixed by #287

Comments


alexlo03 commented May 8, 2024

Hello and thank you 🙏

Error we are experiencing:

Last scaling request for {"processingUnits":100} FAILED: Too many databases to decrease instance size. Current: 74, limit: 10. Please reduce the number of databases first. The minimum number of processing units that this instance currently requires is 800.

From https://cloud.google.com/spanner/quotas#database-limits

Databases per instance
For instances of 1 node (1000 processing units) and larger: 100 databases
For instances smaller than 1 node: 10 databases per 100 processing units
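The quota rule above is what produces the "minimum of 800 processing units" in the error: a minimal sketch (this helper is hypothetical, not part of the autoscaler) of deriving the floor PU count from a database count:

```javascript
// Hypothetical helper: minimum processing units a Spanner instance
// needs to hold a given number of databases, per the quota above.
// - Below 1 node (1000 PUs): 10 databases per 100 PUs.
// - At 1 node or larger: 100 databases are allowed, so the
//   per-100-PU rule stops mattering at 1000 PUs.
function minPUsForDatabaseCount(databaseCount) {
  // 10 databases per 100 PUs => round up to the next 100-PU step.
  const pus = Math.ceil(databaseCount / 10) * 100;
  return Math.min(pus, 1000);
}

// The instance from the error report: 74 databases.
console.log(minPUsForDatabaseCount(74)); // 800, matching the error message
```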

Possible solutions:

  1. The error message reports the floor PU count, so the scaler could catch that specific error and set the reported minimum as the desired state. That would be around here: https://github.com/cloudspannerecosystem/autoscaler/blob/main/src/scaler/scaler-core/index.js#L681. The issue with this approach is that the autoscaler would keep deciding it wants to scale smaller and then receive the same error from Spanner on every attempt.
  2. Read the database count as part of the metrics and take it into account when computing the desired state (probably the superior option). One issue here is that the database count is not exposed as a Cloud Monitoring metric; it could perhaps be added to GetSpannerMetadata: https://github.com/cloudspannerecosystem/autoscaler/blob/main/src/poller/poller-core/index.js#L260
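Option 2 could be sketched like this (the function name and wiring are hypothetical, not the autoscaler's real API): if the poller also reported the instance's database count, the scaler could clamp its suggested size before issuing the request:

```javascript
// Sketch only: clamp a suggested size to the database-count quota,
// assuming databaseCount were made available alongside the metrics.
function clampToDatabaseLimit(suggestedPUs, databaseCount) {
  // Below 1 node, Spanner allows 10 databases per 100 PUs, so the
  // instance can never shrink below this floor; 1000 PUs (1 node)
  // already permits 100 databases.
  const floorPUs = Math.min(Math.ceil(databaseCount / 10) * 100, 1000);
  return Math.max(suggestedPUs, floorPUs);
}

// With 74 databases, a suggestion of 100 PUs is raised to 800;
// suggestions already above the floor pass through unchanged.
console.log(clampToDatabaseLimit(100, 74)); // 800
console.log(clampToDatabaseLimit(2000, 74)); // 2000
```

This avoids the retry loop of option 1 because the scaler never asks Spanner for a size it cannot grant.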
@nielm
Collaborator

nielm commented May 10, 2024

In the short term you can set minSize in your scaling configuration to 800 to prevent it from scaling smaller than that, but I understand that this is not a long term solution if the number of databases changes rapidly.
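For reference, the workaround looks something like this in the autoscaler's JSON scaling configuration (illustrative values; the project and instance IDs are placeholders):

```json
[
  {
    "projectId": "my-project",
    "instanceId": "my-instance",
    "units": "PROCESSING_UNITS",
    "minSize": 800,
    "maxSize": 2000
  }
]
```

With `minSize: 800`, the autoscaler never requests a size below the floor that 74 databases impose, at the cost of having to update the config by hand if the database count changes.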

@alexlo03
Author

Thanks, yes, we have taken that action.

nielm added a commit to nielm/autoscaler that referenced this issue May 10, 2024
Instances configured with <1000 PUs only support 10 DBs per 100 PUs
so clamp the minimum size to the relevant number of PUs

Fixes cloudspannerecosystem#286