add horizontal pod autoscaler for backend and frontend via helm charts #1633
Conversation
- use CPU and memory averages
- adjust base memory + CPU for backend
- thresholds set to 80% CPU and 95% memory utilization by default (configurable in values.yaml)
- instead of backend and frontend replica counts, set max replicas in values.yaml
- only enable HPA if backend_max_replicas or frontend_max_replicas is >1; default to 1 for now
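Based on the description above, the relevant values.yaml settings might look roughly like this (the key names here are inferred from the description and are illustrative, not necessarily the chart's actual keys):

```yaml
# Hypothetical values.yaml excerpt -- key names are assumptions, not the chart's actual keys.
backend_max_replicas: 1    # >1 enables the backend HPA
frontend_max_replicas: 1   # >1 enables the frontend HPA

autoscaling:
  cpu_threshold: 80        # target average CPU utilization (%)
  memory_threshold: 95     # target average memory utilization (%)
```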
@vnznznz, wondering if you have any thoughts on this as well.
The configuration for HPA looks good, but if I'm reading the documentation for it correctly, it requires metrics APIs to be available, generally via metrics-server.
As far as I can tell, metrics-server isn't defined in our chart yet, and we don't mention it in our documentation. So assuming I'm not missing something, I think at minimum we should add a documentation section on horizontal autoscaling that explains the need to install metrics-server, or perhaps we should just add it to the chart so it's always there (that case would still need deployment doc updates too, I think).
Edit: I do see that we enable metrics-server in our Ansible playbooks, so that's good!
Yeah, I think many k8s infrastructure providers already offer it as an add-on, so I hesitate to add it as a dependency. MicroK8s also comes with it. We actually detect in the operator whether metrics-server is available, since it's used for Redis autoscaling. Perhaps we can do the same for this as well; will double-check.
Since this is a chart change, there's nothing to detect here. Updated the docs to mention metrics-server.
Having separate settings seems more flexible, in case folks want to set different maximum replica counts. I'd suspect our backend is more memory-heavy, so the need to scale might hit there first. Curious about @vnznznz's take.
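Concretely, the per-component maximums would feed into separate HPA objects. An autoscaling/v2 manifest for one component, matching the default thresholds described in this PR, could look like the following sketch (the deployment name and replica bounds are illustrative, not the chart's rendered output):

```yaml
# Illustrative HPA manifest -- resource names and bounds are assumptions.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 1
  maxReplicas: 4   # would come from backend_max_replicas in values.yaml
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # default CPU threshold from the PR description
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 95   # default memory threshold from the PR description
```

Note that the Utilization targets are averages across all pods of the target deployment, which matches the "use cpu and memory averages" point in the description.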
Supports horizontal pod autoscaling (HPA) for backend and frontend pods: