Name and Version
bitnami/keycloak 21.1.3
What is the problem this feature will solve?
Hi! I'm new here, happy to give this a try if you accept external contributions!
Problem
Keycloak pods are stateful because the embedded Infinispan cache rebalances cache keys across all live nodes whenever a pod joins or leaves the cluster (this happens on pod startup and shutdown).
With the default configuration, cache entries have a fixed number of owners (pods) and can survive a single node failure (https://www.keycloak.org/server/caching#_configuring_caches). We therefore need to control the HPA's scaleDown behaviour to avoid potential data loss if too many pods are terminated at once, and its scaleUp behaviour to avoid excessive cache rebalancing if too many new pods are scheduled at the same time.
What is the feature you are proposing to solve the problem?
Add the ability to configure the scaling behaviour in the HPA spec, similar to what the codecentric chart exposes (links below).
https://github.com/codecentric/helm-charts/blob/master/charts%2Fkeycloak%2Ftemplates%2Fhpa.yaml#L24-L25
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior
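For illustration, a values override could look like the sketch below. The `autoscaling.behavior` key is an assumption about how this chart might expose the upstream HPA `behavior` field; the field names under `behavior` itself match the Kubernetes `autoscaling/v2` API.

```yaml
# Hypothetical values.yaml addition -- the autoscaling.behavior key is
# an assumption, not part of the current bitnami/keycloak chart.
autoscaling:
  enabled: true
  minReplicas: 3
  maxReplicas: 6
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        # Remove at most one pod per 5 minutes so Infinispan can
        # rebalance cache keys between terminations.
        - type: Pods
          value: 1
          periodSeconds: 300
    scaleUp:
      policies:
        # Add at most one pod per minute to limit rebalancing churn.
        - type: Pods
          value: 1
          periodSeconds: 60
```

The chart template would then pass `.Values.autoscaling.behavior` through to the HPA's `spec.behavior`, as the linked codecentric template does.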
What alternatives have you considered?
Disabling the HPA, or choosing conservative min/max replica ranges so that scaling events stay small.