Active-Active Zookeeper Operator Deployment #245
Comments
Not currently (this is also blocked on an upstream issue, kube-rs/kube#485). That said, the managed ZooKeeper should still be available even if the operator managing it is down.
That is true. I think being able to deploy the zookeeper operator in an active-active fashion would help with these two design goals:
I admit, both cases are edge cases. But from an operational safety perspective those are […]
Sadly, I'm not sure this would do much to help with goal 1: you'd just end up stuck waiting for the lease to expire, rather than for the pod to be deleted. A safe forced lease takeover would require fencing, either at the networking layer or at the Kubernetes API layer, and neither is available out of the box for Kubernetes clusters.
Yes, but the lease will expire at some point. In our case the pod was stuck in the Terminating state for 11 hours, while the lease duration is on the order of minutes.
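To illustrate why the standby instance in the exchange above is blocked only until expiry, not forever: lease-based leader election is cooperative, and a candidate may only take over once the holder's last renewal is older than the lease duration. A minimal toy model of that takeover rule (this is not the kube-rs or client-go API; the class and names are purely illustrative):

```python
class Lease:
    """Toy model of a Kubernetes coordination.k8s.io Lease record."""

    def __init__(self, duration_s):
        self.duration_s = duration_s  # leaseDurationSeconds
        self.holder = None            # holderIdentity
        self.renew_time = None        # renewTime of the last renewal

    def try_acquire(self, candidate, now):
        # A candidate may take (or renew) the lease only if it is unheld,
        # expired, or already held by that same candidate.
        expired = (self.renew_time is not None
                   and now - self.renew_time > self.duration_s)
        if self.holder is None or expired or self.holder == candidate:
            self.holder = candidate
            self.renew_time = now
            return True
        return False


lease = Lease(duration_s=15)
assert lease.try_acquire("operator-a", now=0)        # a becomes leader
assert not lease.try_acquire("operator-b", now=10)   # still held, not expired
# operator-a crashes and stops renewing; b must wait out the duration...
assert not lease.try_acquire("operator-b", now=14)
assert lease.try_acquire("operator-b", now=16)       # expired -> takeover
```

This is exactly the trade-off discussed above: the crashed holder cannot be fenced out early, so the standby's failover latency is bounded below by the lease duration, but it is bounded, unlike a pod stuck in Terminating for hours.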
Hello,
is it possible to run two instances of the zookeeper operator? I didn't find an option to enable any sort of leader election. Many operators use leader election so that multiple instances can be deployed, mitigating node failures in the k8s cluster.
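For reference, the leader election asked about here is usually coordinated through a `coordination.k8s.io/v1` Lease object: each replica tries to write its own identity into the lease and keeps renewing it while it leads. A sketch of what such an object looks like (the names, namespace, and values are illustrative, not taken from this operator):

```
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: zookeeper-operator-lock        # hypothetical lock name
  namespace: default
spec:
  holderIdentity: zookeeper-operator-0 # the replica currently leading
  leaseDurationSeconds: 15             # others may take over this long after the last renewal
  renewTime: "2021-09-01T12:00:00.000000Z"
  leaseTransitions: 3                  # how often leadership has changed hands
```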