Sticky IPs for StatefulSet #28969
I am checking to see if Cassandra can run fully on DNS. It works for lookup of seeds, but I am seeing that IP addresses get bound to a token range.
I have confirmed that Cassandra does need this. The Datastax team has toyed with the idea of using DNS, but at this time using pure DNS with Cassandra is not supported.
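For context on why DNS alone falls short: Cassandra records token ownership against node addresses, so a pod that restarts with a new IP looks like a brand-new node rather than the old replica. A toy sketch, illustrative only and not Cassandra's actual data structures:

```python
# Toy model of a token ring keyed by node IP, mimicking how Cassandra
# records which address owns which token ranges.
ring = {}  # token -> owner IP

def join(ip, tokens):
    """Register a node's tokens under its IP address."""
    for token in tokens:
        ring[token] = ip

def owner(token):
    return ring.get(token)

join("10.244.1.5", [0, 100, 200])   # cassandra-0's first life
print(owner(100))                    # 10.244.1.5

# The pod is rescheduled and comes back with a different IP: the ring
# still credits the old address, so the restarted pod owns none of its
# former token ranges until an operator intervenes.
join("10.244.2.9", [300])
print(owner(100))                    # still 10.244.1.5, the dead address
```

A stable DNS name does not help here unless the database resolves it on every membership check, which is exactly the change being discussed.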
Great! Do you have a bug / doc for context?
@bprashanth Nope, just email from Datastax folks. They asked me to file a Jira if I wanted to recommend a change.
I have an idea for implementing this feature in calico-containers. I believe that something like sticky IPs is not k8s's responsibility. However, I still need confirmation that this is a requirement for Galera before this feature gets a blessing.
@zefciu With Calico, would this work with the internal IPs / DNS that PetSet uses? We need a pet to have a sticky IP address. I understand the appeal of Calico, but this may want to be self-contained in k8s.
To add more color: the sticky IP address is in the internal private subnet that the cluster and minions use. This is not a sticky public IP.
The solution is to use annotations that, passed along via CNI, would make Calico use either dynamic or static IPs. I don't know how we can solve the static-IP logic in k8s itself if it outsources all the work of setting up network interfaces to plugins.
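As a sketch of what the annotation-driven approach could look like (the `cni.projectcalico.org/ipAddrs` key is what Calico's CNI plugin eventually adopted; treat the exact key and value format as an assumption for your Calico version):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cassandra-0
  annotations:
    # Ask Calico IPAM for a fixed address instead of a dynamic one
    # (key and value format assumed; check the Calico docs for your version).
    cni.projectcalico.org/ipAddrs: '["192.168.100.10"]'
spec:
  containers:
  - name: cassandra
    image: cassandra:3.11
```

The catch for StatefulSets is that all replicas share one pod template, so per-pod annotations like this cannot be expressed there directly.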
The way I would like to solve it is by defining a static-ip-subnet (just like the podCIDRs assigned to nodes), from which we'd draw these limited IPs and assign them to pods. If a pod with one of these limited-edition IPs dies, we reprogram the routes so traffic flows to whichever node it lands on, just like we set up routes today to route podCIDRs to specific nodes. The easier way to solve static IPs is through a Service VIP, but I don't like that for a couple of reasons (occupies iptables space, requires a Service per pod, won't work cross-kube-cluster).
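A minimal sketch of the allocation half of that idea; the subnet and pod names below are illustrative, and the route-reprogramming half (the hard part) is not modeled:

```python
import ipaddress

class StickyIPAllocator:
    """Hand out stable IPs from a dedicated static-ip-subnet, keyed by
    pod identity, so a pod gets the same address back after it is
    rescheduled onto a different node."""

    def __init__(self, cidr):
        self._free = list(ipaddress.ip_network(cidr).hosts())
        self._by_pod = {}  # pod name -> assigned IP

    def allocate(self, pod):
        # The same pod always gets the same IP back, even after restarts.
        if pod in self._by_pod:
            return self._by_pod[pod]
        if not self._free:
            raise RuntimeError("static-ip-subnet exhausted")
        ip = self._free.pop(0)
        self._by_pod[pod] = ip
        return ip

    def release(self, pod):
        # Only called when the pod is deleted for good (e.g. scale-down),
        # not when it merely moves between nodes.
        self._free.append(self._by_pod.pop(pod))

alloc = StickyIPAllocator("10.96.100.0/29")
print(alloc.allocate("cassandra-0"))   # 10.96.100.1
print(alloc.allocate("cassandra-1"))   # 10.96.100.2
print(alloc.allocate("cassandra-0"))   # 10.96.100.1 again: sticky
```

The interesting part of the real proposal is what happens after `allocate`: some controller would have to program a /32 route for that IP toward whichever node the pod currently runs on.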
@bprashanth: And how would your solution work with plugins? Would it simply send the desired IP to the plugin, or would it take over some of the plugin's responsibilities?
Network plugins are responsible for allocating IPs from a given range on a node today, not across nodes. This range is the podCIDR. Something assigns podCIDRs and sets up routing (whatever that may be, it isn't a plugin yet; it could be a cloud-provider-specific route controller or something like flannel). The plugin is only responsible for, e.g., creating a veth with the allocated IP and shoving it into the netns. IPAM itself is a plugin within the CNI plugin.
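That split is visible in a CNI network config, where the IPAM section is its own delegated plugin. A minimal illustrative example using the standard `bridge` and `host-local` plugins, with a made-up podCIDR value:

```json
{
  "cniVersion": "0.3.1",
  "name": "k8s-pod-network",
  "type": "bridge",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24"
  }
}
```

A sticky-IP design would presumably have to reach into that inner `ipam` block, which is node-local today, with cluster-level state.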
init-containers also share the same network namespace as the whole pod, so why not acquire the sticky IPs in the init-containers?
As for JBoss projects based on JGroups (like Infinispan, for example), we probably need to write a new discovery protocol based on DNS (I proposed it on the Infinispan dev mailing list and am waiting for a response). Currently some of us use KUBE_PING (which queries the Kubernetes API and collects containers), but trusting DNS would probably be a much better option. However, we (the Infinispan team) would be very interested in exposing PetSets to the outside world. Our Hot Rod client can take advantage of topology information and optimize queries. Having public sticky IPs (or anything that lets the client decide to which Pod a request should be forwarded) would be very important for us.
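A DNS-based discovery protocol can lean on the stable per-pod hostnames a headless Service gives a StatefulSet/PetSet (`<pod>-<ordinal>.<service>.<namespace>.svc.<cluster-domain>`). A minimal sketch of building a peer list from those names; all names below are illustrative, and a real implementation would resolve the headless Service's A/SRV records instead of enumerating ordinals:

```python
def peer_dns_names(statefulset, service, namespace, replicas,
                   cluster_domain="cluster.local"):
    """Build the stable DNS names of every pod in a StatefulSet.

    These names survive pod rescheduling even though the pod IPs do not,
    which is what makes DNS-based discovery viable without sticky IPs.
    """
    return [
        f"{statefulset}-{ordinal}.{service}.{namespace}.svc.{cluster_domain}"
        for ordinal in range(replicas)
    ]

# e.g. a 3-node Infinispan cluster behind a headless service "infinispan"
peers = peer_dns_names("infinispan", "infinispan", "default", 3)
print(peers[0])  # infinispan-0.infinispan.default.svc.cluster.local
```

The caveat from earlier in the thread still applies: this only helps software that re-resolves the names, rather than caching the first IP forever.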
@thockin This is the one I meant to comment on. Who is setting the priority of this one? Would a proposal be a good start?
A proposal is always a good start.
@chrislovecnm Are you writing this proposal?
@krmayankk I have not had any time... and frankly I found a workaround-ish solution for Cassandra.
@bprashanth There are no sig labels on this issue. Please add a sig label.
/sig network
Hi @chrislovecnm, we're having problems running Cassandra in stateful sets due to this. While we're waiting for a solution, could you share your workaround? Thanks!
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Can you please share your workaround? I am facing a similar issue with Cassandra in a StatefulSet.
@vadalikrishna @krmayankk Any information on the workaround would be much appreciated.
@krmayankk @vadalikrishna @chrislovecnm: any info on how you resolved this issue?
That's why, if someone like @chrislovecnm has a workaround, it would be much appreciated to know it.
As you know, the issue is that we can't assign different annotations to pods belonging to a StatefulSet.
Did anyone find a solution for this?
You can force podA to stay on node1 using local storage and a persistent volume claim.
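For example, a local PersistentVolume carries a node-affinity term, so any pod whose claim binds to it can only be scheduled onto that node and therefore keeps the same node (and with it a stable addressing situation) across restarts. An illustrative manifest, with made-up names and paths:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cassandra-data-0
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/cassandra-0   # pre-provisioned disk on node1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values: ["node1"]
```

This pins the pod to a node rather than pinning an IP to the pod, so it trades away rescheduling flexibility to get address stability.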
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Can we reopen this? This is still very relevant for many workloads that now have issues running on Kubernetes, including audio/video (STUN/TURN etc.), Redis, and Cassandra.
It would be really cool if we could add an annotation to the
Can we have this issue reopened?
Most of the databases I (https://github.com/kubernetes/kubernetes/tree/master/test/e2e/testing-manifests/petset) and others (#28718 (comment)) have prototyped seem to handle DNS properly, but there are murmurs that some do not (#23828 (comment)) and don't have plans to do so (#28718 (comment)).
I'd still vote for deferring any implementation until we have the end-to-end models fleshed out. One might imagine that databases understand the importance of DNS TTLs.