How to update k8s server IP address #88648
kubernetes is not tolerant of "master/server IP" changes, as there are certificates at play that are aware of the IP. there was a long discussion here: kubernetes/kubeadm#338. kubeadm does not have a way to change the IP for you, and there are no plans to support this. in terms of what is the right thing to do? /sig cluster-lifecycle network |
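To see why certificates are the blocker, note that the API server's serving certificate pins the advertise IP in its SANs. The snippet below is a self-contained illustration: it generates a throwaway cert with an IP SAN, the same way kubeadm bakes the advertise address into `/etc/kubernetes/pki/apiserver.crt`; the CN, DNS name and IP here are placeholders, not values from any real cluster:

```shell
# Generate a demo cert with an IP SAN (requires OpenSSL 1.1.1+ for -addext).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:kubernetes,IP:192.168.0.101"

# The IP is now part of the certificate; if the node moves to a new IP,
# TLS verification against this cert fails. On a real kubeadm node you
# would inspect /etc/kubernetes/pki/apiserver.crt the same way.
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

The printed SAN list contains `IP Address:192.168.0.101`, which is exactly the kind of hardcoded address that breaks when the node's network changes.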
@neolit123: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
This is a false closure. The fact that you COULD use a control-plane-endpoint HOSTNAME (not an IP) will NOT save you AT ALL from the control plane failing when the IPs of the master nodes change. The IPs of the nodes are hardcoded everywhere. This is a 30-year-old problem that Kubernetes should address: use hostnames, not IPs, in the config for the control and worker planes' network references... I am surprised this has not been addressed already. This basically precludes portability of a K8s system |
the kubeadm docs at least tell you that you should be careful with IPs and consider DNS names: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#considerations-about-apiserver-advertise-address-and-controlplaneendpoint
can you enumerate the locations where an IP of a control-plane is hardcoded? |
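The docs recommendation above can be sketched as a kubeadm init configuration: putting a DNS name in `controlPlaneEndpoint` means the name, not an IP, ends up in the certs and kubeconfigs. `cluster-endpoint.example.com` below is a placeholder that would resolve to the control plane directly or via a load balancer:

```yaml
# Hedged sketch: kubeadm ClusterConfiguration using a DNS name instead of an IP.
# "cluster-endpoint.example.com" is a placeholder, not a real endpoint.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "cluster-endpoint.example.com:6443"
```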
The network IPs may change based on many factors, so the point you mentioned is not valid. There should be some fix.
You can't expect to change the environment every time the IP changes; it should be resolved based on server name instead of IP.
|
The point is that even if you use a controlPlaneEndpoint such as a virtual front-end load balancer, the original IPs will still appear in a number of places. For example, in kubeadm-config:
[opc@olk8-m1 ~]$ kubectl -n kube-system describe cm kubeadm-config | grep advertise
Why can't k8s use hostnames for all of those? |
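For illustration, an advertise address also lands in the kube-apiserver static pod manifest on a kubeadm node. The excerpt below is a sketch with a placeholder IP, not taken from the reporter's cluster:

```yaml
# Sketch of /etc/kubernetes/manifests/kube-apiserver.yaml (abridged):
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.0.101   # node IP, baked in at `kubeadm init` time
    - --secure-port=6443
```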
these come from the kubeadm ClusterStatus which is deprecated and no longer used. any other examples? |
not sure what you mean by "server name", but it has to be tracked somewhere. k8s components communicate with each other using kubeconfig files, which allow DNS names. one exception is etcd, since it needs a list of IPs passed to the kube-apiserver, but you could still shield an etcd cluster behind a load-balancer IP, which hopefully doesn't change. to get wider attention to this IP change problem, you can open a discussion with the SIG Architecture group mailing list |
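The load-balancer idea for etcd mentioned above could look roughly like this in the kube-apiserver flags; `etcd-lb.example.com` is a hypothetical stable address fronting the etcd members:

```yaml
# Sketch: shielding etcd members behind one stable endpoint so their IPs can
# change without touching the API server flags.
- command:
  - kube-apiserver
  - --etcd-servers=https://etcd-lb.example.com:2379
```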
The etcd config itself, for example. We NEVER EVER use any IPs in any of our configuration steps. However, the default etcd.yaml has the node's IP everywhere. We had to change listen-peer-urls and listen-client-urls to use 0.0.0.0 (which will break in a number of situations on multi-NIC systems) because there is no way to use hostnames there either... |
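The workaround described in that comment would look roughly like this in the etcd static pod manifest (a sketch, not a recommendation: binding to 0.0.0.0 exposes etcd on every interface, which is exactly the multi-NIC concern raised above):

```yaml
# Sketch of /etc/kubernetes/manifests/etcd.yaml (abridged): wildcard binds
# instead of the node IP that kubeadm writes by default.
- command:
  - etcd
  - --listen-client-urls=https://0.0.0.0:2379
  - --listen-peer-urls=https://0.0.0.0:2380
```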
is your request to be able to pass a DNS name instead of https://192.168.0.101? but there is a core feature request here too, since the kube-apiserver flag in question doesn't support DNS names. |
And as many people have noted in other threads, most operations will fail with Unauthorized exceptions because the certs are invalid. For example, let me share a common op for us: move an etcd snapshot from one location to another where we use THE EXACT same hostnames. Things should JUST work. Give it a try :-), flannel breaks, coredns breaks, etcd breaks... |
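The snapshot move described above can be sketched with `etcdctl` (illustrative only: endpoints, cert paths and the data dir are placeholders based on kubeadm defaults, and the commands need access to both hosts):

```shell
# On the source host: take a snapshot of the kubeadm-managed etcd.
ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /tmp/etcd-backup.db

# On the destination host (same hostname, different IP): restore into a fresh
# data dir. This is where cert/IP mismatches start to surface.
ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup.db \
  --data-dir=/var/lib/etcd-restored
```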
"is your request to be able to pass DNS name instead of https://192.168.0.101?" We would like to avoid this:
|
your best option is to add nodes from a new network and remove nodes from the old network, then only swap the LB endpoint, and patch coredns / CNI / whatever needs it. this is not only a kubeadm problem, as the kubelet has
for kubeadm you could do this as a workaround:
but again, this is too complicated and unsupported to have a magical fix for. |
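Under the approach above, the node swap could be sketched like this (illustrative commands: node names are placeholders and every step assumes a healthy cluster):

```shell
# 1. on an existing control plane, print a join command for the new-network nodes
kubeadm token create --print-join-command

# 2. once replacements are in, drain and remove each old node
kubectl drain old-node-1 --ignore-daemonsets --delete-emptydir-data
kubectl delete node old-node-1

# 3. swap the load-balancer endpoint to the new nodes, then patch
#    coredns / CNI config that still references old addresses
```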
"your best option is to add nodes from a new network and remove nodes from the old network. then only swap the LB endpoint. patch coredns / CNI whatever needs it." Thanks but there are many use cases where this is just not possible. We may be in a totally different system where we want to place the exact same K8 config we had in a test environment. Or move it to a different DC where we have aliased the hostnames properly. This is a problem that was solved for practically every type of IT system long time ago... Do not attach your infrastructure and apps to any specific IPs. great that the app layer in K8 is allowing that, but the control plane itself still does NOT. |
like i've mentioned, you can complain to https://github.com/kubernetes/community/tree/master/sig-architecture#contact
for kubeadm we could adapt things like this to be IP or DNS name: which would allow you to have a DNS name in the certs and in etcd.yaml, but the problem here is the kube-apiserver: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/
there are a number of settings in k8s that are IP-only. |
Thanks, this is something that definitely needs to be revisited and fixed. We find ourselves having to move pretty complex control plane configurations to other locations, and these configurations (control plane things like spread topologies, labels, etc.) should all be totally portable (i.e. via their etcd snapshot), provided we keep hostnames consistent... |
By the way, and sticking ONLY to kubeadm: maybe you can help us by describing how the coredns secret:
[opc@olk8-m1 ~]$ k get secret -A | grep coredns
is created by kubeadm. Whenever we restore an etcd snapshot on a different node (with the same hostname), we are forced to entirely redeploy coredns with
sudo kubeadm init phase addon coredns
because the coredns pods keep failing with "0305 09:26:54.910414 1 reflector.go:153] pkg/mod/k8s.io/client-go@v0.17.2/tools/cache/reflector.go:105: Failed to list *v1.Service: Unauthorized" messages. But if we delete the coredns deployment and the secret and recreate them with kubeadm init phase addon coredns, things work (unfortunately this is a pain because we use a spread topology for coredns and we need to apply it again after each restore). Thanks for the help |
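For reference, the redeploy workaround described in that comment amounts to something like the following (a sketch; it assumes kubectl/kubeadm access on the restored control plane, and the secret-name lookup is illustrative):

```shell
# delete the broken coredns deployment and its service-account token secret
kubectl -n kube-system delete deployment coredns
kubectl -n kube-system get secret | awk '/coredns/{print $1}' \
  | xargs -r kubectl -n kube-system delete secret

# recreate the addon, then reapply any custom topology spread / labels
sudo kubeadm init phase addon coredns
```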
i have no explanation for this, but it's not a kubeadm quirk. maybe someone at #sig-api-machinery or #sig-auth on k8s slack knows why. the kubeadm coredns-related objects are in this file: |
Not so sure that this is not a kubeadm issue, because the original coredns was created with kubeadm. So somehow the secret and cluster role binding get invalidated in the second location, so it seems like kubeadm is generating those with some inappropriate dependency. Again, it would help tremendously to find out how kubeadm generates the secret and cluster role binding for coredns |
The link from my previous comment is all that kubeadm is doing when deploying coredns. The cluster role, binding and service account are in there, but the secret is managed by k8s controllers.
|
The IP address of the k8s master should be updatable when we move to a different network. How do we update it instead of completely resetting kubeadm? As there are many nodes, it is painful to join them all again.