vxlan_network.go:158] failed to add vxlanRoute (10.244.0.0/24 -> 10.244.0.0): invalid argument #959
I don't see the flannel.1 link on the agent node. Is that the issue? Why is this link not being created? Is it due to vxlan_network.go:158] failed to add vxlanRoute (10.244.0.0/24 -> 10.244.0.0): invalid argument? And if that's the case, then why am I able to communicate with kubernetes.default via 10.96.0.1 over 443? What is causing this? I'd appreciate any help.
So if anyone is interested or has this issue: I was able to get past it by running kubectl delete node k8s-master and recreating the node. This allocated a different node CIDR subnet, 10.244.1.0/24 instead of 10.244.0.0/24, which appears to conflict. All of the links were created, but I am still having an issue with the service interface.
[root@k8s-agent2 ~]# nslookup kubernetes.default.svc.cluster.local 10.244.1.2
Server: 10.244.1.2
Address: 10.244.1.2#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
[root@k8s-agent2 ~]# nslookup kubernetes.default.svc.cluster.local 10.96.0.10
;; connection timed out; trying next origin
;; connection timed out; no servers could be reached
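The asymmetry here, where the pod IP 10.244.1.2 answers but the ClusterIP 10.96.0.10 times out, can also be probed without nslookup. A minimal stdlib-only sketch (my own illustration, not from this thread; build_dns_query and probe_dns are made-up helper names) that sends a single UDP DNS query and reports whether anything comes back:

```python
import socket
import struct

def build_dns_query(name, qtype=1, qid=0x1234):
    """Build a bare DNS query packet: header with RD set, one question (A, IN)."""
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    qname = b"".join(bytes([len(label)]) + label.encode()
                     for label in name.split("."))
    return header + qname + b"\x00" + struct.pack(">HH", qtype, 1)

def probe_dns(server, name, timeout=2.0):
    """Send one query over UDP/53; True if any reply arrives in time."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(build_dns_query(name), (server, 53))
        try:
            s.recvfrom(512)
            return True
        except socket.timeout:
            return False

if __name__ == "__main__":
    name = "kubernetes.default.svc.cluster.local"
    for server in ("10.244.1.2", "10.96.0.10"):  # pod IP vs. service ClusterIP
        print(server, "answered" if probe_dns(server, name) else "timed out")
```

Running it from the affected node and comparing the two servers narrows the failure to the kube-proxy DNAT path for the ClusterIP rather than kube-dns itself.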
[root@k8s-master ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 2d
[root@k8s-master ~]# kubectl get ep -n kube-system
NAME ENDPOINTS AGE
kube-controller-manager <none> 2d
kube-dns 10.244.1.2:53,10.244.1.2:53 2d
kube-scheduler <none>
[root@k8s-agent2 ~]# iptables-save | grep kube-dns
-A KUBE-SEP-BWHGELGX6BITPZVO -s 10.244.1.2/32 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-MARK-MASQ
-A KUBE-SEP-BWHGELGX6BITPZVO -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp" -m tcp -j DNAT --to-destination 10.244.1.2:53
-A KUBE-SEP-Z6M7ZHWCTBNMPLD7 -s 10.244.1.2/32 -m comment --comment "kube-system/kube-dns:dns" -j KUBE-MARK-MASQ
-A KUBE-SEP-Z6M7ZHWCTBNMPLD7 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.1.2:53
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m comment --comment "kube-system/kube-dns:dns-tcp" -j KUBE-SEP-BWHGELGX6BITPZVO
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-Z6M7ZHWCTBNMPLD7
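For rule sets like this, the ClusterIP-to-endpoint mapping can be traced mechanically instead of by eye. A rough sketch (resolve_service is a made-up helper; the parsing is simplistic and keyed to iptables-save output of exactly this shape) that follows KUBE-SERVICES -> KUBE-SVC-* -> KUBE-SEP-* to the DNAT destination:

```python
def resolve_service(rules, cluster_ip, proto):
    """Follow KUBE-SERVICES -> KUBE-SVC-* -> KUBE-SEP-* to the DNAT target."""
    def jump(rule):
        # Chain name after the final -j of a rule
        return rule.split("-j ")[1].split()[0]
    svc = next((jump(r) for r in rules
                if f"-d {cluster_ip}/32" in r and f"-p {proto}" in r
                and "-j KUBE-SVC-" in r), None)
    sep = next((jump(r) for r in rules
                if svc and r.startswith(f"-A {svc} ")
                and "-j KUBE-SEP-" in r), None)
    return next((r.split("--to-destination ")[1].split()[0] for r in rules
                 if sep and r.startswith(f"-A {sep} ")
                 and "--to-destination" in r), None)

# Rules copied from the iptables-save output above (UDP path only):
rules = [
    '-A KUBE-SEP-Z6M7ZHWCTBNMPLD7 -p udp -m comment --comment "kube-system/kube-dns:dns" -m udp -j DNAT --to-destination 10.244.1.2:53',
    '-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU',
    '-A KUBE-SVC-TCOU7JCQXEZGVUNU -m comment --comment "kube-system/kube-dns:dns" -j KUBE-SEP-Z6M7ZHWCTBNMPLD7',
]
print(resolve_service(rules, "10.96.0.10", "udp"))  # -> 10.244.1.2:53
```

Here the chain resolves cleanly to the kube-dns pod, which suggests the NAT rules themselves are fine and the problem lies in delivering the DNAT'd packet to 10.244.1.2.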
[root@k8s-agent2 ~]# ip route
default via 10.244.0.1 dev eth0 proto static metric 100
10.244.0.0/16 dev eth0 proto kernel scope link src 10.244.0.6 metric 100
10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink
10.244.2.2 dev cali4b8ffe82a2b scope link
10.244.2.4 dev cali10256f09271 scope link
10.244.2.5 dev cali3ac1a873578 scope link
10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink
168.63.129.16 via 10.244.0.1 dev eth0 proto dhcp metric 100
169.254.169.254 via 10.244.0.1 dev eth0 proto dhcp metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
Does anybody see anything wrong with my rules here?
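One thing that stands out in the `ip route` output is that the flannel.1 routes (10.244.1.0/24, 10.244.3.0/24) sit inside the node's own eth0 subnet (10.244.0.0/16). Overlaps like that can be flagged programmatically; a hypothetical helper (overlapping_routes is my own name, not a real tool), using Python's ipaddress module:

```python
import ipaddress

def overlapping_routes(route_lines, node_cidr):
    """Return route destinations that sit inside the node's own subnet."""
    node_net = ipaddress.ip_network(node_cidr)
    hits = []
    for line in route_lines:
        dst = line.split()[0]
        if "/" not in dst:
            continue  # skip "default" and host routes such as 10.244.2.2
        net = ipaddress.ip_network(dst)
        if net != node_net and net.subnet_of(node_net):
            hits.append(str(net))
    return hits

# Routes copied from the ip route output above:
routes = [
    "10.244.0.0/16 dev eth0 proto kernel scope link src 10.244.0.6",
    "10.244.1.0/24 via 10.244.1.0 dev flannel.1 onlink",
    "10.244.3.0/24 via 10.244.3.0 dev flannel.1 onlink",
    "172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1",
]
print(overlapping_routes(routes, "10.244.0.0/16"))
# -> ['10.244.1.0/24', '10.244.3.0/24']
```

Every pod subnet overlapping the node network is a symptom of the pod CIDR and the VM network sharing 10.244.0.0/16, which fits the conflict described earlier in this thread.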
I figured it out.
@slecrenski Can you please share what you did to solve this problem? I am having the same problem.
@slecrenski I get the same problem. Could you please share how you solved it?
In my case there was a tunl interface on the node with the IP listed in the error log, so I manually deleted the IP on the tunl interface and restarted the flannel pod.
How did you do that? I want to do the same, but I can't. Can you tell me?
I found a quick way to solve this problem. Problem:
Solution: log
correct output
What worked for me was deleting both
I got the same issue. Can somebody tell us the root cause: a CIDR conflict or something else? Thanks.
Docker: 1.12.6
RHEL: 7.3
Linux k8s-master 3.10.0-693.21.1.el7.x86_64 #1 SMP Fri Feb 23 18:54:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Kubernetes 1.9.3
quay.io/calico/node:v2.6.2
quay.io/calico/cni:v1.11.0
quay.io/coreos/flannel:v0.9.1
Azure Cloud with vnet address space: 10.244.0.0/16
Cluster was initialized with
Cluster is running in Azure with the same virtual network as the pod cidr.
[root@k8s-master v2]# kubectl get pods -n kube-system -o wide
NAME                                 READY  STATUS            RESTARTS  AGE  IP             NODE
canal-9sfh5                          3/3    Running           0         1h   10.244.0.4     k8s-agent1
canal-jmgzn                          3/3    Running           0         1h   10.244.0.100   k8s-master
etcd-k8s-master                      1/1    Running           0         2h   10.244.0.100   k8s-master
kube-apiserver-k8s-master            1/1    Running           0         2h   10.244.0.100   k8s-master
kube-controller-manager-k8s-master   1/1    Running           0         2h   10.244.0.100   k8s-master
kube-dns-6f4fd4bdf-ch98b             3/3    Running           0         2h   10.244.0.24    k8s-master
kube-dns-6f4fd4bdf-dsjtq             1/3    CrashLoopBackOff  48        1h   10.244.3.2     k8s-agent1
kube-proxy-x6j8p                     1/1    Running           0         2h   10.244.0.100   k8s-master
kube-proxy-z5bbd                     1/1    Running           0         1h   10.244.0.4     k8s-agent1
kube-scheduler-k8s-master            1/1    Running           0         2h   10.244.0.100   k8s-master
I have a very basic configuration: one master node and one agent node. DNS queries are not working on the agent node. kube-dns is running on the master node. Master node IP: 10.244.0.100; agent node IP: 10.244.0.4.
I am trying to figure out why it is that I cannot communicate with 10.96.0.10 (kube-dns) which is supposed to be routed to the master node (where kube-dns is running).
I've been looking at log files and enabling level 10 verbosity for the past several hours. What does this error message mean?
vxlan_network.go:158] failed to add vxlanRoute (10.244.0.0/24 -> 10.244.0.0): invalid argument
I am unable to get pods that require kube-dns to run. They just fail with a DNS error trying to look up kubernetes.default.svc.cluster.local. If I try to scale kube-dns so that it launches on the non-master node, kube-dns fails to start on that node due to a DNS lookup issue.
I am unable to get pod-to-kube-dns or node-to-kube-dns communication working. How can I debug this?
These are RHEL 7.3 nodes with:
Master Node:
Slave Node:
Interestingly enough, I can do things like this:
Slave Node Iptables:
Is there any way to use tcpdump to figure out this issue?
What does this error mean? vxlan_network.go:158] failed to add vxlanRoute (10.244.0.0/24 -> 10.244.0.0): invalid argument
Master is running at 10.244.0.100 and agent node is running at 10.244.0.4.
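One way to read that error (my interpretation, not confirmed in this thread): the route flannel tries to install, 10.244.0.0/24 via 10.244.0.0 dev flannel.1, has a destination and gateway that both fall inside the node's own eth0 network, because the Azure vnet uses the same 10.244.0.0/16 as the pod CIDR; the gateway is even the network address of that subnet, so the kernel plausibly rejects the route with EINVAL. The overlap can be checked with Python's ipaddress module:

```python
# Sketch of the suspected conflict (my own reading, not from the thread).
import ipaddress

node_net = ipaddress.ip_network("10.244.0.0/16")    # Azure vnet / eth0 subnet
pod_subnet = ipaddress.ip_network("10.244.0.0/24")  # destination from the error
gateway = ipaddress.ip_address("10.244.0.0")        # "via" address from the error

print(pod_subnet.subnet_of(node_net))       # pod subnet sits inside eth0's network
print(gateway in node_net)                  # gateway is already locally reachable
print(gateway == node_net.network_address)  # ...and is the network address itself
```

This would also explain why the earlier workaround of re-allocating the node to 10.244.1.0/24 let the links come up: that subnet no longer collides with the gateway address flannel chooses.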
--master
--agent