kube-proxy in ipvs mode does not synchronize ipvs status correctly #55854
Comments
Thanks for reporting it. /assign |
Will take a deep look... |
I would suggest you check the head of the master branch. There is a known issue in v1.8.0. Your issue seems very similar to #52393 |
Is there a released version that solves #52393 and possibly solves this? |
I would suggest you try the ipvs proxy in v1.9, since it still has some known issues now; we are targeting beta in v1.9. |
/area ipvs |
I believe it's fixed in v1.9, so I am going to close this issue now. Please re-open if it still persists. |
/close |
@m1093782566 I have a similar problem with Kubernetes v1.11.3. Do you know the best way to debug it? My situation: on one node I can see both the service and pod addresses:

```
1.1.1.1# ip addr show | grep 10.244.254
    inet 10.244.254.0/32 scope global flannel.1
1.1.1.1# ip addr show | grep 10.101.80.23
    inet 10.101.80.23/32 brd 10.101.80.23 scope global kube-ipvs0
```

but at the same time, on another node I see neither the service address nor the pod address, even though the ipvs virtual server is present:

```
2.2.2.2# ip addr show | grep 10.244.254
#
2.2.2.2# ipvsadm -ln
...
TCP  10.101.80.23:8080 rr
  -> 10.244.254.12:8080           Masq    1      0          0
```

kube-proxy logs for 2.2.2.2:
How can I debug the issue? |
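One low-tech way to narrow this down is to capture `ipvsadm -ln` output on both nodes and diff the list of virtual servers kube-proxy has programmed. A minimal sketch (the sample output below is copied from the comment above; the `list_virtual_servers` helper is an illustration I wrote for this thread, not a kube-proxy tool):

```shell
#!/bin/sh
# Extract the virtual-server addresses (lines starting with TCP/UDP)
# from `ipvsadm -ln` style output, so output from two nodes is easy to diff.
list_virtual_servers() {
    awk '$1 == "TCP" || $1 == "UDP" { print $1, $2 }'
}

# Sample output as captured on node 2.2.2.2 in the comment above.
sample='IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.101.80.23:8080 rr
  -> 10.244.254.12:8080           Masq    1      0          0'

printf '%s\n' "$sample" | list_virtual_servers
# prints: TCP 10.101.80.23:8080
# On a live node you would instead run:  ipvsadm -ln | list_virtual_servers
```

Running this on both nodes and diffing the results shows immediately which clusterIPs one node's kube-proxy failed to sync.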
I have a similar problem with Kubernetes v1.12.2. Do you know the best way to debug it? |
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
/sig network
/area kube-proxy
What happened:
We run a Kubernetes cluster on k8s 1.8.0 with kube-proxy in ipvs mode. Services created or updated after the kube-proxy pod started running are not synchronized to ipvs, while deletions of services that existed before are still being synced.
What you expected to happen:
New services should be synced to ipvs as well, not only the ones that existed before kube-proxy started.
How to reproduce it (as minimally and precisely as possible):
0. Run a k8s 1.8.0 cluster with kube-proxy in ipvs mode.
1. Create a service `svc-1` and get its clusterIP.
2. Run `ipvsadm -ln`; we cannot see records for the clusterIP even after a very long time (>5min) and repeated runs of `ipvsadm -ln`.
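The check in the last step can be scripted: given the service's clusterIP, grep the ipvs table for it and report whether kube-proxy has programmed the virtual server. A sketch, assuming the `check_synced` helper name and the sample table are illustrative (on a live node the table would come from `ipvsadm -ln` and the IP from `kubectl get svc svc-1 -o jsonpath='{.spec.clusterIP}'`):

```shell
#!/bin/sh
# Report whether a clusterIP appears as a TCP ipvs virtual server.
# Usage: check_synced <clusterIP> <ipvsadm-ln-output>
check_synced() {
    ip="$1"; table="$2"
    if printf '%s\n' "$table" | grep -q "^TCP  *$ip:"; then
        echo "synced"
    else
        echo "NOT synced"
    fi
}

# Sample table text standing in for real `ipvsadm -ln` output.
table='TCP  10.101.80.23:8080 rr
  -> 10.244.254.12:8080           Masq    1      0          0'

check_synced 10.101.80.23 "$table"   # prints "synced"
check_synced 10.96.0.99   "$table"   # prints "NOT synced"
```

Polling this in a loop after creating `svc-1` makes it easy to tell whether the sync is merely slow or never happens at all.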
Anything else we need to know?:
args for running kube-proxy:
logs of kube-proxy(pod):
note: services `svc-test/svc-*` are deleted.
currently active services & endpoints:
note: service `default/nginx-1` is created after the last restart of kube-proxy on this node.
output of `ipvsadm -ln`:
Environment:
- Kubernetes version (use `kubectl version`): 1.8.0
- Kernel (e.g. `uname -a`): 4.4.0-72-generic x64