Only Able to Reach NodePort Services Externally On Node Pod Schedules On #485
I've done a bit more digging, and as best I can tell, the culprit is Docker's policy of setting the FORWARD chain's default policy to DROP. I've confirmed this by inspecting the filter table:

```
$ sudo iptables -t filter -v --line-numbers -L FORWARD
Chain FORWARD (policy DROP 3 packets, 188 bytes)
num  pkts  bytes target           prot opt in      out      source    destination
1    76887 119M  cali-FORWARD     all  --  any     any      anywhere  anywhere     /* cali:wUHhoiAYhphO9Mso */
2    15    936   KUBE-FORWARD     all  --  any     any      anywhere  anywhere     /* kubernetes forward rules */
3    13    808   DOCKER-ISOLATION all  --  any     any      anywhere  anywhere
4    0     0     DOCKER           all  --  any     docker0  anywhere  anywhere
5    0     0     ACCEPT           all  --  any     docker0  anywhere  anywhere     ctstate RELATED,ESTABLISHED
6    0     0     ACCEPT           all  --  docker0 !docker0 anywhere  anywhere
7    0     0     ACCEPT           all  --  docker0 docker0  anywhere  anywhere
```

I found an issue opened just 2 days ago in Project Calico that confirms this is a known problem: projectcalico/calico#1840. This is an upstream bug in Kubernetes: kubernetes/kubernetes#59656. The fix is waiting to be merged: kubernetes/kubernetes#62007.

Since this isn't an RKE bug, feel free to close, but I've left the issue open for now in case anyone at Rancher wants to update the docs, or at least be aware of it if people ask on Slack.
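The telltale sign is the `policy DROP` in the chain header above. A minimal sketch of how to pull that field out programmatically is below; the header line here is the captured one from this report, so the snippet is self-contained, and on a live node you would pipe `sudo iptables -t filter -L FORWARD` instead of echoing the sample:

```shell
# Sketch: extract the FORWARD chain's default policy from the iptables
# listing header. The header string below is the one captured in this
# report; on a live node, feed in the first line of:
#   sudo iptables -t filter -L FORWARD
header='Chain FORWARD (policy DROP 3 packets, 188 bytes)'
policy=$(echo "$header" | sed -n 's/.*(policy \([A-Z]*\).*/\1/p')
echo "FORWARD default policy: $policy"
```

Until the upstream kubelet fix is merged, a commonly cited interim workaround is to reset the policy manually with `sudo iptables -P FORWARD ACCEPT`, though note that this relaxes host-level firewalling for forwarded traffic.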
@frankhinek Thank you for the detailed report. We will keep the issue open and track the upstream issues until they are resolved.

Same problem here!

As this was a k8s issue that's now fixed, I'll close this for now.
I've deployed a new K8s cluster using RKE and Calico networking. I've discovered that I am only able to access NodePort-type services on the node the pod is scheduled on. If I try to access the service from any other node, it fails. I've confirmed that I am able to access the service while SSH'd into any one of the nodes, so the problem only appears when accessing services from outside the cluster.
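To see which node the backing pod landed on (and hence which node's IP will answer the NodePort), the NODE column of `kubectl get pods -o wide` is the place to look. As a self-contained sketch, the snippet below parses a sample line of that wide output rather than calling a live cluster; the pod and node names are hypothetical placeholders:

```shell
# On a live cluster you would run:  kubectl get pods -o wide
# Below, a sample wide-output line (pod and node names are hypothetical)
# is parsed to pull out the scheduling node, which is the last column.
sample='httpbin-5d7f7b8b9-abcde  1/1  Running  0  5m  10.42.0.7  node-88-178'
node=$(echo "$sample" | awk '{print $NF}')
echo "pod is scheduled on: $node"
```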
I tried asking on the #rke and #general channels on Slack but didn't get a response, so I'm posting here in hopes that it might be some bug or mistake on my part that can be identified.
RKE version:
v0.1.5-rc2

Docker version: (docker version, docker info preferred)

```
$ docker version
Client:
 Version:      17.03.2-ce
 API version:  1.27
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.03.2-ce
 API version:  1.27 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   f5ec1e2
 Built:        Tue Jun 27 02:21:36 2017
 OS/Arch:      linux/amd64
 Experimental: false
```
Operating system and kernel: (cat /etc/os-release, uname -r preferred)

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
Virtual machine running on VMware vSphere cluster
cluster.yml file:
Steps to Reproduce:

1. Run the curl 10.224.88.178:32745/status/418 command against the node the pod is scheduled on and observe the response.
2. Run the curl 10.224.88.85:32745/status/418 command against a different node from the one the pod is scheduled on and observe the response.
3. SSH into the 10.224.88.85 node and run the curl localhost:32745/status/418 command.

Results:
The response to curl against the node the pod is scheduled on is:

whereas for the other node it is:

but when logged in via SSH to the 10.224.88.85 node, the curl succeeds: