
[BUG] If default FORWARD policy is DROP, vm use vlan 1 network (vid=1) can't access other host #841

Open

futuretea opened this issue May 10, 2021 · 4 comments

Labels: area/dev (Dev related tasks), area/network, internal, kind/bug (Issues that are defects reported by users or that we know have reached a real release), priority/1 (Highly recommended to fix in this release), require/doc (Improvements or additions to documentation)

@futuretea (Contributor)

Describe the bug

To Reproduce
Steps to reproduce the behavior:

  1. Make sure ip_forward is disabled.
  2. Install Docker (Docker will enable ip_forward and change the default FORWARD policy to DROP).
  3. Set up the k8s cluster and Harvester.

Expected behavior

Even if the FORWARD policy is DROP, a VM using the VLAN 1 network should still be able to access other hosts.

Workaround

iptables -A FORWARD -i harvester-br0 -j ACCEPT
iptables -A FORWARD -o harvester-br0 -j ACCEPT
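
The two rules above can be applied idempotently, so re-running the workaround (for example from a boot script) does not append duplicates. A minimal sketch, assuming the standard iptables CLI; the `IPT` variable and the `ensure_forward_accept` helper are illustrative, not part of Harvester:

```shell
#!/bin/sh
# Sketch of an idempotent version of the workaround. IPT is overridable
# so the logic can be exercised without root (e.g. pointed at a stub).
IPT="${IPT:-iptables}"

ensure_forward_accept() {
  # $1 = direction flag (-i or -o), $2 = bridge interface.
  # -C checks whether the rule already exists; -A appends only if missing.
  $IPT -C FORWARD "$1" "$2" -j ACCEPT 2>/dev/null \
    || $IPT -A FORWARD "$1" "$2" -j ACCEPT
}
```

Applying it for both directions: `ensure_forward_accept -i harvester-br0` and `ensure_forward_accept -o harvester-br0`.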

Support bundle

Environment:

  • Harvester ISO version: no ISO (app mode)
  • Underlying Infrastructure (e.g. Baremetal with Dell PowerEdge R630):

Additional context
moby/moby#28257
moby/moby#14041
https://github.com/moby/libnetwork/blob/b5dc37037049d9b9ef68a3c4611e5eb1b35dd2af/drivers/bridge/setup_ip_forwarding.go#L32

@futuretea futuretea added kind/bug Issues that are defects reported by users or that we know have reached a real release area/network area/dev Dev related tasks labels May 10, 2021
@futuretea futuretea added this to the v0.3.0 milestone May 10, 2021
@yaocw2020 (Contributor)

To test:

  • Set up a Harvester cluster with two hosts, enable the VLAN network, and create a VLAN 1 network.

  • Create two VMs in VLAN 1, one on each host.

  • Log in to each host via SSH and run the shell command iptables -P FORWARD DROP.

  • Ping between the two VMs; the pings will fail.

  • Upgrade harvester-network-controller to the master head version.

  • Log in to the host via SSH and run iptables -S FORWARD. We should see the following iptables rules:
    iptables -A FORWARD -i harvester-br0 -j ACCEPT
    iptables -A FORWARD -o harvester-br0 -j ACCEPT
    
  • Ping between the two VMs again; it should now succeed.
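
The rule-check step in the plan above can be scripted. A hedged sketch (the `check_bridge_rules` helper is illustrative, not an existing tool) that scans `iptables -S FORWARD` output for the two expected rules:

```shell
#!/bin/sh
# Sketch: pass the output of `iptables -S FORWARD` as $1 and check that
# both ACCEPT rules for harvester-br0 are present.
check_bridge_rules() {
  rules="$1"
  for want in "-A FORWARD -i harvester-br0 -j ACCEPT" \
              "-A FORWARD -o harvester-br0 -j ACCEPT"; do
    case "$rules" in
      *"$want"*) ;;                      # rule found, keep checking
      *) echo "missing: $want"; return 1 ;;
    esac
  done
  echo "bridge FORWARD rules present"
}
```

Typical use on a host: `check_bridge_rules "$(iptables -S FORWARD)"`.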

@guangyee commented Oct 4, 2021

This does not appear to be fixed. Here's what I did in my Vagrant 3-node Harvester cluster.

  1. Create two VMs, one on Harvester node 1 and the other on node 2, each with its second NIC on the VLAN 1 network.
  2. Log in to the first VM and confirm that it can ping the second VM via the VLAN IP.
  3. Log in to node 1, run iptables -S FORWARD, and notice that the rules for harvester-br0 do not exist:
harvester-node-1:~ # iptables -S FORWARD
-P FORWARD ACCEPT
-A FORWARD -m comment --comment "cali:wUHhoiAYhphO9Mso" -j cali-FORWARD
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A FORWARD -s 10.52.0.0/16 -j ACCEPT
-A FORWARD -d 10.52.0.0/16 -j ACCEPT
  4. Run iptables -P FORWARD DROP and verify that the VMs are no longer pingable between each other.
  5. Run the commands:
     iptables -A FORWARD -i harvester-br0 -j ACCEPT
     iptables -A FORWARD -o harvester-br0 -j ACCEPT
  6. Verify that the VMs are now pingable between each other.

@yasker yasker modified the milestones: v0.3.0, v1.0.0 Oct 4, 2021
@yasker (Member) commented Oct 4, 2021

@futuretea Why are we setting up the iptables rules in this way? I saw the dev-mode label; are there any special requirements regarding this issue?

@guangbochen guangbochen modified the milestones: v1.0.0, v1.1.0 Oct 21, 2021
@futuretea (Contributor, Author) commented Nov 23, 2021

@yasker We just want to ensure that a cluster installed through Docker also works properly. Docker changes the default FORWARD policy to DROP, so we need to add these iptables rules to make sure packet forwarding always works on harvester-br0.
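
The Docker behavior described here can be detected from `iptables -S FORWARD` output, whose first line reports the chain policy (e.g. `-P FORWARD DROP`, as in the listing earlier in this thread). A small sketch; `forward_policy` is an illustrative helper, not an existing tool:

```shell
#!/bin/sh
# Sketch: extract the default FORWARD policy from `iptables -S FORWARD`
# output passed as $1. The first line has the form "-P FORWARD <policy>".
forward_policy() {
  printf '%s\n' "$1" | awk 'NR == 1 && $1 == "-P" { print $3 }'
}
```

Typical use: `forward_policy "$(iptables -S FORWARD)"` prints ACCEPT or DROP; seeing DROP after a Docker install is the condition under which the bridge rules are needed.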

@yasker yasker modified the milestones: v1.1.0, v1.0.1 Dec 8, 2021
@guangbochen guangbochen added the require/doc Improvements or additions to documentation label Jan 4, 2022
@guangbochen guangbochen assigned futuretea and unassigned yaocw2020 Jan 4, 2022
@rebeccazzzz rebeccazzzz added the priority/1 Highly recommended to fix in this release label Feb 24, 2022
@rebeccazzzz rebeccazzzz modified the milestones: v1.0.1, v1.0.2 Mar 8, 2022
@rebeccazzzz rebeccazzzz modified the milestones: v1.0.2, Planning Apr 12, 2022