
PR for implementation #1

Merged · 10 commits into master · Apr 4, 2018
Conversation

webvictim (Owner)

No description provided.

Vagrantfile Outdated
master.vm.hostname = $master_hostname
master.vm.network "private_network", ip: $master_ip
master.vm.synced_folder "config", "/config"
master.vm.provider "virtualbox" do |v|
eldios

@webvictim would you be able to also add libvirt as a provider, please?
Many of us tend to stick to that rather than VirtualBox these days.
Thanks

webvictim (Owner, Author) commented Apr 3, 2018

@eldios Sure - I'm just testing out the changes as I've had to use a different base image for libvirt. It's also hard to make libvirt work on a Mac so I've had to set it up on a Linux machine. I should be able to commit the updated code shortly.

eldios

Thanks.

webvictim (Owner, Author)

@eldios The Vagrantfile should work with libvirt now.
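For reference, the pattern is roughly the sketch below (the box name and the memory/CPU numbers are placeholders rather than the committed values); vagrant-libvirt lets each provider block override the base box, which is how the different base image is handled:

```ruby
master.vm.provider "virtualbox" do |vb|
  vb.memory = 1024
  vb.cpus   = 1
end

# The second block argument allows provider-specific overrides,
# e.g. a different base box for libvirt
master.vm.provider "libvirt" do |lv, override|
  override.vm.box = "generic/ubuntu1604"   # placeholder box name
  lv.memory = 1024
  lv.cpus   = 1
end
```

With the vagrant-libvirt plugin installed, `vagrant up --provider=libvirt` then picks up the libvirt definitions.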

README.md Outdated
vagrant@kube-master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master NotReady master 8m v1.10.0

The exercise specification didn't say to use a separate node as the master.
We're interested in hearing your reasoning for going down this path.

webvictim (Owner, Author)

Yeah, this was more of an oversight than a deliberate decision. I've changed it now so that there are three nodes in total and the master runs on the first node.
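Roughly, the node definitions now follow a pattern like this (a simplified sketch with illustrative hostnames and IPs, not the exact Vagrantfile):

```ruby
(1..3).each do |i|
  config.vm.define "kube-node#{i}" do |node|
    node.vm.hostname = "kube-node#{i}"
    node.vm.network "private_network", ip: "192.168.64.#{i}0"
    # kube-node1 also runs the control plane (kubeadm init);
    # the other two nodes join it as workers
  end
end
```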

README.md Outdated
vagrant@kube-node1:~$ ping 10.128.3.4
PING 10.128.3.4 (10.128.3.4) 56(84) bytes of data.
64 bytes from 10.128.3.4: icmp_seq=1 ttl=63 time=0.436 ms

Why are you not using IPSEC for the master node?

webvictim (Owner, Author)

I am now!

knisbet commented Apr 3, 2018

Hey Gus,

I've been doing some testing, and I found a sort of interesting issue. It looks to me like pod-to-pod traffic is bypassing the ipsec tunnels.

If you look at `ipsec statusall`, the tunnel counters are all zeros.

   mesh-left[1]: ESTABLISHED 34 minutes ago, 192.168.64.10[192.168.64.10]...192.168.64.20[192.168.64.20]
   mesh-left[1]: IKEv2 SPIs: dd0ac2baaba214eb_i* 994afed63b77ec51_r, pre-shared key reauthentication in 16 minutes
   mesh-left[1]: IKE proposal: AES_CBC_128/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_2048
   mesh-left{7}:  INSTALLED, TUNNEL, reqid 1, ESP SPIs: cb1b31ac_i cd327d66_o
   mesh-left{7}:  AES_CBC_128/HMAC_SHA1_96, 0 bytes_i, 0 bytes_o, rekeying in 7 minutes
   mesh-left{7}:   10.128.1.0/24 === 10.128.2.0/24
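(For what it's worth, the kernel's own SA statistics tell the same story; assuming the default xfrm backend, something like this shows the per-SA counters and the installed selectors:)

```sh
ip -s xfrm state    # per-SA stats, including current bytes/packets
ip xfrm policy      # the selectors the kernel actually matches on
```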

The way I'm testing is just using `docker exec` to ping a pod IP on another host:

docker exec -it 0027bfa14374 ping 10.128.3.2
PING 10.128.3.2 (10.128.3.2): 56 data bytes
64 bytes from 10.128.3.2: icmp_seq=0 ttl=62 time=0.557 ms
64 bytes from 10.128.3.2: icmp_seq=1 ttl=62 time=0.590 ms
64 bytes from 10.128.3.2: icmp_seq=2 ttl=62 time=0.620 ms
64 bytes from 10.128.3.2: icmp_seq=3 ttl=62 time=0.483 ms
64 bytes from 10.128.3.2: icmp_seq=4 ttl=62 time=0.844 ms
^C--- 10.128.3.2 ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.483/0.619/0.844/0.121 ms

Also, tcpdump appears to pick up the traffic in the clear:

root@kube-node1:~# tcpdump -vni eth1 icmp
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 262144 bytes
08:51:59.261402 IP (tos 0x0, ttl 63, id 647, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.64.10 > 10.128.3.2: ICMP echo request, id 48, seq 98, length 64
08:51:59.261952 IP (tos 0x0, ttl 63, id 45133, offset 0, flags [none], proto ICMP (1), length 84)
    10.128.3.2 > 192.168.64.10: ICMP echo reply, id 48, seq 98, length 64
08:52:00.263974 IP (tos 0x0, ttl 63, id 731, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.64.10 > 10.128.3.2: ICMP echo request, id 48, seq 99, length 64
08:52:00.264661 IP (tos 0x0, ttl 63, id 45161, offset 0, flags [none], proto ICMP (1), length 84)
    10.128.3.2 > 192.168.64.10: ICMP echo reply, id 48, seq 99, length 64
08:52:01.265630 IP (tos 0x0, ttl 63, id 914, offset 0, flags [DF], proto ICMP (1), length 84)
    192.168.64.10 > 10.128.3.2: ICMP echo request, id 48, seq 100, length 64
08:52:01.266334 IP (tos 0x0, ttl 63, id 45381, offset 0, flags [none], proto ICMP (1), length 84)
    10.128.3.2 > 192.168.64.10: ICMP echo reply, id 48, seq 100, length 64

Cheers,

webvictim (Owner, Author)

@knisbet Thanks for the feedback and the detail. I've switched over to using proper routed IPSEC now with tunnels and marking the traffic explicitly as you suggested. My testing seems to show that this has fixed the issue and all the traffic is now being properly encrypted - the counters increase and I see ESP packets on the wire rather than clear ICMP.
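For anyone following along, the general shape of the route-based setup is something like the sketch below; the interface name, mark value and subnets are illustrative rather than the exact committed config. The IPsec connection definition carries a matching `mark` option (strongSwan supports this per connection) so the kernel only applies the policy to traffic sent via the VTI:

```sh
# Create a VTI keyed with the same mark used in the IPsec connection
ip link add vti01 type vti local 192.168.64.10 remote 192.168.64.20 key 42
ip link set vti01 up

# Don't re-apply IPsec policy checks on the tunnel interface itself
sysctl -w net.ipv4.conf.vti01.disable_policy=1

# Route the remote pod subnet over the VTI so pod-to-pod traffic is encrypted
ip route add 10.128.2.0/24 dev vti01
```

Verifying is then the same check as above: the `ipsec statusall` byte counters should increase, and `tcpdump -ni eth1 'ip proto 50'` should show ESP rather than clear ICMP.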

eldios commented Apr 4, 2018

@webvictim I still don't get why you're not using the VPN for all the Kubernetes requests while you still address 192.168.64.10:6443 directly.
Is there any specific reason? Do you think it's better to route traffic to the API server through the VPN, or to keep it outside of it? Could you explain your reasoning?
Thanks

webvictim (Owner, Author) commented Apr 4, 2018

@eldios There are a couple of reasons I guess:

  • The project description doesn't explicitly say that API communication between the nodes and the master must be encrypted; it just covers pod-to-pod communication, so I guess it's open to interpretation.
  • API traffic between the Kubernetes master and nodes is (AFAIK) already encrypted anyway, so it didn't seem completely necessary to send it over the tunnels.

With that said, we're not verifying the master's CA hash here so in theory a MITM attack would be possible. To counter this, we could modify the bootstrapping process to scrape the CA hash when it's output by kubeadm, pass it to the nodes somehow and verify it when they join. I didn't do this because it seemed a bit unreliable and messy - there's an open issue against kubeadm to provide a way to do this properly (kubernetes/kubeadm#659)
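For reference, the manual version of that verification is roughly the following (placeholder token/hash, assuming the stock kubeadm PKI paths):

```sh
# On the master: derive the CA public key hash that kubeadm expects
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'

# On each node: pin the master's CA when joining instead of skipping verification
kubeadm join 192.168.64.10:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash-from-above>
```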

I think I also thought that it'd be much tougher to get Kubernetes to establish a cluster over the tunnels but actually, I just did it and it was really simple...

In terms of what I think would be better: on reflection, it's probably better to just put everything inside the IPSEC tunnels, as the master could potentially be on a different network from the nodes, not directly routable, or on a network that isn't trusted.

What do you think?

eldios commented Apr 4, 2018

I also think it's cleaner to put everything inside IPSEC, but the fact that the communication is already encrypted via HTTPS makes it less mandatory.
Your answer is more or less what I expected 👍

webvictim (Owner, Author)

I've added a commit to send all Kubernetes communication over the IPSEC tunnels now as it's tidier.

webvictim merged commit 2d4118f into master on Apr 4, 2018