PR for implementation #1
Conversation
Vagrantfile
Outdated
master.vm.hostname = $master_hostname
master.vm.network "private_network", ip: $master_ip
master.vm.synced_folder "config", "/config"
master.vm.provider "virtualbox" do |v|
@webvictim would you be able to also add libvirt as a provider, please? Many of us tend to stick to that rather than virtualbox these days.
Thanks!
@eldios Sure - I'm just testing out the changes as I've had to use a different base image for libvirt. It's also hard to make libvirt work on a Mac so I've had to set it up on a Linux machine. I should be able to commit the updated code shortly.
thanks.
@eldios The Vagrantfile should work with libvirt now.
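For anyone trying it out, bringing the cluster up under libvirt should just be a case of installing the standard vagrant-libvirt plugin and selecting the provider explicitly:

# one-time install of the libvirt provider plugin
$ vagrant plugin install vagrant-libvirt

# bring the VMs up with libvirt instead of the default virtualbox
$ vagrant up --provider=libvirt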
README.md
Outdated
vagrant@kube-master:~$ kubectl get nodes
NAME          STATUS     ROLES     AGE       VERSION
kube-master   NotReady   master    8m        v1.10.0
The exercise specification didn't say to use a separate node as the master. We'd be interested to hear your reasoning for going down this path.
Yeah, this was more of an oversight than a deliberate decision. I've changed it now so that there are three nodes in total and the master runs on the first node.
README.md
Outdated
vagrant@kube-node1:~$ ping 10.128.3.4
PING 10.128.3.4 (10.128.3.4) 56(84) bytes of data.
64 bytes from 10.128.3.4: icmp_seq=1 ttl=63 time=0.436 ms
Why are you not using IPSEC for the master node?
I am now!
Hey Gus, I've been doing some testing and found a sort of interesting issue. It looks to me like pod-to-pod traffic is bypassing the IPSEC tunnels - if you look at the tunnel counters, they don't seem to increase when pods on different hosts talk to each other.
The way I'm testing is just using docker exec to ping a pod IP on another host:
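Something along these lines, where the container name is a placeholder and 10.128.3.4 is the pod IP on the other node from the README example:

# ping a pod on kube-node2 from inside a container on kube-node1
vagrant@kube-node1:~$ docker exec -ti <container> ping 10.128.3.4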
Also, tcpdump appears to pick up the traffic in the clear
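For reference, a capture along these lines is what shows the problem - assuming the inter-node private network is on eth1, which is a guess:

# pod-to-pod ICMP shows up here unencrypted instead of as ESP payloads
vagrant@kube-node2:~$ sudo tcpdump -ni eth1 icmp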
Cheers,
@knisbet Thanks for the feedback and the detail. I've switched over to using proper routed IPSEC now with tunnels and marking the traffic explicitly as you suggested. My testing seems to show that this has fixed the issue and all the traffic is now being properly encrypted - the counters increase and I see ESP packets on the wire rather than clear ICMP.
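For anyone wanting to re-check, this is roughly the verification I mean (the interface name is a guess based on the private network setup):

# per-SA byte and packet counters should climb while pods exchange traffic
vagrant@kube-node1:~$ sudo ip -s xfrm state

# and the wire should now carry ESP rather than clear ICMP
vagrant@kube-node1:~$ sudo tcpdump -ni eth1 esp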
@webvictim I still don't get why you're not using the VPN for all the Kubernetes requests while you're still addressing 192.168.64.10:6443 directly.
@eldios There are a couple of reasons I guess:

- The traffic between the nodes and the master is already encrypted with TLS, so tunnelling it as well felt somewhat redundant. With that said, we're not verifying the master's CA hash here, so in theory a MITM attack would be possible. To counter this, we could modify the bootstrapping process to scrape the CA hash when it's output by kubeadm, pass it to the nodes somehow and verify it when they join (roughly the join command sketched at the end of this comment). I didn't do this because it seemed a bit unreliable and messy - there's an open issue against kubeadm to provide a way to do this properly (kubernetes/kubeadm#659).
- I think I also assumed it'd be much tougher to get Kubernetes to establish a cluster over the tunnels, but actually I just tried it and it was really simple...

In terms of what would be better: on reflection, it's probably best to just put everything inside the IPSEC tunnels, as the master could potentially end up on a different network from the nodes, be unroutable, or sit on a network that isn't trusted. What do you think?
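For reference, pinning the master's CA at join time would look roughly like this - the token and hash values are placeholders, and kubeadm init prints a matching join command:

# the sha256 hash of the master's CA cert lets the node verify who it's joining
vagrant@kube-node2:~$ sudo kubeadm join 192.168.64.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>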
I also think it's cleaner to put everything inside IPSEC, but the fact that the communication is already encrypted via HTTPS makes it less essential.
I've added a commit to send all Kubernetes communication over the IPSEC tunnels now, as it's tidier.