Get rid of socat #191
Comments
Interesting. I've been very cautious about introducing more iptables magic, which is why I went with the socat approach in the first place. I've also considered doing even more magic in the networking part, see #153. Just to make sure I get things correctly: the iptables rule is added inside the docker container, right? I don't have to run that command outside of it? (I don't see how that could work, since the IP should be local to the container namespace.)

Yeah, that goes inside the container. I just thought that if one line can rip out and replace a few dozen lines, it'll make things easier. You also won't need the default route to 10.0.0.1, since your source address will already be part of the 10.0.0.0/24 range.

Regarding your other networking magic: I've tried that approach as well (stealing the IP off eth0 and moving it into a VM) and it works perfectly well. However, one thing to keep in mind is that K8s (and potentially other container orchestrators) determine the healthiness of a pod by pinging its IP address. So what happens is that, between the time you steal the IP and the time the VM fully boots, K8s decides the CNI plugin has failed and tears down/recreates the pod. So I ended up sticking with qemu user networking for now, until I find a better way.
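The "stealing the IP off eth0" trick mentioned above can be sketched roughly like this. This is a hedged illustration only; the interface and bridge names and the address are examples, not taken from this thread:

```shell
# Hedged sketch of the "steal the IP off eth0" approach; names are examples.
ADDR=$(ip -4 -o addr show dev eth0 | awk '{print $4}')  # e.g. 172.17.0.2/16
ip addr del "$ADDR" dev eth0          # remove the address from eth0
ip link add br0 type bridge           # bridge that the VM's tap will join
ip link set br0 up
ip link set eth0 master br0           # eth0 now only bridges traffic
# the qemu tap interface is attached to br0 and the guest configures $ADDR,
# so the VM answers on the container's original IP
```

As noted above, the pod looks unreachable between the moment the address leaves eth0 and the moment the guest brings it up, which is what trips the K8s health checks.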
I wonder, @networkop, how does 10.0.0.2 get exposed to the container netns? It is a qemu user network, so how does iptables get to know about this range?
@hellt I don't remember the details now; maybe qemu is creating a device with this IP?
qemu doesn't, hence I was quite surprised to stumble upon that issue. Seems like you have solved it, though, so I am trying to recollect how it was supposed to work.
@hellt and @networkop, I'm trying to figure out the same thing, i.e. who is exposing 10.0.0.2. If you remember, please let me know.
@udaykiran-chava I think this is hard-coded in qemu, so it's a qemu process |
Thanks @networkop |
@udaykiran-chava would you be able to update this issue with your findings after testing this? I would very much like to get rid of socat for containerlab-friendly images, as this simplifies exposed ports provisioning |
@hellt I had a bare Debian stretch container with all the qemu, iptables, etc. packages installed:

```
qemu-system-x86_64 -enable-kvm -display none -machine pc \
  -monitor tcp:0.0.0.0:4000,server,nowait -m 4096 \
  -serial telnet:0.0.0.0:5000,server,nowait \
  -drive if=ide,file=/csr1000v-universalk9.17.03.03-overlay.qcow2 \
  -device pci-bridge,chassis_nr=1,id=pci.1 \
  -device virtio-net-pci,netdev=p00,mac=52:54:00:44:9b:00 \
  -netdev user,id=p00,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::22-10.0.0.15:22 \
  -device virtio-net-pci,netdev=p01,mac=52:54:00:cb:10:01,bus=pci.1,addr=0x2 \
  -netdev socket,id=p01,listen=:10001
```

I don't have any socat, so ssh to the container IP was not connecting to the VR. After adding the iptables rule, I am able to ssh to the VR from outside on port 22.
@udaykiran-chava that's great! Can you maybe also check whether the opposite communication path works (from inside the VM to an external IP)?
The outbound traffic from the VR didn't work. I think we have to set up one more iptables rule for that.
I've noticed you're doing connection stitching with socat due to issues with qemu user networking. I've come across a similar issue and it looks like it can be solved with just one command:
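The command itself did not survive in the text above. As a hedged sketch only, a rule of roughly this shape would do the job; the local hostfwd port 2022 is an assumption borrowed from vrnetlab's usual defaults, not something stated in this post:

```shell
# Hypothetical sketch, NOT the exact rule from the post: rewrite inbound SSH
# so it lands on the local port where qemu's user-mode networking (hostfwd)
# is already listening, replacing the per-port socat relay.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 \
    -j REDIRECT --to-ports 2022
```

The appeal is that one NAT rule replaces a socat process per forwarded port.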
With this one, I can connect to the hostfwd port from outside the container.