
Get rid of socat #191

Open · networkop opened this issue Mar 12, 2019 · 12 comments

@networkop

I've noticed you're doing connection stitching with socat due to issues with qemu user networking. I've come across a similar issue and it looks like it can be solved with just one command:

iptables -t nat -A INPUT -j SNAT --to-source 10.0.0.2

With this rule in place, I can connect to the hostfwd port from outside the container.
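
To spell out why a single rule can be enough (my reading of slirp's behaviour, so take it with a grain of salt): qemu's user-mode networking preserves non-loopback client addresses when it hands a hostfwd connection to the guest, so a guest without a default route cannot answer external clients. SNAT in the nat table's INPUT chain rewrites the client address before the packet reaches slirp's listening socket:

# Sketch, assuming qemu runs with something like
#   -netdev user,net=10.0.0.0/24,hostfwd=tcp::22-10.0.0.15:22
# SNAT in the nat/INPUT chain rewrites the source address of packets
# destined for local sockets, so slirp sees every external client as
# 10.0.0.2. That address is on-link for the guest, so replies come
# straight back without a default route.
iptables -t nat -A INPUT -j SNAT --to-source 10.0.0.2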

@plajjan (Collaborator) commented Apr 2, 2019

Interesting. I've been very cautious about introducing more iptables magic, which is why I went with the socat approach in the first place. I've also considered doing even more magic in the networking part, see #153.

Just to make sure I understand correctly: the iptables rule is added inside the docker container, right? I don't have to run that command outside of it? (I don't see how that could work, since the IP should be local to the container namespace.)

@networkop (Author)

Yeah, that goes inside the container. I just thought that if one line can rip out and replace a few dozen lines, it'll make things easier. You also won't need the default route to 10.0.0.1, since your source address will already be part of the 10.0.0.0/24 range.
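
To illustrate the routing point, here's a sketch assuming slirp's usual addressing for net=10.0.0.0/24 (gateway alias 10.0.0.2, guest at 10.0.0.15); Linux-guest syntax, purely for illustration:

# Inside the guest, the on-link route installed by DHCP is enough:
ip route show
# expect: 10.0.0.0/24 dev eth0 proto kernel scope link src 10.0.0.15
# With the SNAT rule in place, an inbound hostfwd SSH session should
# show its peer as 10.0.0.2 -- on-link, hence no default route needed:
ss -tn state established '( sport = :22 )'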

Regarding your other networking magic: I've tried that approach as well (stealing the IP off eth0 and moving it into a VM), and it works perfectly well. However, one thing to keep in mind is that K8s (and potentially other container orchestrators) determines the health of a pod by pinging its IP address. So what happens is that between the time you steal the IP and the time the VM fully boots, K8s decides that the CNI plugin has failed and tears down/recreates the pod. So I ended up sticking with qemu user networking for now, until I find a better way.

@hellt commented Sep 7, 2021

I wonder, @networkop, how does 10.0.0.2 get exposed to the container netns? It is a qemu user network, so how can iptables know about this range?

@networkop (Author)

@hellt I don't remember the details now; maybe qemu is creating a device with this IP?

@hellt commented Sep 7, 2021

qemu doesn't, hence I was quite surprised to stumble upon that issue. It seems like you solved it, though, so I am trying to recollect how it was supposed to work.

@udaykiran-chava

@hellt and @networkop, I'm trying to figure out the same thing: who is exposing 10.0.0.2? If you remember, please let me know.

@networkop (Author)

@udaykiran-chava I think this is hard-coded in qemu, so it's the qemu process itself:
https://github.com/qemu/qemu/blob/9de5f2b40860c5f8295e73fea9922df6f0b8d89a/net/slirp.c#L420
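
For reference, the addressing convention there (standard slirp behaviour; the net= option re-bases the same host-part offsets onto another subnet):

# default net=10.0.2.0/24 -> host/gateway alias 10.0.2.2, DNS 10.0.2.3,
#                            first DHCP address 10.0.2.15
# net=10.0.0.0/24 (as used here) -> 10.0.0.2, 10.0.0.3, 10.0.0.15
# Quick check from inside the guest -- the alias is answered by slirp
# itself, not by any device in the container netns, which is why
# iptables never lists 10.0.0.2 as a local address:
ping -c1 10.0.0.2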

@udaykiran-chava

Thanks @networkop

@hellt commented May 26, 2022

@udaykiran-chava would you be able to update this issue with your findings after testing this?
The first thing I notice is that the IP there is 10.0.2.2, so maybe there was a typo in the original message.

I would very much like to get rid of socat for containerlab-friendly images, as it would simplify the provisioning of exposed ports.
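
For comparison, the stitching being replaced looks roughly like this (illustrative values, not vrnetlab's exact invocation): qemu forwards a loopback high port into the guest, and one socat process per exposed port bridges it to the container's outside interface.

# qemu side: hostfwd bound to loopback on a high port, e.g.
#   -netdev user,...,hostfwd=tcp:127.0.0.1:2022-10.0.0.15:22
# socat side: one listener per exposed service
socat TCP-LISTEN:22,fork TCP:127.0.0.1:2022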

@udaykiran-chava

@hellt
iptables -t nat -A INPUT -j SNAT --to-source 10.0.0.2

This worked, so the socat rules can be bypassed.

I had a bare Debian stretch container with all the qemu, iptables, etc. packages installed. I started a Cisco IOS XE VM with the command below:

qemu-system-x86_64 -enable-kvm -display none -machine pc \
  -monitor tcp:0.0.0.0:4000,server,nowait -m 4096 \
  -serial telnet:0.0.0.0:5000,server,nowait \
  -drive if=ide,file=/csr1000v-universalk9.17.03.03-overlay.qcow2 \
  -device pci-bridge,chassis_nr=1,id=pci.1 \
  -device virtio-net-pci,netdev=p00,mac=52:54:00:44:9b:00 \
  -netdev user,id=p00,net=10.0.0.0/24,tftp=/tftpboot,hostfwd=tcp::22-10.0.0.15:22 \
  -device virtio-net-pci,netdev=p01,mac=52:54:00:cb:10:01,bus=pci.1,addr=0x2 \
  -netdev socket,id=p01,listen=:10001

I don't have any socat, so SSH to the container IP was not connecting to the VR. After adding the iptables rule, I'm able to SSH to the VR from outside on port 22.

@hellt commented Feb 10, 2023

@udaykiran-chava that's great! Can you maybe also check whether the opposite communication path works (from inside the VM to an external IP)?

@udaykiran-chava

The outbound traffic from the VR didn't work. I think we have to set up one more iptables rule for that.
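
One guess at the missing piece (an assumption, untested): slirp re-originates guest traffic from inside the container's own network stack, so outbound connectivity normally needs no extra container-side NAT; what the guest does need for off-subnet destinations is a route toward slirp's gateway alias (the .2 address of the user-net subnet). In Linux-guest syntax, purely as a sketch:

# Untested guess: restore a default route via slirp's gateway alias.
ip route add default via 10.0.0.2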
