This repository has been archived by the owner on Feb 24, 2020. It is now read-only.

Exposed ports only accessible on localhost, even with "--port name:0.0.0.0:dport" #3886

Open
insidewhy opened this issue Dec 30, 2017 · 17 comments · May be fixed by #3887

Comments

@insidewhy

insidewhy commented Dec 30, 2017

Environment

rkt Version: 1.29.0
appc Version: 0.8.11
Go Version: go1.9
Go OS/Arch: linux/amd64
Features: -TPM +SDJOURNAL
--
Linux 4.14.5-1-ARCH x86_64
--
NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
ID_LIKE=archlinux
ANSI_COLOR="0;36"
HOME_URL="https://www.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://bugs.archlinux.org/"
--
systemd 235
+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN default-hierarchy=hybrid

What did you do?

sudo rkt run --insecure-options=image,paths,capabilities --dns=127.0.0.1 --net=default --interactive --port=proxy:0.0.0.0:1080 some-aci.aci

The aci was built with:

acbuild port add proxy tcp 1080

I have zero firewall rules set up other than those that rkt sets up.

What did you expect to see?

Port 1080 should be bound to 0.0.0.0, so I should be able to reach it from any host on my network.

What did you see instead?

I can only access port 1080 from localhost.

@insidewhy
Author

insidewhy commented Dec 30, 2017

This used to work until about nine months ago. I've googled furiously and can only find people with the exact opposite problem (they want to bind only to localhost, but the port gets bound to 0.0.0.0).

@insidewhy
Author

insidewhy commented Dec 30, 2017

I also tried using a bridged network with the following file at /etc/rkt/10-scram.conf:

{
  "cniVersion": "0.1.0",
  "name": "scram",
  "type": "bridge",
  "bridge": "cni0",
  "ipMasq": true,
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "172.16.28.0/24",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}

Then using --net=scram instead of --net=default but the issue is still the same.

@insidewhy
Author

I've found out why it behaves this way. It comes down to the following two firewall rules:

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         
RKT-PFWD-SNAT-43042192  all  --  localhost.localdomain !localhost.localdomain 
CNI-f4e9851c1a6b272abc1d5136  all  --  172.16.28.0/24       anywhere             /* name: "default" id: "43042192-0468-424e-8a5d-bd92ebd90a1c" */
Chain RKT-PFWD-SNAT-43042192 (2 references)
target     prot opt source               destination         
MASQUERADE  tcp  --  localhost.localdomain  172.16.28.14         tcp dpt:socks

If I add the same rules again, but without the --source localhost.localdomain restriction, then the port is exposed as I want.

I don't get it: given I'm using --port name:bindaddress:destport, this restriction doesn't seem to make sense.
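For anyone wanting to try the same workaround, here is a sketch that builds the less-restrictive SNAT rule described above. The pod IP, chain suffix, and port are the examples from this thread (substitute your own — the RKT-PFWD-SNAT-* chain name comes from `iptables -t nat -L`); the script only prints the command so it can be reviewed before applying it with sudo.

```shell
#!/bin/sh
# Sketch of the workaround above: re-add the pod-forward SNAT rule
# without the `--source localhost.localdomain` restriction.
# POD_IP, CHAIN, and PORT are the example values from this thread.
POD_IP=172.16.28.14
CHAIN=RKT-PFWD-SNAT-43042192
PORT=1080

# Print the command rather than run it (running needs root);
# prefix with sudo to apply it.
echo "iptables -t nat -A $CHAIN -p tcp -d $POD_IP --dport $PORT -j MASQUERADE"
```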

@insidewhy
Author

This is the commit that changed rkt's behaviour:

cd05e8b#diff-adb1d51f0a72db7676d3dcb3d42d010d

Hm, this is looking like a bug where the bind address from the port forwarding specification isn't being respected.

@insidewhy insidewhy changed the title Exposed ports only accessible on localhost Exposed ports only accessible on localhost, "--port name:0.0.0.0:dport" not bound to 0.0.0.0 Jan 1, 2018
@insidewhy insidewhy changed the title Exposed ports only accessible on localhost, "--port name:0.0.0.0:dport" not bound to 0.0.0.0 Exposed ports only accessible on localhost, "--port name:0.0.0.0:dport" only accessible from 127.0.0.1, not 0.0.0.0 Jan 1, 2018
@insidewhy insidewhy changed the title Exposed ports only accessible on localhost, "--port name:0.0.0.0:dport" only accessible from 127.0.0.1, not 0.0.0.0 Exposed ports only accessible on localhost, even with "--port name:0.0.0.0:dport" Jan 1, 2018
@squeed
Contributor

squeed commented Jan 3, 2018

FYI, POSTROUTING is only used to masquerade traffic originating from 127.0.0.1. Rewriting the destination is done in the PREROUTING chain.

@insidewhy
Author

*shrugs* The ports are only accessible from 127.0.0.1 without the additional, less-restrictive POSTROUTING and RKT-PFWD-SNAT-* entries, for whatever reason.

@insidewhy
Author

As we've been unable to fix this, my company has had to abandon rkt and re-adopt docker :(

@orthecreedence

I'm getting this problem as well. It's weird: rkt works fine on a cluster of EC2/Ubuntu servers, but on Linode/Slackware I can only connect to containers using localhost.

rkt Version: 1.29.0
appc Version: 0.8.11
Go Version: go1.8.3
Go OS/Arch: linux/amd64
Features: -TPM +SDJOURNAL

^ Using this version for both environments.

The Linode servers DO have custom iptables rules to allow traffic between the servers, and I'm wondering if that's the problem... however, after disabling my custom rules, going clean-slate, and restarting my container, I get the same problem.

@ohjames What OS/environment are you running into this issue on?

@insidewhy
Author

@orthecreedence on archlinux and Ubuntu, both with zero iptables rules set up before running rkt. As far as I can tell rkt is just for clusters, and from squeed's comment it seems these guys don't understand iptables very well. So back to docker and its silly daemon architecture for my company :(

@insidewhy
Author

I also started a PR to try to fix the code, but got zero support or even acknowledgement... on such a key issue. That doesn't give me much faith in rkt.

@orthecreedence

I saw your PR, and yeah, it kind of upsets me that it didn't get acknowledged. I'm also really put off by Docker's architecture, so rkt is a natural choice.

This probably won't help you (hopefully it does), but I went mucking around in my iptables rules, and found that the FORWARD chain was set to DROP by default:

Chain FORWARD (policy DROP)
target     prot opt source               destination

I set FORWARD to ACCEPT:

iptables -P FORWARD ACCEPT

And now I can connect to my containers again. It seems so easy it's probably not the answer to your problem, but I wanted to share it on the off-chance that it is. I'm not sure what the implications of allowing all FORWARDs are, so I have some reading to do... I set these firewall rules up probably back in 2010 and of course didn't document them =].

@orthecreedence

Ha, blanket ACCEPTing on FORWARD just opens the server to the world. Looks like I have more work to do on this...
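A narrower middle ground than a blanket ACCEPT might be to accept forwarding only for the CNI pod subnet. This is just a sketch using the 172.16.28.0/24 subnet from this thread as an example (substitute your own, and note it hasn't been verified against rkt's generated rules); it writes an iptables-restore fragment to a file for review rather than applying anything.

```shell
#!/bin/sh
# Sketch: scope FORWARD to the pod subnet instead of a blanket ACCEPT.
# 172.16.28.0/24 is the example CNI subnet from this thread.
# Review the file, then apply with:
#   sudo iptables-restore --noflush /tmp/rkt-forward.rules
cat > /tmp/rkt-forward.rules <<'EOF'
*filter
-A FORWARD -d 172.16.28.0/24 -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
-A FORWARD -s 172.16.28.0/24 -j ACCEPT
COMMIT
EOF
cat /tmp/rkt-forward.rules
```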

@insidewhy
Author

I have iptables -P FORWARD ACCEPT already but I still see this issue.

@iaguis
Member

iaguis commented Feb 21, 2018

Hi! I've tried to reproduce this problem but couldn't; here's what I did.

Starting from a clean system:

core@core-01 ~ $ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:eb:c5:34 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 82350sec preferred_lft 82350sec
    inet6 fe80::a00:27ff:feeb:c534/64 scope link 
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 08:00:27:7a:51:7e brd ff:ff:ff:ff:ff:ff
    inet 172.17.8.101/24 brd 172.17.8.255 scope global eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe7a:517e/64 scope link 
       valid_lft forever preferred_lft forever
core@core-01 ~ $ sudo iptables -vL
Chain INPUT (policy ACCEPT 138 packets, 7556 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 99 packets, 7868 bytes)
 pkts bytes target     prot opt in     out     source               destination 
core@core-01 ~ $ sudo iptables -vL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

I started a pod with nginx:

core@core-01 ~ $ sudo rkt run --insecure-options=image --port=http:0.0.0.0:8080 nginx-latest-linux-amd64.aci --interactive
stage1: warning: no volume specified for mount point "html", implicitly creating an "empty" volume. This volume will be removed when the pod is garbage-collected.
stage1: warning: no volume specified for mount point "html", implicitly creating an "empty" volume. This volume will be removed when the pod is garbage-collected.

The rules were changed like so:

core@core-01 ~ $ sudo iptables -vL -t nat
Chain PREROUTING (policy ACCEPT 1 packets, 76 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    1    76 RKT-PFWD-DNAT-26d23c60  all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 1 packets, 76 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RKT-PFWD-DNAT-26d23c60  all  --  any    any     anywhere             anywhere             ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 RKT-PFWD-SNAT-26d23c60  all  --  any    any     localhost           !localhost           
    0     0 CNI-b9a0b6cf3e080d97e40c3d9e  all  --  any    any     172.16.28.0/24       anywhere             /* name: "default" id: "26d23c60-755d-4510-b483-154bf101c02e" */

Chain CNI-b9a0b6cf3e080d97e40c3d9e (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  any    any     anywhere             172.16.28.0/24       /* name: "default" id: "26d23c60-755d-4510-b483-154bf101c02e" */
    0     0 MASQUERADE  all  --  any    any     anywhere            !base-address.mcast.net/4  /* name: "default" id: "26d23c60-755d-4510-b483-154bf101c02e" */

Chain RKT-PFWD-DNAT-26d23c60 (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             tcp dpt:http-alt to:172.16.28.2:80

Chain RKT-PFWD-SNAT-26d23c60 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 MASQUERADE  tcp  --  any    any     localhost            172.16.28.2          tcp dpt:http

Then, from another machine in the network:

core@core-02 ~ $ curl 172.17.8.101:8080
<html>
<head><title>403 Forbidden</title></head>
<body bgcolor="white">
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx/1.8.1</center>
</body>
</html>

@iaguis
Member

iaguis commented Feb 21, 2018

I'm using 1.29.0 too:

core@core-01 ~ $ rkt version
rkt Version: 1.29.0
appc Version: 0.8.11
Go Version: go1.9.4
Go OS/Arch: linux/amd64
Features: -TPM +SDJOURNAL

> Whereas I have a source of localhost.localdomain. In my comment from 1st January I identified this as being the issue.

I think this depends on what you have in /etc/hosts. I have

127.0.0.1	localhost
::1		localhost

and you probably have something like

127.0.0.1	localhost.localdomain	localhost
::1		localhost.localdomain	localhost

Passing the -n option to iptables should make it display IP addresses instead, and then it should show 127.0.0.1.
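A quick way to check which name a box will print for 127.0.0.1 (a sketch; it just queries the local resolver, which is what iptables consults when run without -n):

```shell
#!/bin/sh
# Show how 127.0.0.1 reverse-resolves locally. With
#   127.0.0.1  localhost.localdomain  localhost
# in /etc/hosts, the first name (localhost.localdomain) is what
# iptables prints for the rule source when run without -n.
HOSTS_LINE=$(getent hosts 127.0.0.1)
echo "$HOSTS_LINE"
```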

@insidewhy
Author

@iaguis I switched them but it makes no difference. Also, whether you use -n or not, you see any for the SNAT source and I see 127.0.0.1.

@CodyKochmann

Since there isn't really a one-liner solution to this common problem, I feel I should share the two snippets that have just plain worked for me, and that didn't take a ton of setup, to get pod ports exposed on the host's public ports.

ssh -nNT -L 0.0.0.0:9200:172.16.28.2:9200 localhost

Note: I am not proud of this at all but it works and is a single step.

ncat -l 0.0.0.0 9200 --sh-exec "ncat 172.16.28.2 9200"

I'm slightly less ashamed of this one, but I still feel like exposing a port shouldn't be as difficult as it is.

I've seen people throw together systemd services that used socat to do multi-threaded port forwarding, but over time its memory footprint inflated until it became an issue.
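As an alternative to a hand-rolled socat service, systemd ships systemd-socket-proxyd for exactly this kind of forwarding. A sketch using the example addresses from this thread (172.16.28.2:9200; the unit names are made up, and the files are written to /tmp here purely for review before installing under /etc/systemd/system):

```shell
#!/bin/sh
# Sketch: socket-activated TCP proxy via systemd-socket-proxyd, as an
# alternative to socat. 172.16.28.2:9200 is the example pod address from
# this thread; "rkt-proxy" is a hypothetical unit name. After reviewing,
# copy to /etc/systemd/system and run:
#   sudo systemctl enable --now rkt-proxy.socket
cat > /tmp/rkt-proxy.socket <<'EOF'
[Socket]
ListenStream=0.0.0.0:9200

[Install]
WantedBy=sockets.target
EOF

cat > /tmp/rkt-proxy.service <<'EOF'
[Unit]
Requires=rkt-proxy.socket
After=network.target

[Service]
ExecStart=/usr/lib/systemd/systemd-socket-proxyd 172.16.28.2:9200
EOF

cat /tmp/rkt-proxy.socket /tmp/rkt-proxy.service
```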
