local network container access vulnerability #14041

Closed
phemmer opened this issue Jun 19, 2015 · 33 comments · Fixed by #28257
Labels: area/networking, area/security, status/needs-attention (calls for a collective discussion during a review session)
@phemmer (Contributor) commented Jun 19, 2015

I have sent several emails about this issue to security@docker.com without receiving a reply (the earliest being 3 months ago), so I'm opening the issue here.

There is a vulnerability that allows anyone on the same network as a docker host to access containers running on that host, regardless of exposed ports.

When docker starts, it enables net.ipv4.ip_forward without changing the iptables FORWARD chain default policy to DROP. This means that another machine on the same network as the docker host can add a route to their routing table, and directly address any containers running on that docker host.

For example, if the docker0 subnet is 172.17.0.0/16 (the default subnet), and the docker host's IP address is 192.168.0.10, from another host on the network run:

ip route add 172.17.0.0/16 via 192.168.0.10
nmap 172.17.0.0/16

The above will scan for containers running on the host, and report IP addresses & running services found.

To fix this, docker needs to set the FORWARD policy to DROP when it enables the net.ipv4.ip_forward sysctl parameter.
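As a minimal sketch of that proposed hardening (the `run` wrapper is an illustration device that just prints the commands, so the sketch is side-effect-free; drop it and run as root to apply for real):

```shell
# Proposed ordering: flip the sysctl, then immediately default-deny forwarding.
# 'run' is a dry-run wrapper so this sketch only prints the commands.
run() { echo "$@"; }

run sysctl -w net.ipv4.ip_forward=1   # what docker already does today
run iptables -P FORWARD DROP          # the missing step: default-deny the FORWARD chain
```

With a DROP policy in place, traffic routed at the host from other LAN machines is discarded unless an explicit ACCEPT rule (e.g. for a published port) matches it.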

This issue is verified still present in docker 1.7.0

See also #11508.

@ewindisch (Contributor)

I wouldn't classify this as a vulnerability. For some users, it's a feature. Personally, I assign each Docker host a CIDR and use OSPF to create routes to containers directly. I admit, however, this is not necessarily obvious to users and the concerns with ip_forwarding extend beyond access to containers, and in fact, I believe access to containers is the least worrisome issue with ip_forwarding. I would not be keen to have a solution which always configures these iptables rules by default.

For some history, I had already written up some thoughts 2 years ago on this, although the software has changed quite a bit since then:
#2396 (comment)

In part, at that time, I felt that we should configure iptables for users, or at the very least communicate the risk. A big problem we had was that (then) Docker was dependent on LXC, which was already enabling ip_forwarding without installing iptables rules. As a result, I think it was expected that anyone using LXC (and thus Docker) would need to be aware of this and configure their iptables as a best practice. The problem was more or less punted as a result. Ultimately, however, many developers, not sysadmins, are using Docker, and it's important that we enforce best practices rather than simply encourage them. This is especially true now that Docker is no longer dependent on LXC and finally has network plugins on the horizon.

Finally, I agree we need a network configuration option that is more secure out of the box for how most users use Docker. This should include a means of securing ip_forwarding, and configuring ip_forwarding according to the practice of least privilege (i.e. only configure the minimum interfaces necessary).
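For what "least privilege" could mean concretely: Linux exposes per-interface forwarding toggles, so one hedged sketch (assuming docker0 is the only interface that needs forwarding; shown as a dry run since the real commands require root) is:

```shell
# Dry-run wrapper: prints the command instead of executing it (illustration only).
run() { echo "$@"; }

# Instead of flipping the global net.ipv4.ip_forward, enable forwarding only on
# the bridge Docker manages (docker0 is the default bridge name; adjust as needed).
run sysctl -w net.ipv4.conf.docker0.forwarding=1
```

Caveat: IPv4 per-interface forwarding semantics are subtle (writing the global ip_forward flag also rewrites the per-interface values), so this should be verified on the target kernel before being relied on.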

@phemmer (Contributor, Author) commented Jun 19, 2015

The reason why I would classify this as a vulnerability is because I might be running a container on my host which should not be exposed. Maybe it's being used for development of a new product or contains some secret information. If I take my laptop to an untrusted network, these containers can be accessed. This is not obvious as the whole point of exposing ports is to make containers accessible outside the host. I do not think one would expect their container to be publicly accessible without exposing the ports for it.

@ewindisch (Contributor)

@phemmer It's certainly worthy as a bug. In your example you're describing an insecure host configuration, but that doesn't necessarily mean there is a vulnerability in Docker. In many environments this behavior is not insecure and is preferable. I agree we should do more to make sure users don't configure their host insecurely, or use Docker insecurely. An option to configure a filter as you describe may be reasonable.

@cpuguy83 (Member)

Maybe if forwarding was off and docker is the one enabling it, we should make the iptables changes to DROP.
Also, I'd rather just error out in the daemon if forwarding is disabled... but that's an entirely different issue.

@RRAlex commented Mar 16, 2016

To me, this is a serious vulnerability, as this broken default contradicts the documentation, the UX, and the security model as it is explained...
It will also make you vulnerable in any networking environment where you don't control all of your neighbours...
The --icc=false --iptables=true options also lose their meaning: everyone assumes these options isolate them from the outside except for the exposed ports, and that is indeed what a port scan of your machine will show. But it's not very hard to guess that an IP on 172.1[6-8].0.2 is going to exist most of the time, and to add a route to test it...
This is, again, a very dangerous default and a misleading one!

Above all, this also breaks any suggestion that an n-tier network can help you secure your docker installation. I'm surprised so many people spotted this (#11508, #3416, ...) as it's a non-intuitive thing to try, but nonetheless important...

So, maybe Docker should set FORWARD to DROP by default and only then allow routing on a per exposed service basis?
Or, if we can't change the default anymore, add some --secure-network option?

Otherwise I think it just makes it harder / more obscure for everyone to secure their installation.
What do you think? :)

@RRAlex commented Mar 31, 2016

Wouldn't something like the following work for most IPv4 contexts?

I'm basically saying: if you're coming in through the external default-route interface for an IP other than that interface's IP, you're out.
The following script only works in a single-interface, single-IPv4 situation, but could be made to work with a list of allowed IPs / interfaces and then dropping the rest, or even only for the active docker bridges.
But this example is quicker for demonstrating the concept:

#!/bin/bash
# Find the default-route interface and its primary IPv4 address.
EXT_IF=$( ip route show 0.0.0.0/0 | cut -f5 -d" " )
EXT_IPV4=$( ip addr show dev "${EXT_IF}" | grep "inet " | awk '{print $2}' | sed 's/\/.*//' )

# Drop anything arriving on the external interface that isn't addressed to this host.
iptables -t mangle -I PREROUTING 1 -i "$EXT_IF" ! -d "$EXT_IPV4" -j DROP

What do you think? Does anyone see an issue with this?
It could simply be added as a flag, or as sane default behaviour.

@sanmai-NL

When will this issue be addressed, @thaJeztah?

@phemmer (Contributor, Author) commented May 18, 2016

Oh, a bit of a delayed response, but something only just clicked in my brain:

@ewindisch wrote:
@phemmer It's certainly worthy as a bug. In your example you're describing an insecure host configuration, but that doesn't necessarily mean there is a vulnerability in Docker.

This is incorrect. The default configuration of the system is secure. Docker is the one altering the kernel state (changing the sysctl flag) and causing the system to become insecure, not the user.


@RRAlex wrote:
So, maybe Docker should set FORWARD to DROP by default and only then allow routing on a per exposed service basis?

Yes, this is what I'm proposing here. The system default is net.ipv4.ip_forward=0. This is effectively the same thing as an iptables FORWARD policy of DROP; thus, by setting the policy to DROP when setting ip_forward=1, you're preserving the effective behavior of the system. Now if ip_forward were already set to 1, then docker probably shouldn't touch the FORWARD policy, as the user has obviously changed it, and might have reasons for having done so that would be broken by setting FORWARD to DROP.
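That conditional behaviour could look like this sketch (`maybe_harden` is a hypothetical name for illustration, not Docker code; it prints the commands it would run rather than executing them):

```shell
# Hypothetical daemon-startup logic: only install a DROP policy when docker
# itself is the one turning forwarding on; otherwise leave the admin's setup alone.
maybe_harden() {
    current=$1  # current value of net.ipv4.ip_forward
    if [ "$current" = "0" ]; then
        # Forwarding was off, so nothing could be forwarded before. Preserve that
        # effective behavior by pairing the sysctl flip with a default-deny policy.
        echo "sysctl -w net.ipv4.ip_forward=1"
        echo "iptables -P FORWARD DROP"
    else
        # The admin already enabled forwarding and may rely on the current policy.
        echo "leave FORWARD policy untouched"
    fi
}

maybe_harden "$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null || echo 0)"
```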

@WillemElbers

@phemmer:
We tried to reproduce the issue on digital ocean (private network enabled) without success.

We installed two VMs based on CentOS 7 with firewalld disabled: VM1 with docker-engine installed (tried versions 1.7.0 and 1.11.2), and added a route from VM2 to VM1 as described in your report.

We suspect that additional filtering is done by digital ocean, blocking the traffic both in the public and private network.

Any suggestions on how we can reproduce this issue?

Background info:

With docker 1.7.0 we have the following iptables output on VM1:

Chain INPUT (policy ACCEPT)
target     prot opt source               destination

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
DOCKER     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere             ctstate RELATED,ESTABLISHED
ACCEPT     all  --  anywhere             anywhere
ACCEPT     all  --  anywhere             anywhere

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

Chain DOCKER (1 references)
target     prot opt source               destination
ACCEPT     tcp  --  anywhere             172.17.0.3           tcp dpt:irdmi

Ip forwarding is enabled on VM1:

# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

@phemmer (Contributor, Author) commented Jun 2, 2016

@WillemElbers I've never used Digital Ocean, but most cloud providers' networks operate very differently from physical networks in that they don't do real layer 2 switching, but emulate it instead.
If you don't have 2 PCs on the same physical network, you can use a local virtual network, such as one created with VirtualBox.

I was able to do this successfully with the following Vagrantfile:

Vagrant.configure(2) do |config|
  config.vm.define "node1" do |node|
    node.vm.box = "ubuntu/trusty64"
    node.vm.network "private_network", ip: "10.0.0.10"
  end
  config.vm.define "node2" do |node|
    node.vm.box = "ubuntu/trusty64"
    node.vm.network "private_network", ip: "10.0.0.11"
  end
end

I installed docker on node1 and started a container with a service listening (netcat). The container ended up with IP 172.18.0.2. I then ran the following on node2: ip route add 172.18.0.2/32 via 10.0.0.10 and was able to connect to the container from node2.

@justincormack (Contributor) commented Jun 2, 2016

@WillemElbers I think it won't work if your machines have properly routed IPv4 addresses (as on Digital Ocean), only if they have RFC 1918 addresses, since the docker networks are also RFC 1918 addresses that won't be routed onto the internet regardless of the routes you create. So it will only happen, for example, with laptops on a private net, or with cloud providers who internally provide RFC 1918 addresses.

@phemmer (Contributor, Author) commented Jun 3, 2016

The IP address has no bearing. It's whether the network supports layer 2 switching. In my example, when node 2 sends the packet to 172.18.0.2, it sends it out to the network with the MAC address of node 1. Node 1 then receives the packet, looks at the destination (172.18.0.2) and then routes it to the container.
Most cloud providers don't let you do this, at least not by default. You can with AWS, but you have to enable the behavior.

@WillemElbers

@phemmer Thanks for the clarification, we have been able to reproduce the issue on a layer 2 network as you described.

As far as we have been able to test, changing the iptables FORWARD policy to DROP (iptables -P FORWARD DROP) indeed solves the issue, without affecting container communication or mapped ports on the docker host.

It seems straightforward to set the FORWARD policy to DROP as the default when running the docker daemon. Maybe include a parameter to make this configurable.

@thaJeztah (Member)

@phemmer @WillemElbers this issue came up recently in an internal discussion. I don't recall the outcome, but perhaps @mrjana can chime in on this.

thaJeztah added the status/needs-attention label on Jun 3, 2016
@GameScripting commented Jun 26, 2016

Are there any updates to this yet?

@jcstover

A very short recap of the issue I am facing, which is not exactly the issue described here: access to the host is restricted in the INPUT chain, preventing outside access, yet because of the FORWARD rules placed by Docker, everyone is granted access to containers with mapped ports.

I would suggest that in a secure environment users are responsible for setting the correct rules in the FORWARD chain, and Docker should never touch those rules. This way you can restrict access to traffic going to the DOCKER chain and still use Docker's port-mapping voodoo.

In version 1.11.1 Docker still wants to control the FORWARD chain via the DOCKER and DOCKER-ISOLATION chains, leaving the user with no ability at all to restrict access.

We have now two options:

  1. Docker controls all of iptables
  2. Docker doesn't touch iptables at all

For me it would be ideal to see a third option:
3. Docker changes only the DOCKER chain, and the user is responsible for setting up the configuration to start with

The last option would give the user the possibility to control access and still benefit from Docker's ability to manage the DOCKER chain.

@phemmer (Contributor, Author) commented Jun 27, 2016

@jcstover that's a separate issue entirely.
I see you opened #23987 for this, and I think that is the right action. @cpuguy83 I don't think these should be the same issue. #23987 is about docker not overriding custom rules. This issue is about a vulnerability docker exposes.

@GameScripting commented Jun 27, 2016

Yeah, these are separate issues. But I guess the solution for both problems might be the same, which makes it a little harder to distinguish between them.
Anyway, #23987 was closed as a dup of this one.

@cpuguy83 (Member)

@phemmer The other one is not about docker overriding rules, it's about docker injecting rules such that custom rules are ignored.
These are one and the same IMO.

@phemmer (Contributor, Author) commented Jun 27, 2016

@GameScripting I don't think they would be the same at all. But even if they might be, that doesn't mean they should be merged. What if the resolutions aren't the same? Then we have no issue to track whichever problem wasn't solved.

@cpuguy83 That's what I mean. If you put in a custom FORWARD rule to perform an action, and docker then injects its own rule which causes your rule to be ignored, then it's overridden.

This issue has nothing to do with custom rules. This issue is about docker changing net.ipv4.ip_forward without taking appropriate measures to ensure containers are protected.

@GameScripting commented Jun 27, 2016

Yeah, right. Two separate issues:

  1. Docker changing net.ipv4.ip_forward without adding a DROP policy to the FORWARD chain to ensure containers are protected (this issue, local network container access vulnerability #14041)
  2. Docker "overriding" custom iptables rules by modifying the FORWARD chain directly (former Docker should not update FORWARD chain on startup #23987)

Possible solutions might be:

  1. Set default DROP policy to the FORWARD chain
  2. Only auto-add port-mapping rules to the DOCKER chain so it does not override custom iptable rules in the FORWARD chain
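A sketch of how those two possible solutions could compose (illustrative admin-side rules, not Docker's actual startup sequence; printed via a dry-run wrapper so the sketch is safe to run, remove it and run as root to apply):

```shell
# Dry-run wrapper: prints each iptables rule instead of installing it.
run() { echo iptables "$@"; }

run -P FORWARD DROP                                                     # 1. default-deny forwarded traffic
run -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT     #    keep reply traffic flowing
run -A FORWARD -j DOCKER                                                # 2. delegate published-port decisions to the DOCKER chain
```

Under this layout, any custom admin rules appended to FORWARD would keep their effect, since Docker would only ever touch its own chain.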

@cpuguy83 What do you think? (Why) do you think both issues are the same?

@jcstover

@GameScripting
As you state it, the third option I proposed would make both solutions possible. For good security it is vital that a sysadmin can control the iptables configuration. If there were an option to tell the Docker daemon not to touch the FORWARD chain, the user could add a DROP policy to the FORWARD chain, and fine-grained FORWARD rules would become possible.

On the upside, the iptables configuration itself works like a charm; it only gets messed up on a restart of Docker.

thaJeztah added a commit to thaJeztah/docker that referenced this issue Sep 20, 2019
full diff: moby/libnetwork@92d1fbe...96bcc0d

changes included:

- moby/libnetwork#2429 Updating IPAM config with results from HNS create network call
  - addresses moby#38358
- moby/libnetwork#2450 Always configure iptables forward policy
  - related to moby#14041 and moby/libnetwork#1526

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
docker-jenkins pushed a commit to docker/docker-ce that referenced this issue Sep 23, 2019 (same libnetwork bump; Upstream-commit: 75477f0b3c77f2108a6b5586dbc246c52b479941, Component: engine)
thaJeztah added a commit to thaJeztah/docker that referenced this issue Sep 24, 2019 (cherry picked from commit 75477f0)
docker-jenkins pushed a commit to docker/docker-ce that referenced this issue Sep 25, 2019 (same cherry-pick; Upstream-commit: 559be42fc26048f4069de64f84202803a113413a, Component: engine)
burnMyDread pushed a commit to burnMyDread/moby that referenced this issue Oct 21, 2019 (same libnetwork bump)
@gwisp2 commented Apr 19, 2023

The vulnerability can still be exploited when a binding like 127.0.0.1:HOST_PORT:CONTAINER_PORT is used.
For such a port mapping, docker creates two iptables rules:

  1. table nat, used in PREROUTING: if daddr is 127.0.0.1 and dport is HOST_PORT, DNAT to CONTAINER_IP:CONTAINER_PORT
  2. table filter, used in FORWARD: if daddr is CONTAINER_IP and dport is CONTAINER_PORT, ACCEPT

The second rule will accept connections that are actually from external hosts.
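Spelled out as approximate iptables commands (the addresses and ports — 172.17.0.2, 8080, 80 — are made-up stand-ins for CONTAINER_IP, HOST_PORT, and CONTAINER_PORT; printed via a dry-run wrapper so the example is safe to execute anywhere):

```shell
# Dry-run wrapper: prints each rule instead of installing it.
run() { echo iptables "$@"; }

# 1. nat table, reached from PREROUTING: rewrite 127.0.0.1:8080 to the container.
run -t nat -A DOCKER -d 127.0.0.1/32 -p tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80

# 2. filter table, reached from FORWARD: accept anything already aimed at the container port.
# Note: this match never checks where the packet came FROM, which is the hole described above.
run -t filter -A DOCKER -d 172.17.0.2/32 -p tcp --dport 80 -j ACCEPT
```

Because rule 1 already ran in PREROUTING, a packet from an external host that was routed at 127.0.0.1:8080 arrives at the FORWARD chain with the container's address as its destination, and rule 2 accepts it.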

@polarathene (Contributor) commented May 24, 2023

That vulnerability still can be exploited

Not quite the same 😅

  • The original issue here was about the default FORWARD chain policy being ACCEPT. That would allow accessing a container's ports (even ones not explicitly published) due to net.ipv4.ip_forward=1 (set by docker).
  • Ports published to addresses that should be private to the docker host, like 127.0.0.1:HOST_PORT or 172.17.0.0/16:CONTAINER_PORT, can still be routed to from another LAN host via ip route, for the reasons you've identified.

Thus the attack surface is not as wide, but it probably warrants its own issue (while not the main focus, a related prominent issue/discussion is open at #22054) (EDIT: I opened a specific issue: #45610)

  • Only applies to hosts on a local network AFAIK, thus mostly a risk when connecting to untrusted public networks (like wifi at a cafe / airport), or one of the trusted systems on the network has been compromised (eg: home / corporate network via third-party introducing malware).
  • Apparently some LAN networks may be configured to prevent being able to reach containers this way, but it was easy for me to reproduce between VM guests and also when connecting two servers at cloud vendor via their VPC network.
  • On hosts with firewalld active, the docker networks are placed in the docker zone, which prevents the LAN hosts from being able to route to the container IPs directly. 127.0.0.1 on the lo interface however is still routable (PREROUTING => DNAT), not sure if firewalld can do anything about that 🤷‍♂️ (while UFW doesn't seem to mitigate either)

Possible Mitigation

I am not sure if this introduces regressions, but 127.0.0.1 routing from external hosts should be avoidable with:

# Avoid applying DNAT rules too early when the destination is `127.0.0.1` (delay until the OUTPUT chain):
# https://askubuntu.com/questions/579231/whats-the-difference-between-prerouting-and-forward-in-iptables/579242#579242
iptables -t nat -D PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL ! -d 127.0.0.1 -j DOCKER

To prevent connecting to ports at container IPs that were published, I am not familiar with the iptables command needed, but firewalld will protect against that via its docker zone if that's an option. UPDATE: I detailed some potential options if not using firewalld, though they're probably not as good.

@msimkunas

I have already commented on #22054 (comment) but it seems to me that this vulnerability can be prevented by running Docker inside a VM, am I right? Obviously it’s not a proper solution but if your environment allows this (e.g. you’re a developer with root access to your local machine), then perhaps it’s easier and more reliable to run Docker in a VM instead of trying to monkeypatch the iptables rules yourself…

@polarathene (Contributor)

this vulnerability can be prevented by running Docker inside a VM, am I right?

I reproduced it with two VM guests, running Docker in one of them: #45610

So not really. Depends on your network configuration. You can use firewalld to avoid it, UFW doesn't seem to have the equivalent protection but I'm not too experienced with configuring that. See my comment prior to yours or the linked issue for more info.

@msimkunas

@polarathene What I meant was isolating the Docker environment inside the VM so that it is not accessible outside the VM host. This is different from running two VMs on the same network.

@msimkunas commented Nov 28, 2023

@polarathene AFAIK your example simply simulates two machines on a single network, so the fact that they're VMs is not important. What I meant was that using a VM isolates the Docker environment, and the VM host's firewall acts as an external firewall against attackers on the same network as the VM host.

Edit: if you have two hosts on the same network (e.g. host A and host B), and then set up docker inside a VM that is running inside host A, then my assumption is that you wouldn’t be able to access Docker containers running in a VM inside host A by initiating requests from host B.

@polarathene (Contributor) commented Nov 28, 2023

What I meant was that using a vm isolates the Docker environment and the VM host's firewall acts as an external firewall against attackers on the same network as the VM host.

👍

if you have two hosts on the same network (e.g. host A and host B),
and then set up docker inside a VM that is running inside host A,
then my assumption is that you wouldn’t be able to access Docker containers running in a VM inside host A by initiating requests from host B

The vulnerability requires the network shared between the Docker host and the Attacker host to support Layer 2 network switching. The link references AWS docs to opt in to such functionality, while in my issue that capability was default on a Vultr VPC network.

It's not my area of expertise. AFAIK, the two hosts need to belong to the same network to route as shown with ip route in my linked issue; the network on the right side of the command is the one that the two hosts belong to.

  • Thus you can route traffic on host B intended for a subnet through to host A.
  • If host A doesn't have the conditions like my issue notes under "Cause and mitigation options", then nothing should happen AFAIK.
  • If host A does, then the traffic could be routed to the VM, but only to the IP of the guest VM, not to any of its internal networks (like those managed by Docker).
    • I could be mistaken, but I don't have another system available to verify.
    • If host A can reach the Docker containers due to published ports in the Docker host VM, then those are accessible to host B AFAIK. Even though host B normally couldn't connect successfully without host A making the VM port reachable through a common network.

@msimkunas commented Nov 29, 2023

@polarathene thank you for your write-up!

My understanding here though is that if host A (the one hosting the VM) does not have IP forwarding enabled, it should not be possible for a same-network attacker to forward packets to the VM running inside host A.

I have easily reproduced the vulnerability by running two Multipass VMs with Docker in one of them and connecting them to the same host-only network. I’m curious to try my suggested setup though. This requires running a nested VM inside of the VMs so it might be tricky but it should be doable. I’ll see what I can do.

To my understanding, though, the real troublemaker here is IP forwarding: if it's disabled on host A, the packets should not be routed. Because Docker is contained inside a VM, the firewall of host A remains simple and has no custom rules set up by Docker. The way I see it, host A acts like a router firewall that protects the nested Docker host VM from outside attackers.
