docker swarm mode: ports on 127.0.0.1 are exposed to 0.0.0.0 #32299
Comments
Yes, this should output an error; services (by default) "publish" using the "ingress" network, and do not support specifying an IP-address, as it's not possible to predict which node they end up on (thus it's not known which IP-addresses are available - although 127.0.0.1 could be possible). This issue is tracking that feature: #26696 (and this "epic" tracks other options not (yet) supported by services: #25303).

The bug here is that docker should produce an error, instead of silently ignoring the option; reproducible using this minimal docker-compose file:

```yaml
version: "3.2"
services:
  mongodb:
    image: nginx:alpine
    ports:
      - "127.0.0.1:27017:80"
```

ping @dnephin @vdemeester |
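The silent-ignore behavior can be checked from the shell; a minimal sketch, assuming a single-node swarm and the compose file above saved as `docker-compose.yml` (`<node-ip>` is a placeholder for the node's public address):

```shell
# Deploy the stack; swarm mode drops the 127.0.0.1 prefix without warning.
docker stack deploy -c docker-compose.yml repro

# Inspect the published ports: the published port is bound via the ingress
# network, i.e. reachable on every interface, not just loopback.
docker service inspect repro_mongodb --format '{{json .Endpoint.Ports}}'

# Confirm from outside: this succeeds even from a remote machine.
curl -s http://<node-ip>:27017/ >/dev/null && echo "port is public"
```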
@fer2d2 In swarm mode, if you publish a port it goes through the ingress network by default. To work around this, there are a few ways:

```yaml
ports:
  - mode: host
    target: 80
    published: 9005
```

It will do the same, but bound on the host directly instead of going through the ingress network. But as @thaJeztah said, "The bug here is that docker should produce an error, instead of silently ignoring the option" 👼

/cc @mavenugo @aboch to see if there would be a way to actually be able to bind it to a specific IP? (really tricky to achieve, because each node's IP will be different, so...) |
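For reference, the equivalent on the `docker service create` CLI uses the long `--publish` syntax; a sketch (service and image names are illustrative):

```shell
# Publish port 80 of the task as 9005 directly on each node that runs a
# task, bypassing the ingress routing mesh.
docker service create \
  --name web \
  --publish mode=host,target=80,published=9005 \
  nginx:alpine
```

Note that with `mode=host` the port is still bound on all of the node's interfaces; host mode only skips the routing mesh, it does not add interface binding.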
@vdemeester Could I specify localhost as the host target using this notation?

```yaml
ports:
  - mode: host
    target: 127.0.0.1:80
    published: 9005
```

As it is an extended format for ports configuration, it should work properly. Thanks in advance |
It seems that both `target` and `published` are enforced as integers in the long syntax |
I think this is not the desired behaviour if you are connecting to some services via SSH tunnels. For example, if you want to have your MySQL or MongoDB server on 127.0.0.1 and connect via an SSH tunnel, with Docker Swarm you must either expose the database port on 0.0.0.0 or create a custom database container with SSH running inside (and both options are very insecure). Many database clients that use SSH tunnels, like SQL Workbench or Robomongo, can't be used due to this limitation (no specific interface binding). |
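For reference, the tunnel pattern being described; a sketch, assuming the database really were bound only to 127.0.0.1:27017 on the swarm node (host name and user are placeholders):

```shell
# Forward local port 27017 to 127.0.0.1:27017 on the remote node over SSH.
# -N: no remote command, just the tunnel.
ssh -N -L 27017:127.0.0.1:27017 user@swarm-node.example.com

# A client on the workstation then connects to localhost:27017, while the
# port is never exposed on the node's public interfaces.
```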
We have the same problem in our company as @fer2d2, trying to connect Mongobooster to a Docker swarm via SSH tunnel. The only solution we found was opening port 27017 and protecting the database with a user and password. |
Any news? |
+1 |
Another use case for allowing ip_address:port pair for long form port mapping is for anycast addresses or any other addresses that may be associated to loopback. These would be similar to a 127.0.0.1 address in that they are only visible on the loopback network. A service restricted to nodes with this property may wish to expose a port only on an anycast address in order to avoid port collisions while avoiding iptables rules for port translation. |
Can it possibly be an option when you specify:
Cheers |
+1 |
For myself, I solved this problem like so:

Docker does not touch these rules; it just adds its own. |
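The rules themselves are not shown in the comment above; a hedged sketch of the usual approach via the `DOCKER-USER` iptables chain (the interface `eth0` and port 27017 are assumptions, adjust to your setup):

```shell
# Drop traffic arriving on the external interface for the published port.
# For swarm ingress traffic, matching the original destination port via
# conntrack is more reliable than --dport, because the ingress network
# DNATs packets before the filter table sees them.
iptables -I DOCKER-USER -i eth0 -p tcp \
  -m conntrack --ctorigdstport 27017 --ctdir ORIGINAL -j DROP
```

Docker inserts its own rules around `DOCKER-USER` but leaves that chain to the administrator, so rules placed there survive daemon restarts.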
I like your workaround @maxisme, but how do you manage the |
The volume belongs to the UID of the host user, which is not

```yaml
services:
  sshd:
    image: [...]/sshd:${version}
    configs:
      # FIXME: It would be much better to use a bind volume for this, as it
      # would always be in sync with the host configuration. So revoking a key
      # in the host machine would automatically revoke it in the container. But
      # I can't figure out how to give the volume right ownership. It keeps UID
      # from the host which doesn't align with the container user.
      - source: authorized_keys
        target: /root/.ssh/authorized_keys
        mode: 0600

configs:
  authorized_keys:
    file: ~/.ssh/authorized_keys
```
|
I understand that due to the fact that you don't know what host a container will be deployed to you cannot tell the service to bind to a specific host ip address. However often hosts have e.g. north and south bound interfaces. You might want the swarm ports to bind only to the northbound interfaces on all swarm hosts. If the interface names of all the interfaces you want a service to bind to are the same (e.g. eth0), it might be an idea to offer an option to specify an interfacename to bind swarm ports to (in service ports section).
When eth0 is not available on a swarm node the specified port won't be bound to any interface. |
@tad-lispy You should be able to change the UID and GID of the container user to be the same as the volume owner on the host. |
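A minimal sketch of that suggestion; `appuser` is a hypothetical user inside the image, and `HOST_UID`/`HOST_GID` would be supplied from `id -u` / `id -g` on the host (e.g. as build args):

```shell
# Remap the container user to match the host owner of the volume.
groupmod -g "${HOST_GID}" appuser
usermod  -u "${HOST_UID}" appuser

# Re-own files created under the old IDs so they match the remapped user.
chown -R appuser:appuser /home/appuser
```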
Found a great article for Ubuntu server users |
…y/firewall bypass - ref. moby/moby#32299 - bump compose file specification to 3.8 (https://github.com/compose-spec/compose-spec/blob/master/spec.md) to support long network syntax
…y/firewall bypass - ref. moby/moby#32299 - bump compose file specification to 3.8 (https://github.com/compose-spec/compose-spec/blob/master/spec.md) to support long network syntax - fixes #458
This should really be prioritized as a security issue. The docs currently say that

I'm not expecting a hotfix to make this behave the way people want, but it definitely SHOULD NOT open a security hole when the user follows the recommendations in the documentation. It should hard error, or at least the docs should be corrected. |
Any updates on this? |
I hope this is treated as highly urgent and fixed soon, because it can potentially lead to major data breaches. |
Got hacked because of this. Ok, my fault, but still... |
There is another solution, at least for bare metal / servers with virtualization available: run Docker in a virtual machine.

Pros

Cons
|
Heads up: #22054 (comment) (2021-11-05)
...
|
After a bunch of hours researching how to prevent this, I found in the documentation that there's the `DOCKER-USER` iptables chain for adding rules, which might help with blocking ports. Although I used an external firewall for simplicity, because iptables is cumbersome. Edit: somebody already mentioned this here. Whoops. |
I hit the same issue when wanting to access the Traefik dashboard on a different port bound to 127.0.0.1:8080. That way I could reach it over SSH, but it listens on 0.0.0.0 instead... and this issue is from 2017... |
2017, and now it is 2024. That is sick... |
I solved this problem without

```yaml
ports:
  - target: 26379  # note: public (internet facing) access is blocked via iptables, see below.
    published: 26379
    mode: host
```

Explanation: |
Publishing in host mode and blocking with iptables are both unscalable. |
Here's a scalable solution using UNIX sockets. Compose file:

```yaml
version: '3.8'
services:
  mk-socket-dir:
    image: alpine
    command: mkdir -p /run/test
    volumes:
      - /run:/run
    deploy:
      mode: global-job
  socket-in:
    image: alpine/socat
    command: "-dd TCP-L:8081,fork,bind=localhost UNIX:/run/test/test.sock"
    volumes:
      - /run/test:/run/test
    networks:
      - public
    deploy:
      mode: global
  socket-out:
    image: alpine/socat
    command: "-dd UNIX-L:/run/test/test.sock,fork TCP:whoami:80"
    volumes:
      - /run/test:/run/test
    networks:
      - internal
    deploy:
      mode: global
  # listens on port 80
  whoami:
    image: traefik/whoami
    hostname: whoami
    networks:
      - internal
networks:
  internal:
    driver: overlay
  public:
    name: host
    external: true
```

Test:

```shell
curl -s http://localhost:8081
```
|
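To try it end to end, the stack can be deployed and probed from a node; a sketch, assuming the compose file above is saved as `socket-stack.yml`:

```shell
# Deploy the stack; mk-socket-dir runs once per node (global-job) to create
# the socket directory before the socat services start.
docker stack deploy -c socket-stack.yml sockets

# socket-in binds only to localhost on each node (bind=localhost), so this
# works locally on the node...
curl -s http://localhost:8081

# ...while nothing is published through the ingress network, so the port is
# not reachable on the node's external addresses.
```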
Since it shares data over a volume, it won't work if there are multiple hosts in your swarm. |
The service deploy mode is global, so this volume will be available on each host. @docwhat |
I'm sorry I overlooked that detail. I don't use Swarm anymore. It's a neat solution to route a localhost port, even if you have to run two containers per host per port. |
Description

In docker swarm mode, binding a port to 127.0.0.1 results in the port also being open on 0.0.0.0. This could be a severe security issue and should be explained in the documentation.

Steps to reproduce the issue:

Describe the results you received:

Describe the results you expected:

The port being only available on 127.0.0.1, at least on the swarm nodes running this service.

Additional information you deem important (e.g. issue happens only occasionally):

Output of `docker version`:

Output of `docker info`:

docker info for swarm manager:

Additional environment details (AWS, VirtualBox, physical, etc.):

Tested on Digital Ocean's droplets.