
iptables failed - No chain/target/match by that name #16816

Closed
mindscratch opened this issue Oct 7, 2015 · 63 comments

Comments

@mindscratch

Bug Report Info

docker version:
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (Client): 786b29d
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d
OS/Arch (server): linux/amd64

docker info:
Containers: 41
Images: 172
Storage Driver: devicemapper
Pool Name: docker-253:2-4026535945-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 7.748 GB
Data Space Total: 107.4 GB
Data Space Available: 99.63 GB
Metadata Space Used: 12.55 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.135 GB
Udev Sync Supported: true
Deferred Removal Enabled: true
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-123.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 24
Total Memory: 125.6 GiB
Name:
ID:

uname -a:
Linux 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

Environment details (AWS, VirtualBox, physical, etc.):
Physical
iptables version 1.4.21

How reproducible:
Random

Steps to Reproduce:

  1. Start container with exposed ports mapped to host ports
  2. Stop container
  3. Repeat, good luck.

Actual Results:

Cannot start container <container id>: iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.23 --dport 4000 -j ACCEPT: iptables: No chain/target/match by that name.

Expected Results:

Container starts without a problem.

Additional info:

I'll also mention these containers are being launched via Apache Mesos (0.23.0) using Marathon. Appears similar to #13914.

@GordonTheTurtle

Hi!

Please read this important information about creating issues.

If you are reporting a new issue, make sure that we do not have any duplicates already open. You can ensure this by searching the issue list for this repository. If there is a duplicate, please close your issue and add a comment to the existing issue instead.

If you suspect your issue is a bug, please edit your issue description to include the BUG REPORT INFORMATION shown below. If you fail to provide this information within 7 days, we cannot debug your issue and will close it. We will, however, reopen it if you later provide the information.

This is an automated, informational response.

Thank you.

For more information about reporting issues, see https://github.com/docker/docker/blob/master/CONTRIBUTING.md#reporting-other-issues


BUG REPORT INFORMATION

Use the commands below to provide key information from your environment:

docker version:
docker info:
uname -a:

Provide additional environment details (AWS, VirtualBox, physical, etc.):

List the steps to reproduce the issue:
1.
2.
3.

Describe the results you received:

Describe the results you expected:

Provide additional info you think is important:

----------END REPORT ---------

#ENEEDMOREINFO

@cpuguy83
Member

cpuguy83 commented Oct 7, 2015

@mindscratch
Author

@cpuguy83 looks like some of those have the same error but aren't quite the same issue; #13914 seems to be the closest.

@cpuguy83
Member

cpuguy83 commented Oct 7, 2015

@mindscratch Have you tried turning off firewalld?

@mindscratch
Author

@cpuguy83 we're not using firewalld, just iptables.

@thaJeztah
Member

@mindscratch in that issue, upgrading to 1.8.3 seems to resolve the problem; are you still able to reproduce this on 1.8.3 (or 1.9.0)?

@mindscratch
Author

I'll have to look at our logs. We put in a cron job that attempts to find the issue and resolve it before it becomes a problem, so I haven't noticed it lately. The cron job logs whenever it has to fix iptables, so I'll check. I am now running 1.9.0.
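
For context, a minimal sketch of what such a cron check could look like (hypothetical, not the actual script; it only assumes the stock DOCKER chains and a systemd host):

#!/bin/sh
# Hypothetical watchdog: if Docker's iptables chains have gone missing,
# log the fact and restart the daemon so it recreates them.
for table in filter nat; do
    if ! iptables -t "$table" -nL DOCKER >/dev/null 2>&1; then
        logger -t docker-iptables-check "DOCKER chain missing from $table table, restarting docker"
        systemctl restart docker
        break
    fi
done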

@noose

noose commented Dec 3, 2015

I have that same problem.

➜  docker  cat docker-compose.yml
poste:
    image: analogic/poste.io
    volumes:
        - "/srv/mail/data:/data"
    ports:
        - 25:25
        - 80:8081
        - 443:8443
        - 110:110
        - 143:143
        - 465:465
        - 587:587
        - 993:993
        - 995:995
➜  docker  docker-compose up poste
Recreating docker_poste_1
WARNING: Service "poste" is using volume "/data" from the previous container. Host mapping "/srv/mail/data" has no effect. Remove the existing containers (with `docker-compose rm poste`) to use the host volume mapping.
ERROR: Cannot start container 187de1f595dc544c503a4bf565d2101c0b0b3805d601ae704d0014750166776e: failed to create endpoint docker_poste_1 on network bridge: iptables failed: iptables -t nat -A DOCKER -p tcp -d 0/0 --dport 995 -j DNAT --to-destination 172.17.0.2:995 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)
➜  docker  docker -v
Docker version 1.9.1, build a34a1d5
➜  docker  docker info
Containers: 2
Images: 73
Server Version: 1.9.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 77
 Dirperm1 Supported: true
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.16.0-0.bpo.4-amd64
Operating System: Debian GNU/Linux 7 (wheezy)
CPUs: 8
Total Memory: 23.59 GiB
Name: Debian-60-squeeze-64-minimal
ID: 7PDG:3ZCD:RL4G:KJAE:PZCO:XTUH:JLRX:IIM4:DHXM:TWHY:UMCK:4GUS
WARNING: No memory limit support
WARNING: No swap limit support
➜  docker  docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:06:12 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:06:12 UTC 2015
 OS/Arch:      linux/amd64

➜  docker  uname -a
Linux Debian-60-squeeze-64-minimal 3.16.0-0.bpo.4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u6~bpo70+1 (2015-11-11) x86_64 GNU/Linux

What information can I send to you?

@mysterytree

I also have the same error
[screenshot of the same error]

@xlight

xlight commented Dec 31, 2015

This issue occurs when I restart a container after stopping firewalld.

docker version: Docker version 1.9.1, build a34a1d5
docker info:
uname -a: Linux databus0 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux

Provide additional environment details (AWS, VirtualBox, physical, etc.):

List the steps to reproduce the issue:

  1. docker run -d --name=sth -p4444:4444 sometth
  2. killall firewalld
  3. docker restart sth

Describe the results you received:

Error response from daemon: Cannot restart container sth: failed to create endpoint sth on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 4444 -j DNAT --to-destination 172.17.0.5:4444 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)
Error: failed to restart containers: [sth]

Describe the results you expected:
restart ok

Provide additional info you think is important:

----------END REPORT ---------

#ENEEDMOREINFO

@vincentsiu

Overview
The following error occurs when trying to run "docker-compose run -d" - but only if 20+ ports are exposed to the host.

ERROR: Cannot start container dcd5227651790c197835e3f2016f8c747bb748f86e95d6492c75f5e3f83ab47d: failed to create endpoint relaydocker_relay_1 on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 33320 -j DNAT --to-destination 172.17.0.2:30903 ! -i docker0: (fork/exec /sbin/iptables: cannot allocate memory)

Bug Report Info

ubuntu@ip-172-31-36-213:~/relay_docker$ docker-compose up -d
Removing relaydocker_relay_1
Recreating 22ac1bb421_22ac1bb421_22ac1bb421_relaydocker_relay_1
ERROR: Cannot start container dcd5227651790c197835e3f2016f8c747bb748f86e95d6492c75f5e3f83ab47d: failed to create endpoint relaydocker_relay_1 on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 33320 -j DNAT --to-destination 172.17.0.2:30903 ! -i docker0:  (fork/exec /sbin/iptables: cannot allocate memory)



ubuntu@ip-172-31-36-213:~/relay_docker$ docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:12:04 UTC 2015
 OS/Arch:      linux/amd64
ubuntu@ip-172-31-36-213:~/relay_docker$ docker info
Containers: 69
Images: 563
Server Version: 1.9.1
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 701
 Dirperm1 Supported: false
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.13.0-74-generic
Operating System: Ubuntu 14.04.2 LTS
CPUs: 1
Total Memory: 992.5 MiB
Name: relay-v1
ID: ZXD2:QKYD:UCX3:2KNK:5J7V:OWHH:CUCS:3V2N:LJWT:YV3N:4BLS:ZBYC
Username: vincentsiu
Registry: https://index.docker.io/v1/
WARNING: No swap limit support
Labels:
 provider=amazonec2
ubuntu@ip-172-31-36-213:~/relay_docker$ uname -a
Linux ip-172-31-36-213 3.13.0-48-generic #80-Ubuntu SMP Thu Mar 12 11:16:15 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
ubuntu@ip-172-31-36-213:~/relay_docker$

Dockerfile

FROM ubuntu:14.04

RUN apt-get update && apt-get install -y openssh-server
#RUN apt-get -y install sudo

RUN mkdir -p /var/run/sshd

# configure sshd_config
RUN sed -i "s/PermitRootLogin.*/PermitRootLogin without-password/g" /etc/ssh/sshd_config
RUN sed -i "s/Port .*/Port 2200/g" /etc/ssh/sshd_config
RUN sed -i "s/LoginGraceTime.*/LoginGraceTime 30/g" /etc/ssh/sshd_config
RUN echo "GatewayPorts yes" >> /etc/ssh/sshd_config

# SSH login fix. Otherwise user is kicked off after login
RUN sed 's@session\s*required\s*pam_loginuid.so@session optional pam_loginuid.so@g' -i /etc/pam.d/sshd

ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile

# ssh port exposed for the container
EXPOSE 2200

# listening to these ports for port forwarding
EXPOSE 8079-8080
EXPOSE 9875-9876
EXPOSE 30000-31000
CMD ["/usr/sbin/sshd", "-D"]

docker-compose.yml

relay:
  restart: always
  build: ./relay
  ports:
    - "2200:22"
    - "8001-9876:8001-9876"
    - "30000-31000:30000-31000"
  command: /usr/sbin/sshd -D

If I try to expose ports 30000-31000 in docker-compose.yml, then running 'docker-compose up -d' gives me the "iptables failed" error.

ERROR: Cannot start container dcd5227651790c197835e3f2016f8c747bb748f86e95d6492c75f5e3f83ab47d: failed to create endpoint relaydocker_relay_1 on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 33320 -j DNAT --to-destination 172.17.0.2:30903 ! -i docker0: (fork/exec /sbin/iptables: cannot allocate memory)

If I reduce the number of exposed ports to less than 20, then the container will start without issue.

I have read that I can try restarting the docker daemon with --iptables=false. How can I do that with docker-compose?

@thaJeztah
Member

@vincentsiu your issue sounds more related to #11185

@pizzarabe

I have a similar problem using docker 1.9.1 and CentOS 7 (1511) on an ESXi VM.

docker version
Client:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:25:01 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.1
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   a34a1d5
 Built:        Fri Nov 20 13:25:01 UTC 2015
OS/Arch:      linux/amd64
docker info

Containers: 0
Images: 11
Server Version: 1.9.1
Storage Driver: btrfs
 Build Version: Btrfs v3.16.2
 Library Version: 101
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.4.4.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 1
Total Memory: 1.797 GiB
Name: swhost-1.rz.tu-bs.de
ID: YNJD:42IN:VKFR:OBQV:4OF3:EIZV:D7ML:MXTO:FJLL:IGP5:JVQG:5POK
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled

If I start the registry v2 container with:

docker run --rm -ti  -v /mnt/registry/content:/var/lib/registry -p 5000:5000 -v /mnt/registry/config/config.yml:/etc/docker/registry/conf.yml -v /etc/pki/tls/docker/:/mnt --name registry registry:2

the port is closed and I am not able to connect:

Host is up (0.00025s latency).
Not shown: 998 filtered ports
PORT     STATE  SERVICE
22/tcp   open   ssh
5000/tcp closed upnp
unable to ping registry endpoint https://swhost-1:5000/v0/
v2 ping attempt failed with error: Get https://swhost-1:5000/v2/: dial tcp 134.169.8.97:5000: connection refused
 v1 ping attempt failed with error: Get https://swhost-1:5000/v1/_ping: dial tcp 134.169.8.97:5000: connection refused

According to firewall-cmd, the port is open:

firewall-cmd --zone=public --list-all
public (default, active)
  interfaces: eno16780032
  sources: 
  services: dhcpv6-client ssh
  ports: 5000/tcp
  masquerade: no
  forward-ports: 
  icmp-blocks: 
  rich rules:

iptables -L -v -n

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   10   536 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
   10   400 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 ACCEPT     all  --  lo     *       0.0.0.0/0            0.0.0.0/0           
    0     0 FORWARD_direct  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 FORWARD_IN_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 FORWARD_IN_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 FORWARD_OUT_ZONES_SOURCE  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 FORWARD_OUT_ZONES  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     icmp --  *      *       0.0.0.0/0            0.0.0.0/0           
    0     0 REJECT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited

...

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.2           tcp dpt:5000

If I stop firewalld

systemctl stop firewalld

 docker run --rm -ti  -v /mnt/registry/content:/var/lib/registry -p 5000:5000 -v /mnt/registry/config/config.yml:/etc/docker/registry/config.yml -v /etc/pki/tls/docker/:/mnt --name registry registry:2

Error response from daemon: Cannot start container b6795863c0469c55e89244e12b764ce686948bfdea57542243beabbf81da4441: failed to create endpoint registry on network bridge: iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 5000 -j DNAT --to-destination 172.17.0.2:5000 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)

@rvdh

rvdh commented Jan 18, 2016

We also notice this behaviour.

docker version:

Client:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 17:37:20 UTC 2015
 OS/Arch:      linux/amd64

Server:
 Version:      1.9.0
 API version:  1.21
 Go version:   go1.4.2
 Git commit:   76d6bc9
 Built:        Tue Nov  3 17:37:20 UTC 2015
 OS/Arch:      linux/amd64

docker info:

Containers: 12
Images: 592
Server Version: 1.9.0
Storage Driver: devicemapper
 Pool Name: docker-253:0-2354982-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: extfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 15.72 GB
 Data Space Total: 107.4 GB
 Data Space Available: 38.99 GB
 Metadata Space Used: 35.24 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.112 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.74 (2012-03-06)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.2.69-xnt-nogr-1.5.3
Operating System: Debian GNU/Linux 7 (wheezy)
CPUs: 16
Total Memory: 15.88 GiB
Name: dev03
ID: F4IG:2KNZ:TABI:SHGC:RWIN:3AYQ:5EX2:XI7N:DOHP:2VXQ:ASDK:RFF6
WARNING: No memory limit support
WARNING: No swap limit support

uname -a:

Linux dev03 3.2.69-xnt-nogr-1.5.3 #1 SMP Thu May 14 21:03:15 CEST 2015 x86_64 GNU/Linux

Provide additional environment details (AWS, VirtualBox, physical, etc.):
This environment is a XenServer virtual host.
iptables v1.4.14

List the steps to reproduce the issue:

  1. Start container with exposed ports mapped to host ports
  2. Stop container
  3. Repeat

Describe the results you received:

root@dev03:~# docker restart foo
Error response from daemon: Cannot restart container foo: failed to create endpoint foo on network bridge: iptables failed: iptables -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.9 --dport 1234 -j ACCEPT: iptables: No chain/target/match by that name.
 (exit status 1)
Error: failed to restart containers: [foo]

Describe the results you expected:
A successful docker restart.

@LRancez

LRancez commented Jan 21, 2016

This is happening to me too.

Error response from daemon: Cannot restart container HAProxy: failed to create endpoint HAProxy on network bridge: iptables failed: iptables --wait -t filter -A DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.5 --dport 8888 -j ACCEPT: iptables: No chain/target/match by that name.
(exit status 1)

And if I run:

iptables -N DOCKER

iptables: Chain already exists.

FYI: just to keep in mind, I'm running docker-compose as the root user, and I didn't see anyone in this post running commands with sudo or su.

Although restarting the docker service restores the health of the system, at least for a while, it is a horrible workaround.

Any other alternatives or ETA for when this will be fixed?
Best,

@sea0breeze

I ran into a similar problem and it was solved by running this command:
# iptables -t filter -N DOCKER
Hope it helps!
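
If the filter-table chain already exists but the error persists, it may be the nat table that lost its DOCKER chain (see the comments below); the same kind of fix applies there:

# iptables -t nat -N DOCKER

Recreating an empty chain only stops the immediate error, though: the per-container rules are still gone, so restarting the docker daemon (or the affected containers) is the more thorough fix.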

@shayts7

shayts7 commented Feb 9, 2016

It happened to us as well, but in our case iptables -t filter -L -v -n showed that the DOCKER chain exists; only when checking the nat table with iptables -t nat -L -v -n did we find that the DOCKER chain had somehow disappeared...


Chain PREROUTING (policy ACCEPT 6402K packets, 388M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain INPUT (policy ACCEPT 981K packets, 62M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 1001K packets, 63M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain POSTROUTING (policy ACCEPT 514K packets, 31M bytes)
 pkts bytes target     prot opt in     out     source               destination
  83M 5047M FLANNEL    all  --  *      *       192.168.0.0/16       0.0.0.0/0
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.135       192.168.18.135       tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.167       192.168.18.167       tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.167       192.168.18.167       tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.172       192.168.18.172       tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.186       192.168.18.186       tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.186       192.168.18.186       tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.194       192.168.18.194       tcp dpt:53
    0     0 MASQUERADE  udp  --  *      *       192.168.18.194       192.168.18.194       udp dpt:53
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.197       192.168.18.197       tcp dpt:3000
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.198       192.168.18.198       tcp dpt:1936
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.198       192.168.18.198       tcp dpt:443
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.198       192.168.18.198       tcp dpt:88
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.198       192.168.18.198       tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.2         192.168.18.2         tcp dpt:53
    0     0 MASQUERADE  udp  --  *      *       192.168.18.2         192.168.18.2         udp dpt:53
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.4         192.168.18.4         tcp dpt:1936
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.4         192.168.18.4         tcp dpt:443
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.4         192.168.18.4         tcp dpt:88
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.4         192.168.18.4         tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.5         192.168.18.5         tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.6         192.168.18.6         tcp dpt:3000
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.8         192.168.18.8         tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.8         192.168.18.8         tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.9         192.168.18.9         tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.9         192.168.18.9         tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.10        192.168.18.10        tcp dpt:8080

Chain FLANNEL (1 references)
 pkts bytes target     prot opt in     out     source               destination
5481K  332M ACCEPT     all  --  *      *       0.0.0.0/0            192.168.0.0/16
 426K   27M MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4

After restarting the docker daemon everything worked fine and we could see the DOCKER chain come back in the nat table:

Chain PREROUTING (policy ACCEPT 5765 packets, 347K bytes)
 pkts bytes target     prot opt in     out     source               destination
 1592 96542 DOCKER     all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 1236 packets, 75057 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 3135 packets, 203K bytes)
 pkts bytes target     prot opt in     out     source               destination
    1    77 DOCKER     all  --  *      *       0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 2423 packets, 159K bytes)
 pkts bytes target     prot opt in     out     source               destination
  83M 5047M FLANNEL    all  --  *      *       192.168.0.0/16       0.0.0.0/0
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.135       192.168.18.135       tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.167       192.168.18.167       tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.167       192.168.18.167       tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.172       192.168.18.172       tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.186       192.168.18.186       tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.186       192.168.18.186       tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.194       192.168.18.194       tcp dpt:53
    0     0 MASQUERADE  udp  --  *      *       192.168.18.194       192.168.18.194       udp dpt:53
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.197       192.168.18.197       tcp dpt:3000
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.198       192.168.18.198       tcp dpt:1936
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.198       192.168.18.198       tcp dpt:443
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.198       192.168.18.198       tcp dpt:88
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.198       192.168.18.198       tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.2         192.168.18.2         tcp dpt:53
    0     0 MASQUERADE  udp  --  *      *       192.168.18.2         192.168.18.2         udp dpt:53
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.4         192.168.18.4         tcp dpt:1936
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.4         192.168.18.4         tcp dpt:443
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.4         192.168.18.4         tcp dpt:88
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.4         192.168.18.4         tcp dpt:80
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.5         192.168.18.5         tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.6         192.168.18.6         tcp dpt:3000
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.8         192.168.18.8         tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.8         192.168.18.8         tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.9         192.168.18.9         tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.9         192.168.18.9         tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.10        192.168.18.10        tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.2         192.168.18.2         tcp dpt:53
    0     0 MASQUERADE  udp  --  *      *       192.168.18.2         192.168.18.2         udp dpt:53
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.5         192.168.18.5         tcp dpt:3000
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.6         192.168.18.6         tcp dpt:5601
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.7         192.168.18.7         tcp dpt:8201
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.7         192.168.18.7         tcp dpt:8200
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.8         192.168.18.8         tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.9         192.168.18.9         tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.10        192.168.18.10        tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.10        192.168.18.10        tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.11        192.168.18.11        tcp dpt:8081
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.11        192.168.18.11        tcp dpt:8080
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.12        192.168.18.12        tcp dpt:1936
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.12        192.168.18.12        tcp dpt:443
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.12        192.168.18.12        tcp dpt:88
    0     0 MASQUERADE  tcp  --  *      *       192.168.18.12        192.168.18.12        tcp dpt:80

Chain DOCKER (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:53 to:192.168.18.2:53
    0     0 DNAT       udp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            udp dpt:53 to:192.168.18.2:53
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:3210 to:192.168.18.5:3000
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:5601 to:192.168.18.6:5601
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8201 to:192.168.18.7:8201
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8200 to:192.168.18.7:8200
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8050 to:192.168.18.8:8080
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:9002 to:192.168.18.9:8080
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8041 to:192.168.18.10:8081
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8040 to:192.168.18.10:8080
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8081 to:192.168.18.11:8081
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:8080 to:192.168.18.11:8080
   27  1620 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:1936 to:192.168.18.12:1936
    0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:443 to:192.168.18.12:443
  139  8340 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:88 to:192.168.18.12:88
   24  1440 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:192.168.18.12:80

Chain FLANNEL (1 references)
 pkts bytes target     prot opt in     out     source               destination
5489K  332M ACCEPT     all  --  *      *       0.0.0.0/0            192.168.0.0/16
 427K   27M MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4

If someone has a clue as to why the chain disappears, I'll be more than happy to hear about it.
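
For what it's worth, the missing pieces in the nat table can be seen in the two listings above: the DOCKER chain itself plus the PREROUTING and OUTPUT jump rules. A hand-reconstruction along these lines should be possible (a sketch pieced together from those listings, not an official procedure), but since the per-container DNAT rules are still gone, restarting the daemon remains the simpler and safer fix:

iptables -t nat -N DOCKER
iptables -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
iptables -t nat -A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER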

@fredrikaverpil

Exactly the same issue here as @shayts7 is describing. Workaround for now is to restart the daemon:

service docker restart

@1993hzh

1993hzh commented Feb 17, 2016

@fredrikaverpil Great! It worked!

@Seraf

Seraf commented Mar 16, 2016

Hello everyone,

I'm using CoreOS and have this problem too, but only on my master.

Running iptables -t nat -N DOCKER solves the problem: pods are automatically created and everything is fine. I'm trying to find out why this chain is removed on my master and not on my workers.

@ghost

ghost commented Mar 17, 2016

We were having this issue. For us it turned out docker was starting before our firewall persistence (iptables-persistent) and its rules were getting overwritten. I resolved it by removing the package, as we were only using it for one rule.

There are ways to keep the two working side by side, either by ensuring docker starts after iptables-persistent (https://groups.google.com/forum/#!topic/docker-dev/4SfOwCOmw-E) or by adding whatever rules the docker service adds to the persistent iptables configuration (didn't test this).
This may be of help, @Seraf, @shayts7.

This is not a docker bug, but maybe it should be addressed in the docs or something.
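
As an illustration of the first option on a systemd host, a drop-in override roughly like the following should do it (a sketch: the persistence unit is netfilter-persistent.service on newer Debian/Ubuntu packages and iptables-persistent on older ones, so adjust the name, and the drop-in filename is arbitrary):

sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/wait-for-iptables.conf
[Unit]
After=netfilter-persistent.service
Wants=netfilter-persistent.service
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker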

@shayts7

shayts7 commented Mar 21, 2016

@vlad-vintila-hs Thanks for the tip

@referup-tarantegui

referup-tarantegui commented May 18, 2016

Same issue here on Ubuntu 14.04 with docker 1.11.1 and docker-compose 1.7.1; no workaround solved the problem.

Solved it with a machine reboot, which is a poor solution, by the way.

@Shuliyey

This seems to only happen on CentOS 7 for me.

This is what I did

stop firewalld

sudo systemctl stop firewalld
sudo systemctl disable firewalld

Restart your machine

sudo reboot

As long as you've added --restart=always to your docker instance, when your machine is rebooted the docker instance should be running and the port should be bound. I believe this issue is specific to the CentOS 7 family, which uses firewalld instead of iptables.
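
For example (a generic sketch with a placeholder image and port, just to show where the flag goes):

docker run -d --restart=always -p 4444:4444 some-image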

@lxyuuer

lxyuuer commented Nov 20, 2017

On CentOS 7.1 with docker 1.10.3-46, restarting the docker service solved the problem.

@cristiroma

cristiroma commented Dec 11, 2017

I can consistently replicate the problem using the following steps:

On CentOS Linux release 7.3.1611 (Core):

  1. Add/Change iptables rule
  2. Restart iptables
  3. Restart container mapped to local ports

I get the following error:

ERROR: for webfront  Cannot restart container 4cf3aa80c0ca093f311b064c4318477e0d64654e0e3b2921f2e130b3004fe125: driver failed programming external connectivity on endpoint webfront (db42a8b5113b0ed0386a7232004144ba3ee0464eeeee205e04eeac9c19ddad04): iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 127.0.0.1 --dport 8093 -j DNAT --to-destination 172.21.0.7:80 ! -i br-1b5d4184a095: iptables: No chain/target/match by that name.
 (exit status 1)

One fix is to disable the firewall integration (?) described here: #1871 (comment)

@Necmttn

Necmttn commented Jan 22, 2018

Handy scripts to have around:

docker_rm_all () {
    for c in `docker ps -a | awk '{ print $1 }'`; do
        if [[ "$c" == "CONTAINER" ]];then
            echo "Removing all in 2 seconds. Last chance to cancel.";
            sleep 2;
        else
            docker rm -f $c;
        fi
    done
}

docker_kill_all () {
    for c in `docker ps | awk '{ print $1 }'`; do
        if [[ "$c" == "CONTAINER" ]];then
            echo "Removing all in 2 seconds. Last chance to cancel.";
            sleep 2;
        else
            docker kill $c;
        fi
    done
}

docker_bash () {
    docker exec -ti $1 bash;
}

docker_service_restart ()
{
    if [[ "$1" == "" ]]; then
        echo "please set HTTP_ENV before restart"
        exit 1
    fi

    sudo https_proxy="$1" \
         http_proxy="$1" \
         HTTP_PROXY="$1" \
         HTTPS_PROXY="$1" \
         service docker restart
}

set_proxy () {
    export HTTP_PROXY=http://$1
    export HTTPS_PROXY=https://$1
    export http_proxy=http://$1
    export https_proxy=https://$1
}


unset_proxy () {
    unset HTTP_PROXY
    unset HTTPS_PROXY
    unset http_proxy
    unset https_proxy
}

Just add them to your bashrc.

@gashev

gashev commented Jan 23, 2018

# cat /etc/redhat-release 
CentOS Linux release 7.4.1708 (Core)
# docker version
Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:25 2017
 OS/Arch:      linux/amd64

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:06:25 2017
 OS/Arch:      linux/amd64
 Experimental: false

journalctl:

Jan 23 16:27:34 localhost.localdomain kernel: br0: port 3(veth159) entered blocking state
Jan 23 16:27:34 localhost.localdomain kernel: br0: port 3(veth159) entered forwarding state
Jan 23 16:27:34 localhost.localdomain kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth50b629e: link becomes ready
Jan 23 16:27:34 localhost.localdomain kernel: docker_gwbridge: port 2(veth50b629e) entered blocking state
Jan 23 16:27:34 localhost.localdomain kernel: docker_gwbridge: port 2(veth50b629e) entered forwarding state
Jan 23 16:27:34 localhost.localdomain kernel: br0: port 3(veth159) entered disabled state
Jan 23 16:27:34 localhost.localdomain firewalld[638]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -m ipvs --ipvs -d 10.255.0.0/16 -j SNAT --to-source 10.255.0.2' failed: iptables: No chain/target/match by that name.
Jan 23 16:27:34 localhost.localdomain kernel: IPVS: __ip_vs_del_service: enter
Jan 23 16:27:34 localhost.localdomain kernel: docker_gwbridge: port 2(veth50b629e) entered disabled state
Jan 23 16:27:34 localhost.localdomain kernel: docker_gwbridge: port 2(veth50b629e) entered disabled state

@vagnerfonseca

hi guys,

I'm having an error with iptables.

Error response from daemon: Cannot start container 5f358335562f6e0234ec7fea50f9c5cb6a0b44ec16a6c2f09825fe8ce560a135: iptables failed: iptables -t nat -A DOCKER -p tcp -d 0/0 --dport 80 -j DNAT --to-destination 172.17.0.7:80 ! -i docker0: iptables: No chain/target/match by that name.
 (exit status 1)

cat /etc/centos-release

CentOS release 6.9 (Final)

iptables --version

iptables v1.4.7

docker info

Containers: 3
Images: 37
Storage Driver: devicemapper
 Pool Name: docker-253:0-400615-pool
 Pool Blocksize: 65.54 kB
 Backing Filesystem: extfs
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.886 GB
 Data Space Total: 107.4 GB
 Data Space Available: 41.53 GB
 Metadata Space Used: 2.626 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.145 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.117-RHEL6 (2016-12-13)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 2.6.32-431.29.2.el6.x86_64
Operating System: <unknown>
CPUs: 4
Total Memory: 7.684 GiB
Name: acd-web01
ID: VN4G:PLDV:YQ34:B22N:MRET:AUNA:5IGA:DZ66:R6TW:T24B:XWNI:RB7K

@thaJeztah
Member

@vagnerfonseca CentOS 6 and kernel 2.6.x haven't been supported for a long time (the last version of Docker supporting them was Docker 1.7, which was released three years ago and reached end of life long ago).

If you want to run Docker, make sure to update to a currently supported release of CentOS 7.

@marvec

marvec commented Feb 27, 2018

In my case (Manjaro Linux) this was caused by iptables simply not running at all. I had to add the docker daemon option --iptables=false to disable any interaction with it.
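
For anyone wondering where that option goes, one way to set it (a sketch; merge the key into any existing /etc/docker/daemon.json rather than overwriting it) is:

cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "iptables": false
}
EOF
sudo systemctl restart docker

This is equivalent to passing --iptables=false to dockerd directly. Note that with this setting Docker no longer manages NAT for published ports, so you have to provide those rules yourself.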

@Badoot

Badoot commented Jun 15, 2018

I ran into this when my default firewalld zone was somehow changed from 'home' to 'public'. I resolved it by changing the default back to home, restarting firewalld, then flushing iptables:

firewall-cmd --set-default-zone=home
firewall-cmd --reload
systemctl restart firewalld
iptables -F
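
Note that iptables -F also flushes the rules Docker itself installed, so (as in several comments above) restarting the daemon afterwards lets it recreate the DOCKER chains and per-container rules:

systemctl restart docker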

@jonathansd1

jonathansd1 commented Jul 31, 2018

Adding my +1.

Running Arch Linux.

Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 3
Server Version: 18.05.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.14.52-1-lts
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 6
Total Memory: 11.72GiB
Name: mephisto
ID: BRRC:XMKV:WWAM:77LE:35HV:JGCX:P3MS:QZQX:3GOC:REIC:53Y4:ZEHL
Docker Root Dir: /home/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

I have iptables installed, but not firewalld.

Jul 31 13:24:56 mephisto docker[18190]: /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint proxy.service (c78e90b3b41c831de60a048d0dcfd73de325e91b2f3c048b27c848ced4972b43):  (iptables failed: iptables --wait -t nat -A DOCKER -p tcp -d 0/0 --dport 8080 -j DNAT --to-destination 172.17.0.2:8080 ! -i docker0: iptables: No chain/target/match by that name.

The only workaround so far is to use --net=host, which is not necessarily desirable.

@plutext

plutext commented Nov 12, 2018

In my case (Manjaro Linux) this was caused by iptables simply not running at all. I had to add the docker daemon option --iptables=false to disable any interaction with it.

iptables was causing me grief (on Manjaro), so ultimately I stopped it and, following your example, set iptables: false. This worked for me. (Had it failed, I would next have tried net=host, or resorted to putting Docker into a virtual machine.)

@ccccccmd

I'm getting these warnings:

Nov 18 18:42:43 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -o docker0 -j DOCKER' failed: iptables: No chain/target/match by that name.
Nov 18 18:42:43 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Nov 18 18:42:43 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2' failed: iptables: No chain/target/match by that name.
Nov 18 18:42:43 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Nov 18 18:42:52 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -p tcp -d 0/0 --dport 6379 -j DNAT --to-destination 172.17.0.2:6379 ! -i docker0' failed: iptables: No chain/target/match by that name.
Nov 18 18:42:52 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.2 --dport 6379 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Nov 18 18:42:52 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -p tcp -s 172.17.0.2 -d 172.17.0.2 --dport 6379 -j MASQUERADE' failed: iptables: No chain/target/match by that name.
Nov 18 18:43:31 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -p tcp -d 0/0 --dport 27017 -j DNAT --to-destination 172.17.0.3:27017 ! -i docker0' failed: iptables: No chain/target/match by that name.
Nov 18 18:43:31 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER ! -i docker0 -o docker0 -p tcp -d 172.17.0.3 --dport 27017 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
Nov 18 18:43:31 localhost.localdomain firewalld[20080]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -p tcp -s 172.17.0.3 -d 172.17.0.3 --dport 27017 -j MASQUERADE' failed: iptables: No chain/target/match by that name.
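
These COMMAND_FAILED warnings are what firewalld reports when Docker's chains are no longer there, typically after firewalld has been reloaded or restarted while dockerd kept running. As in the earlier comments, a sequence along these lines (a sketch, assuming systemd and firewalld) usually restores things, because dockerd recreates its chains and per-container rules on restart:

firewall-cmd --reload
systemctl restart docker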

@sjcomeau43543

Try creating the chain in iptables by running
iptables -N DOCKER

and if that doesn't work, try upgrading docker and iptables

@nijatmursali

I solved the issue by running service iptables restart and then service docker restart. Hope it helps.

@Freebase394

Freebase394 commented Sep 15, 2020

Hi There.
I'm running a VM.
INFO:

   Static hostname: n/a
Transient hostname: aIP-OF-MY-MACHINE
         Icon name: computer-vm
           Chassis: vm
        Machine ID: d4047bd0916d41d38b6b97ff7b5f2b3d
           Boot ID: 61456d6912e24569985f0e9343bd8179
    Virtualization: qemu
  Operating System: openSUSE Tumbleweed
       CPE OS Name: cpe:/o:opensuse:tumbleweed:20200817
            Kernel: Linux 5.8.0-1-default
      Architecture: x86-64

Docker Version:

Client:
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        48a66213fe17
 Built:             Mon Aug  3 00:00:00 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       48a66213fe17
  Built:            Mon Aug  3 00:00:00 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.1.5_catatonit
  GitCommit:

So, I have been working for almost a week to solve this issue!
My MAIN issue is that I have detected some random disconnects from my VPS; the disconnects affect all ports, losing all access!
I did some research and found in the /var/log/firewalld logs the issues I will mention below.
OUTPUT:

...
2020-09-15 01:21:23 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -t filter -X DOCKER-ISOLATION' failed: iptables: No chain/target/match by that name.
2020-09-15 01:21:23 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2020-09-15 01:21:26 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w10 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad rule (does a matching rule exist in that chain?).
...

I have already executed these commands:

iptables -t filter -F
iptables -t filter -X

Then I restarted the Docker service using the commands below:

ip link delete docker0
systemctl restart docker

I have tried several of these commands, and uninstalled docker to remove docker's configs...
without much success... 👎

It is sad that this is happening! I have work to do in a production environment.

@far0ouk

far0ouk commented Nov 24, 2021

sudo systemctl restart docker.socket

@Pictor13

It might help others:
if you are connected to the internet via a VPN, try disabling it.
Some providers/apps modify iptables so that no traffic passes except through the created tunnel.

@sam-thibault
Contributor

I don't see any recent activity on this issue. I will close it as stale.

@sam-thibault closed this as not planned on Apr 20, 2023