
Docker-Gen not working with docker-ce 17.12 #270

Closed
raphirm opened this issue Jan 8, 2018 · 12 comments

@raphirm

raphirm commented Jan 8, 2018

Hi Guys,

This December, Docker had a major update to 17.12. Since then docker-gen no longer works and just hangs. Unfortunately, because there is no output, I was not able to do any debugging. After a downgrade to 17.09, docker-gen worked fine again.

Regards
Raphael

@buchdag
Member

buchdag commented Jan 9, 2018

Hi

I'm not having any issues with docker-gen on the hosts that have docker 17.12 installed (Ubuntu 17.04 and 17.10, Debian 8 and Manjaro stable).

@TheLux83

TheLux83 commented Jan 10, 2018

Hey @raphirm, do you use Docker for Windows?
I have the same problem: my docker-gen container won't come up, while the other two (nginx-proxy and the Let's Encrypt companion) start without problems.
I've had this issue for as long as I can remember, but restarting all three containers after the first start always worked around it.
Unfortunately, with the new Docker 17.12 that is no longer the case for me.
I think it has something to do with the way Windows shares are used.

My Error Log looks like this:

2018/01/08 19:32:15 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2018/01/08 19:32:15 Watching docker events
2018/01/08 19:32:16 Error inspecting container: af0fc01706c5aea607fa48ed76cad990ae400946038be54f6e872be187a246e2: No such container: af0fc01706c5aea607fa48ed76cad990ae400946038be54f6e872be187a246e2
2018/01/08 19:32:16 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2018/01/08 19:32:16 Received event start for container af0fc01706c5
2018/01/08 19:32:16 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2018/01/08 19:32:16 Received signal: hangup
2018/01/08 19:32:16 Received signal: hangup
2018/01/08 19:32:16 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2018/01/08 19:36:32 Received signal: terminated
2018/01/08 19:36:32 Received signal: terminated
2018/01/08 19:43:46 Generated '/etc/nginx/conf.d/default.conf' from 14 containers
2018/01/08 19:43:46 Sending container 'nginx-proxy' signal '1'
2018/01/08 19:43:46 Watching docker events
2018/01/08 19:43:46 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2018/01/08 20:12:18 Received event die for container 9cc42d9e4ff3
2018/01/08 20:12:18 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2018/01/08 20:12:20 Received event start for container 510c7954c163
2018/01/08 20:12:20 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2018/01/09 00:12:10 Received event die for container ed7389bc5bf2
2018/01/09 00:12:11 Generated '/etc/nginx/conf.d/default.conf' from 13 containers
2018/01/09 00:12:11 Sending container 'nginx-proxy' signal '1'
2018/01/09 00:12:12 Received event start for container 84cbca2c8062
2018/01/09 00:12:12 Generated '/etc/nginx/conf.d/default.conf' from 14 containers
2018/01/09 00:12:12 Sending container 'nginx-proxy' signal '1'
2018/01/09 06:12:15 Received event die for container 510c7954c163
2018/01/09 06:12:15 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2018/01/09 06:12:16 Received event start for container 8dcc566591f3
2018/01/09 06:12:16 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
2018/01/09 10:12:14 Received event die for container 84cbca2c8062
2018/01/09 10:12:15 Generated '/etc/nginx/conf.d/default.conf' from 13 containers
2018/01/09 10:12:15 Sending container 'nginx-proxy' signal '1'
2018/01/09 10:12:16 Received event start for container f6375936ca01
2018/01/09 10:12:16 Generated '/etc/nginx/conf.d/default.conf' from 14 containers
2018/01/09 10:12:16 Sending container 'nginx-proxy' signal '1'
2018/01/10 06:22:33 Unable to parse template: open /etc/docker-gen/templates/nginx.tmpl: no such file or directory
2018/01/10 06:25:43 Unable to parse template: open /etc/docker-gen/templates/nginx.tmpl: no such file or directory
2018/01/10 06:37:10 Unable to parse template: open /etc/docker-gen/templates/nginx.tmpl: no such file or directory
2018/01/10 06:37:44 Unable to parse template: open /etc/docker-gen/templates/nginx.tmpl: no such file or directory
2018/01/10 06:37:54 Unable to parse template: open /etc/docker-gen/templates/nginx.tmpl: no such file or directory

As you can see, it cannot find nginx.tmpl, even though nothing has changed since yesterday except the Docker version.
I've already remapped the share, but that hasn't fixed the problem.

TL;DR: I think it has something to do with how Docker for Windows shares your Windows drives, but I don't know how to fix it.

EDIT: To confirm my theory, I checked my other containers that use Windows shares. None of them were able to mount their share. So I think this is not a docker-gen issue, but a Docker for Windows issue.

@raphirm
Author

raphirm commented Jan 11, 2018

Interesting. I use Ubuntu as the host system together with nginx-proxy. docker-gen just hangs and does not output anything when started. I even tried to get into the container and start the process manually to generate another config file, but got nothing.

When I have time I'll try to debug it a bit more; maybe I'll find something.

@jwilder
Collaborator

jwilder commented Jan 11, 2018

If you send a SIGQUIT signal to docker-gen, it will crash with a stack trace that should help figure out why it is hanging. I have not seen the hang myself.
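For reference, a minimal sketch of how to do that against a containerized docker-gen, assuming the container is named docker-gen (adjust the name to your setup):

```shell
# SIGQUIT makes the Go runtime dump the stacks of all goroutines
# to stderr before the process exits:
docker kill --signal=SIGQUIT docker-gen

# The dump goes to the process's stderr, so read it back from the logs:
docker logs --tail 200 docker-gen
```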

@valdemarrolfsen

I also have this problem with docker 17.12. Is there a fix, or is the best solution to downgrade the docker version?

@ian-axelrod

I also see this with Docker for Mac.

@trennepohl

I'm facing the same issue with docker-ce 17.12.

Same scenario as @raphirm

> Interesting. I use ubuntu as hostsystem and the nginx-proxy. docker-gen just hangs and does not output anything if started. i even tried to get into the container and start the process manually to generate another config file - nothing...
> when i have time i try to debug it a bit more, maybe i find out something.

@jwilder, as suggested I tried to send a SIGQUIT, but nothing happened.

I've downgraded one of our servers to docker-ce 17.09 and I will keep watching it.

I really don't think it is a docker-gen regression, because I tried running nginx-proxy with different versions of docker-gen and none of them worked.

@relvacode

relvacode commented Mar 8, 2018

Having the same issue. default.conf is never generated anymore. This started happening without any change to the container version.

I ran a few SIGQUITs on the docker-gen process and found that even when the call stack differs, the program is always blocked in net/http.(*persistConn).roundTrip.

docker run -ti --rm -v /var/run/docker.sock:/tmp/docker.sock --entrypoint=docker-gen jwilder/nginx-proxy -watch -notify /app/updatessl.sh updatessl /app/nginx.tmpl /etc/nginx/conf.d/default.conf

SIGQUIT: quit
PC=0x45efd1 m=0

goroutine 0 [idle]:
runtime.futex(0xb64f28, 0x0, 0x0, 0x0, 0x0, 0xb642b0, 0x0, 0x0, 0x40fbb4, 0xb64f28, ...)
	/usr/local/go/src/runtime/sys_linux_amd64.s:306 +0x21
runtime.futexsleep(0xb64f28, 0x100000000, 0xffffffffffffffff)
	/usr/local/go/src/runtime/os1_linux.go:40 +0x53
runtime.notesleep(0xb64f28)
	/usr/local/go/src/runtime/lock_futex.go:145 +0xa4
runtime.stopm()
	/usr/local/go/src/runtime/proc.go:1538 +0x10b
runtime.findrunnable(0xc820017500, 0x0)
	/usr/local/go/src/runtime/proc.go:1976 +0x739
runtime.schedule()
	/usr/local/go/src/runtime/proc.go:2075 +0x24f
runtime.park_m(0xc820000180)
	/usr/local/go/src/runtime/proc.go:2140 +0x18b
runtime.mcall(0x7ffe16819b70)
	/usr/local/go/src/runtime/asm_amd64.s:233 +0x5b

goroutine 1 [select]:
net/http.(*persistConn).roundTrip(0xc8200fcd00, 0xc8201df7f0, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/http/transport.go:1473 +0xf1f
net/http.(*Transport).RoundTrip(0xc8200e4480, 0xc8200fb180, 0xc8200e4480, 0x0, 0x0)
	/usr/local/go/src/net/http/transport.go:324 +0x9bb
net/http.send(0xc8200fb180, 0x7ffaadfe3528, 0xc8200e4480, 0x0, 0x0, 0x0, 0xc8201d5420, 0x0, 0x0)
	/usr/local/go/src/net/http/client.go:260 +0x6b7
net/http.(*Client).send(0xc820079440, 0xc8200fb180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/http/client.go:155 +0x185
net/http.(*Client).doFollowingRedirects(0xc820079440, 0xc8200fb180, 0x9e0930, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/http/client.go:475 +0x8a4
net/http.(*Client).Do(0xc820079440, 0xc8200fb180, 0xc8200f2f58, 0x0, 0x0)
	/usr/local/go/src/net/http/client.go:188 +0xff
github.com/fsouza/go-dockerclient.(*Client).do(0xc8200a8090, 0x904998, 0x3, 0xc8201e6660, 0x51, 0x0, 0x0, 0xc8201e6600, 0x0, 0x2, ...)
	/Users/jason/go/src/github.com/fsouza/go-dockerclient/client.go:417 +0x3ce
github.com/fsouza/go-dockerclient.(*Client).InspectContainer(0xc8200a8090, 0xc82010ed00, 0x40, 0x0, 0x0, 0x0)
	/Users/jason/go/src/github.com/fsouza/go-dockerclient/container.go:339 +0x111
github.com/jwilder/docker-gen.(*generator).getContainers(0xc82012a080, 0x0, 0x0, 0x0, 0x0, 0x0)
	/Users/jason/go/src/github.com/jwilder/docker-gen/generator.go:362 +0x36f
github.com/jwilder/docker-gen.(*generator).generateFromContainers(0xc82012a080)
	/Users/jason/go/src/github.com/jwilder/docker-gen/generator.go:117 +0x46
github.com/jwilder/docker-gen.(*generator).Generate(0xc82012a080, 0x0, 0x0)
	/Users/jason/go/src/github.com/jwilder/docker-gen/generator.go:74 +0x2d
main.main()
	/Users/jason/go/src/github.com/jwilder/docker-gen/cmd/docker-gen/main.go:180 +0x631

goroutine 19 [syscall]:
os/signal.signal_recv(0x0)
	/usr/local/go/src/runtime/sigqueue.go:116 +0x132
os/signal.loop()
	/usr/local/go/src/os/signal/signal_unix.go:22 +0x18
created by os/signal.init.1
	/usr/local/go/src/os/signal/signal_unix.go:28 +0x37

goroutine 21 [IO wait]:
net.runtime_pollWait(0x7ffaadfe49d0, 0x72, 0xc820100000)
	/usr/local/go/src/runtime/netpoll.go:160 +0x60
net.(*pollDesc).Wait(0xc82006dcd0, 0x72, 0x0, 0x0)
	/usr/local/go/src/net/fd_poll_runtime.go:73 +0x3a
net.(*pollDesc).WaitRead(0xc82006dcd0, 0x0, 0x0)
	/usr/local/go/src/net/fd_poll_runtime.go:78 +0x36
net.(*netFD).Read(0xc82006dc70, 0xc820100000, 0x1000, 0x1000, 0x0, 0x7ffaadfdf028, 0xc8200740a0)
	/usr/local/go/src/net/fd_unix.go:250 +0x23a
net.(*conn).Read(0xc82007c050, 0xc820100000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
	/usr/local/go/src/net/net.go:172 +0xe4
net/http.noteEOFReader.Read(0x7ffaadfe4a90, 0xc82007c050, 0xc8200fcd68, 0xc820100000, 0x1000, 0x1000, 0x405cb3, 0x0, 0x0)
	/usr/local/go/src/net/http/transport.go:1687 +0x67
net/http.(*noteEOFReader).Read(0xc8200dd2a0, 0xc820100000, 0x1000, 0x1000, 0xc82003ed1d, 0x0, 0x0)
	<autogenerated>:284 +0xd0
bufio.(*Reader).fill(0xc8200727e0)
	/usr/local/go/src/bufio/bufio.go:97 +0x1e9
bufio.(*Reader).Peek(0xc8200727e0, 0x1, 0x0, 0x0, 0x0, 0x0, 0x0)
	/usr/local/go/src/bufio/bufio.go:132 +0xcc
net/http.(*persistConn).readLoop(0xc8200fcd00)
	/usr/local/go/src/net/http/transport.go:1073 +0x177
created by net/http.(*Transport).dialConn
	/usr/local/go/src/net/http/transport.go:857 +0x10a6

goroutine 22 [select]:
net/http.(*persistConn).writeLoop(0xc8200fcd00)
	/usr/local/go/src/net/http/transport.go:1277 +0x472
created by net/http.(*Transport).dialConn
	/usr/local/go/src/net/http/transport.go:858 +0x10cb

rax    0xca
rbx    0x0
rcx    0xffffffffffffffff
rdx    0x0
rdi    0xb64f28
rsi    0x0
rbp    0x1
rsp    0x7ffe168199d0
r8     0x0
r9     0x0
r10    0x0
r11    0x286
r12    0xd
r13    0x9deba8
r14    0x4
r15    0x8
rip    0x45efd1
rflags 0x286
cs     0x33
fs     0x0
gs     0x0

I can access the socket manually without any issues:

docker run -ti --rm -v /var/run/docker.sock:/tmp/docker.sock:ro centos:7 curl -v --unix-socket /tmp/docker.sock http:/containers/json -q -s -o /dev/null -D -
* About to connect() to http port 80 (#0)
*   Trying /tmp/docker.sock...
* Failed to set TCP_KEEPIDLE on fd 3
* Failed to set TCP_KEEPINTVL on fd 3
* Connected to http (/tmp/docker.sock) port 80 (#0)
> GET /containers/json HTTP/1.1
> User-Agent: curl/7.29.0
> Host: http
> Accept: */*
>
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Api-Version: 1.35
Api-Version: 1.35
< Content-Type: application/json
Content-Type: application/json
< Docker-Experimental: false
Docker-Experimental: false
< Ostype: linux
Ostype: linux
< Server: Docker/17.12.0-ce (linux)
Server: Docker/17.12.0-ce (linux)
< Date: Thu, 08 Mar 2018 13:36:17 GMT
Date: Thu, 08 Mar 2018 13:36:17 GMT
< Transfer-Encoding: chunked
Transfer-Encoding: chunked

<
{ [data not shown]
* Connection #0 to host http left intact

So it's almost as if Go has trouble reading from the socket and becomes blocked indefinitely.
I wonder whether a client-side timeout would at least prevent docker-gen from blocking forever and raise an error instead?
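As a rough illustration of what a client-side timeout buys you, curl's --max-time flag bounds the whole request, so a daemon that accepts the connection but never answers produces an error (exit code 28) instead of an indefinite hang (socket path as in the examples above):

```shell
# Bound the whole request to 5 seconds; if the daemon accepts the
# connection but never responds, curl exits with code 28 (operation
# timed out) rather than blocking forever:
curl --unix-socket /tmp/docker.sock --max-time 5 \
     http://localhost/containers/json
```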

EDIT:
A restart of the system has fixed the issue

@vrowley

vrowley commented Nov 1, 2018

@relvacode

Can you be more specific about "A restart of the system"? For the TL;DR crowd: reboot the host, restart the container, or restart dockerd?

And I'm seeing this on:
Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:23:03 2018
OS/Arch: linux/amd64
Experimental: false

Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:25:29 2018
OS/Arch: linux/amd64
Experimental: false

@mickaelperrin

I can also confirm that the error went away after a restart of the Docker service (Docker for Mac in my case).

@ArtemBosenko

ArtemBosenko commented Aug 7, 2020

sudo service docker restart
on Ubuntu 18.04 LTS also worked for me.

@buchdag
Member

buchdag commented May 16, 2024

Since Docker 17.12 is more than six years old and this issue was never reliably reproduced, I guess we can safely close this.

@buchdag buchdag closed this as completed May 16, 2024