
Major disconnection issue with docker overlay network #27268

Closed
groyee opened this issue Oct 10, 2016 · 59 comments
groyee commented Oct 10, 2016

We are using Docker 1.12.2-rc3 in our production environment (this issue also occurred with previous versions). We have about 100 VMs running 200 containers. Everything is managed by standalone Docker Swarm (not swarm mode).

All containers communicate through the same overlay network. Every few hours, some random container loses the ability to communicate with some other random container running on a different host. When I ping, I get:

[screenshot: ping output, 2016-10-11]

Obviously all containers are up, and I can successfully ping the same container from any other container; I couldn't find any pattern. The only thing I can say is that once it happens, I have no workaround: deleting the container or restarting it doesn't help. I did notice that when it happens, it happens to all containers running on the same host. So if I have 5 different containers on host A, suddenly none of them can ping some container running on a different host. At first I thought that maybe this container had disconnected from the overlay network, but it hasn't, and it can communicate with all other containers except this one. Removing the container from the overlay network and reattaching it doesn't help either.

This is a major problem in our production environment and we have no solution. We have containers running Kafka, Elasticsearch, Redis, MySQL, Couchbase, and more. Every few hours (sometimes days), some container simply stops communicating with another, and once that happens it will never reach it again, no matter how many times I restart either container.
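For what it's worth, the ping test described above can be scripted so it is easy to rerun after each restart. This is only a sketch: the container name and target IP are the ones from this report, and it assumes `docker exec` access to the source container.

```shell
# Returns 0 (success) when captured ping output reports zero packet loss.
check_reachable() {
  # $1 = captured ping output
  echo "$1" | grep -q ' 0% packet loss'
}

# Example (commented out; requires a live swarm endpoint):
# out=$(docker exec dockeruser_tasksmanager_1 ping -c 3 -W 2 10.0.7.7 2>&1)
# if check_reachable "$out"; then echo reachable; else echo UNREACHABLE; fi
```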

thaJeztah (Member) commented Oct 11, 2016

Could you provide some more information:

  • the output of docker version
  • the output of docker info
  • what platform you are running on: bare metal or cloud (if so, which?)
  • how you set up overlay networking (external k/v store, etc.)

Is there anything useful in the daemon logs of that node?
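To make gathering those diagnostics repeatable, a small helper along these lines could be used. This is a sketch only: it assumes the docker CLI is on PATH and that DOCKER_HOST points at the affected node, and the network name is the one from this thread.

```shell
# Prints each command's output under a === header; a failing command is
# captured rather than aborting the collection.
collect_diag() {
  for cmd in "$@"; do
    echo "=== $cmd ==="
    $cmd 2>&1 || true
  done
}

# Example (commented out; run against the affected node):
# collect_diag "docker version" "docker info" \
#   "docker network inspect dockeruser_my-net"
# Daemon logs on a systemd host (unit name may differ per distro):
# journalctl -u docker.service --since "1 hour ago"
```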

groyee (Author) commented Oct 11, 2016

docker version:

Client:
Version: 1.12.2-rc3
API version: 1.24
Go version: go1.6.3
Git commit: cb0ca64
Built: Thu Oct 6 22:51:38 2016
OS/Arch: linux/amd64

Server:
Version: swarm/1.2.5
API version: 1.22
Go version: go1.5.4
Git commit: 27968ed
Built: Thu Aug 18 23:10:29 UTC 2016
OS/Arch: linux/amd64

(docker info and docker network inspect output are below)

So, for example, from the docker network inspect output you can see I have a container called "dockeruser_tasksmanager_1". This container cannot ping 10.0.7.7, which is the container named "dockeruser_kafka_1".

Any other container in the system can ping dockeruser_kafka_1, and the dockeruser_tasksmanager_1 container can successfully ping any container except kafka.

Sorry for the long list... Let me know if there is a better way to do it.
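Since the failures are between specific pairs, it may help to sweep every ordered pair from one place. A purely illustrative sketch (the "name ip" inventory file format and the commented-out `docker exec` line are assumptions, not anything this setup already has):

```shell
# Reads "name ip" lines from an inventory file and announces every
# ordered pair; the actual ping is left commented out.
pairs_check() {
  inv="$1"
  while read -r src _; do
    while read -r dst ip; do
      [ "$src" = "$dst" ] && continue
      echo "checking $src -> $dst ($ip)"
      # docker exec "$src" ping -c 1 -W 2 "$ip" >/dev/null 2>&1 \
      #   || echo "FAIL: $src -> $dst"
    done < "$inv"
  done < "$inv"
}
```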

docker info:

Containers: 290
Running: 218
Paused: 0
Stopped: 72
Images: 2199
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 67
anomaly1-prod: 192.168.0.20:2376
└ ID: LBI6:XWGX:PVAE:4AQH:MMCW:TT2T:YTV2:H6GT:UY3T:3YBB:36GJ:6JOR
└ Status: Healthy
└ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=anomaly, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:44Z
└ ServerVersion: 1.12.1
batchprocessing1-prod: 192.168.0.18:2376
└ ID: ATBA:5XCX:FXJT:GD3C:GRWO:VTY3:NWLS:JKF6:SUIO:JTUH:PRNE:YZSU
└ Status: Healthy
└ Containers: 4 (1 Running, 0 Paused, 3 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 14.38 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=batchprocessing, machinetype=C, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:30Z
└ ServerVersion: 1.12.2-rc3
batchprocessing2-prod: 192.168.0.19:2376
└ ID: JWPK:PLKE:GYDX:SFQR:42W3:KNUM:P3ZB:U7IK:SMG3:6U5F:EPPF:IFRF
└ Status: Healthy
└ Containers: 5 (3 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 14.38 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=batchprocessing, machinetype=C, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:53Z
└ ServerVersion: 1.12.2-rc3
categorization1-prod: 192.168.0.15:2376
└ ID: XISZ:MXAF:VE4T:WA6S:DXGA:NPJK:SJOY:UWB6:QFGE:JRWV:VVW2:RSZE
└ Status: Healthy
└ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:37Z
└ ServerVersion: 1.12.1
categorization2-prod: 192.168.0.16:2376
└ ID: A3IH:KCOY:XTD7:LFCB:4O66:CKHU:VFOS:6XCT:QZSM:LJF6:WMUF:PII7
└ Status: Healthy
└ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:06Z
└ ServerVersion: 1.12.1
categorization3-prod: 192.168.0.17:2376
└ ID: FZVJ:7A4C:2OS2:4TGX:ZF7M:WGD3:DCO5:V6N3:HPWK:LQ32:CWOG:2YNC
└ Status: Healthy
└ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:42Z
└ ServerVersion: 1.12.1
categorization4-prod: 192.168.0.55:2376
└ ID: IS4Z:NIT7:FAYR:44UH:YH2H:ZBL7:JSJP:IGRN:OYJX:LNI6:I4ZV:HK4R
└ Status: Healthy
└ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:10Z
└ ServerVersion: 1.12.1
categorization5-prod: 192.168.0.56:2376
└ ID: SHMR:PZOU:6ZI5:677G:4UUY:7DGI:SSD2:GX6B:Z37M:6FN6:HBQR:GLDK
└ Status: Healthy
└ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:30Z
└ ServerVersion: 1.12.1
categorization6-prod: 192.168.0.60:2376
└ ID: U74R:MQ5S:3VOK:SSVN:TR4Z:MUAL:I7XF:UOIR:35W2:AURS:AUWZ:FEUS
└ Status: Healthy
└ Containers: 4 (2 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:45Z
└ ServerVersion: 1.12.1
categorization7-prod: 192.168.0.61:2376
└ ID: ONKL:CG5H:S5SP:YYSM:UV2H:EZ2Z:CU2P:QVBD:QM52:UFI4:7LWS:QOTT
└ Status: Healthy
└ Containers: 4 (2 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:07Z
└ ServerVersion: 1.12.1
categorization8-prod: 192.168.0.62:2376
└ ID: 273S:GXFJ:Q7CL:WKZ2:OII5:3I4M:VLSN:TTD6:UQQQ:GZT4:AFRX:YOPT
└ Status: Healthy
└ Containers: 4 (2 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:06Z
└ ServerVersion: 1.12.1
categorization9-prod: 192.168.0.63:2376
└ ID: DQA2:FHYP:HHEA:PYNM:75YX:KZCX:TUMW:JFUI:6GXB:O4YA:3GM7:M73K
└ Status: Healthy
└ Containers: 4 (2 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:41Z
└ ServerVersion: 1.12.1
categorization10-prod: 192.168.0.64:2376
└ ID: VZBZ:PDXJ:MW5V:3YB2:LSUD:UGCC:XYAQ:P7AG:OS3O:7CWY:UR6J:WXHX
└ Status: Healthy
└ Containers: 4 (2 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=categorization, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:25Z
└ ServerVersion: 1.12.1
couchbase1-prod: 192.168.0.21:2376
└ ID: EL2P:HZLK:EMU3:OFAP:YKG5:KW4Z:7MLX:TU6D:3RVD:F27X:GSD2:4RYT
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=couchbase, machinetype=E, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:04Z
└ ServerVersion: 1.12.2-rc2
couchbase2-prod: 192.168.0.52:2376
└ ID: 4DPM:D4TS:VSUX:AZE2:5J4Z:V3CV:3GXO:GIKD:CREF:XXLQ:TQLO:BPFU
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=couchbase, machinetype=E, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:06Z
└ ServerVersion: 1.12.2-rc3
couchbase3-prod: 192.168.0.53:2376
└ ID: PBEN:DBWR:M5TV:VJW5:AHIE:RI2G:YNVD:XMXR:D3QY:5KA4:UFXZ:WB25
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=couchbase, machinetype=E, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:00Z
└ ServerVersion: 1.12.2-rc3
counters1-prod: 192.168.0.33:2376
└ ID: EWCB:7CQD:ML2S:A26M:3GIS:ZFHH:VTXV:BEIB:4B2R:PGYF:UDCE:ZUB5
└ Status: Healthy
└ Containers: 6 (6 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=counters, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:07Z
└ ServerVersion: 1.12.1
elasticclient1-prod: 192.168.0.26:2376
└ ID: E7MT:GEQR:ZWXT:ZS4S:AF6T:UHEM:TSE5:ILIR:4UYA:HFHG:WJ5D:PY3A
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticclient, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:45Z
└ ServerVersion: 1.12.2-rc3
elasticclient2-prod: 192.168.0.27:2376
└ ID: 6SBS:KNAX:SPD6:XXRR:OMPQ:ZEX7:TMFA:DFTV:CMEG:ECY3:HPEB:6GYZ
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticclient, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:39Z
└ ServerVersion: 1.12.2-rc3
elasticclient3-prod: 192.168.0.28:2376
└ ID: ECWV:B3TU:PBVI:GVST:X62O:3D4G:OAGG:XKBV:4JP2:AKQX:AA3D:2GFH
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticclient, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:56Z
└ ServerVersion: 1.12.2-rc3
elasticdata1-prod: 192.168.0.11:2376
└ ID: OTPK:GZNT:XSOB:6K3V:HRWA:HERR:M67N:FOJ7:YIUU:MFUW:B75R:IEZS
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:39Z
└ ServerVersion: 1.12.2-rc3
elasticdata2-prod: 192.168.0.45:2376
└ ID: KKOS:LVLF:3M7D:ZALC:BRMC:GSMC:MWD3:HCRW:5NCL:DMVY:OAST:2YNN
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:23Z
└ ServerVersion: 1.12.2-rc3
elasticdata3-prod: 192.168.0.46:2376
└ ID: 2COC:EPPZ:UPPA:RQCZ:6764:ZT4S:S3AH:KSWL:W6G5:INVF:GOTK:T7VU
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:25Z
└ ServerVersion: 1.12.2-rc3
elasticdata4-prod: 192.168.0.47:2376
└ ID: 3E7L:DRBQ:SWVF:KXPL:MOCF:F6YI:NJFK:XXO3:QU6K:HCHM:XRH5:JOU3
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:05Z
└ ServerVersion: 1.12.2-rc3
elasticdata5-prod: 192.168.0.48:2376
└ ID: Y76B:JSGN:SDDR:E2JG:G7KF:SICE:WMAO:FN65:BVKV:7N4T:UE23:G2SR
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:39Z
└ ServerVersion: 1.12.2-rc3
elasticdata6-prod: 192.168.0.49:2376
└ ID: AN6L:H7VP:OJPW:YNGZ:2Q7U:BBVR:3UUF:PQON:II3F:FVFN:WMIS:2ID6
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:55Z
└ ServerVersion: 1.12.2-rc3
elasticdata7-prod: 192.168.0.50:2376
└ ID: H7S4:OM7T:7D2H:OF7A:FCJT:IGET:IKVQ:FH3R:CYEG:PMBE:TAUI:F2XH
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:56Z
└ ServerVersion: 1.12.2-rc3
elasticdata8-prod: 192.168.0.72:2376
└ ID: AQHR:QN2R:D6BZ:BTB7:4RYL:45CC:L2TB:DSXA:7UD7:7SHV:62SH:W5LM
└ Status: Healthy
└ Containers: 3 (0 Running, 0 Paused, 3 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:00Z
└ ServerVersion: 1.12.2-rc3
elasticdata9-prod: 192.168.0.73:2376
└ ID: GLWQ:AUQ2:DOAH:N4VI:RNVO:PBI2:RZET:J2UB:CZPN:AAYM:RSYA:ZBRD
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:32Z
└ ServerVersion: 1.12.2-rc3
elasticdata10-prod: 192.168.0.74:2376
└ ID: SA7B:ARA6:PQU7:AHP7:Y7RH:2YK3:6SFL:3UI6:GAQD:ACH2:GXYU:LBDA
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 8
└ Reserved Memory: 0 B / 28.85 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=elasticdata, machinetype=H, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:20Z
└ ServerVersion: 1.12.2-rc3
elasticmaster1-prod: 192.168.0.42:2376
└ ID: O3ZQ:2HEB:HOSZ:6PGI:SFUS:B7SW:74BE:NZ7P:COR4:LC42:ZFSW:MZJF
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticmaster, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:53Z
└ ServerVersion: 1.12.2-rc3
elasticmaster2-prod: 192.168.0.43:2376
└ ID: YTEK:WHBU:2SE7:MBQU:6CSB:4OKZ:EYFG:OS6U:BBUO:P53W:LMJE:HSMJ
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticmaster, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:17Z
└ ServerVersion: 1.12.2-rc3
elasticmaster3-prod: 192.168.0.44:2376
└ ID: 5XBZ:NOJE:HBLC:DSFD:7FZC:6AG2:XRRF:4BKY:VVTF:JGDJ:XKYD:PCWH
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=elasticmaster, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:27Z
└ ServerVersion: 1.12.2-rc3
kafka1-prod: 192.168.0.22:2376
└ ID: 5NMF:TC42:LPWQ:TUHA:DESD:DSOZ:PJN7:FHXZ:J4AO:WCCK:VZJU:YFWA
└ Status: Healthy
└ Containers: 4 (2 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=kafka, machinetype=A, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T15:13:31Z
└ ServerVersion: 1.12.2-rc3
kafka2-prod: 192.168.0.23:2376
└ ID: 5NDH:T7R3:FNCQ:KRCD:DYWR:IPKP:N5E7:UQWS:L652:WBJU:B3AC:XALX
└ Status: Healthy
└ Containers: 4 (2 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=kafka, machinetype=A, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:46Z
└ ServerVersion: 1.12.2-rc3
kafka3-prod: 192.168.0.41:2376
└ ID: DHHL:3RRQ:UBQT:MVEP:FNSG:YZWW:VEHL:BODL:DHJ7:Y3LM:3AVU:CLEA
└ Status: Healthy
└ Containers: 3 (1 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=kafka, machinetype=A, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:45Z
└ ServerVersion: 1.12.2-rc3
logsingest1-prod: 192.168.0.10:2376
└ ID: IRRT:MFUQ:PRST:KRLG:JPB3:UQIH:HTQZ:HQMI:UEEL:VPIC:KBSU:SOVY
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 8.178 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=logsingest, machinetype=F, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:57Z
└ ServerVersion: 1.12.1
logsingest2-prod: 192.168.0.12:2376
└ ID: KUE7:ON2S:5ZJA:LUYB:C3PL:ZMRQ:3UFF:FB4I:KTK6:Y2JU:WQNO:AAYE
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 8.178 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=logsingest, machinetype=F, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:44Z
└ ServerVersion: 1.12.1
logsingest3-prod: 192.168.0.65:2376
└ ID: DTNR:U5OU:VQ3G:D5QI:BK4B:BFDM:EGXG:MZYN:ZXYX:S3PW:SSVE:RLPH
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 8.178 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=logsingest, machinetype=F, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:05Z
└ ServerVersion: 1.12.1
logsingest4-prod: 192.168.0.76:2376
└ ID: XR3B:VL4Y:N5Z6:MMIJ:TXLM:ETWB:RWDN:ZIZ4:QCJV:LRSF:4BLO:U46C
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 8.178 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=logsingest, machinetype=F, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:38Z
└ ServerVersion: 1.12.1
misc1-prod: 192.168.0.58:2376
└ ID: UNJV:X24R:4HHN:QBSM:26JE:557L:P2SN:ROI2:DIO7:RG3H:RDCL:66XL
└ Status: Healthy
└ Containers: 6 (6 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=misc, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:46Z
└ ServerVersion: 1.12.1
mysql1-prod: 192.168.0.24:2376
└ ID: XQXD:4UBK:CD4J:PBLY:XH7U:F7RQ:NG2W:AHRE:KVMI:QWXF:IX2Z:3D3J
└ Status: Healthy
└ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 14.38 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=mysql, machinetype=G, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:32Z
└ ServerVersion: 1.12.2-rc1
rabbitmq1-prod: 192.168.0.51:2376
└ ID: W76V:2SSI:NGU5:AQ7S:34BP:WXDL:ZT5I:QH6N:52J7:7VSH:B44O:34IB
└ Status: Healthy
└ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=elasticmaster, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:02Z
└ ServerVersion: 1.12.1
rabbitmq2-prod: 192.168.0.54:2376
└ ID: EHJD:N2N7:42FF:Z2D5:I4IX:JIDY:FZL2:CNOR:GT6U:IT52:BQBY:TH3P
└ Status: Healthy
└ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=elasticmaster, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:45Z
└ ServerVersion: 1.12.1
statsingest1-prod: 192.168.0.36:2376
└ ID: NMWC:DDRQ:7WD3:UNJI:TIXG:S4PJ:2BBU:V3U6:PGRF:NHV5:J7NR:YJRR
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=statsingest, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:57Z
└ ServerVersion: 1.12.2-rc2
statsingest2-prod: 192.168.0.37:2376
└ ID: I3J3:U24U:RXS4:IG2A:RDJI:YZMJ:AG76:PBF4:WLP3:W5CQ:3JWD:HMIE
└ Status: Healthy
└ Containers: 5 (3 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=statsingest, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:23Z
└ ServerVersion: 1.12.2-rc2
statsingest3-prod: 192.168.0.38:2376
└ ID: 6MYP:GQB4:62NH:O7LA:R6XA:HL7E:LJ5T:DNGD:2YI7:OEGV:FFRZ:DWSH
└ Status: Healthy
└ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=statsingest, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:32Z
└ ServerVersion: 1.12.2-rc2
statsingest4-prod: 192.168.0.57:2376
└ ID: GTD2:7ECC:AGA5:5EXW:SOPN:HQ7E:B4VH:SEXI:MEDL:WSLI:Q73D:XDAX
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=statsingest, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:53Z
└ ServerVersion: 1.12.2-rc2
statsingest5-prod: 192.168.0.68:2376
└ ID: OXYU:5FP6:TJF6:LU2A:ROYT:QQRS:IJVH:UWCN:MV2N:6IGZ:A2SK:HOZG
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=statsingest, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:45Z
└ ServerVersion: 1.12.1
statsingest7-prod: 192.168.0.70:2376
└ ID: XWX2:WZOE:MMDA:CWIO:FDJI:UCAF:2QWQ:KSHM:JQPP:44SH:OZCD:PICF
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=statsingest, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:59Z
└ ServerVersion: 1.12.1
statsingest8-prod: 192.168.0.71:2376
└ ID: RATL:ESMB:C2OC:YT2C:J5C7:PS7I:OUQ3:HHMB:C4HM:TCIW:T263:PISF
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=statsingest, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:35Z
└ ServerVersion: 1.12.1
swarm-master1-prod: 192.168.0.7:2376
└ ID: 3QMG:LEEC:YDC7:43CZ:TLKB:4OTF:RNT4:77JW:E23L:OLIY:ANGI:ZPFL
└ Status: Healthy
└ Containers: 2 (2 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 4
└ Reserved Memory: 0 B / 14.38 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=swarm, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:05Z
└ ServerVersion: 1.12.2-rc2
syslogserver1-prod: 192.168.0.75:2376
└ ID: 6BKS:IAPU:RPW4:JESV:MTHP:5J4D:CJPQ:2O7X:CBPN:2GSW:GFBI:PCWX
└ Status: Healthy
└ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=syslog, machinetype=A, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:52Z
└ ServerVersion: 1.12.1
useralerts1-prod: 192.168.0.9:2376
└ ID: YO2X:LGNQ:AR3C:QE22:OFIS:XG5I:I26C:ZHFW:6NRY:CW72:TNPQ:OJT4
└ Status: Healthy
└ Containers: 4 (2 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: containerslots=4, kernelversion=4.4.0-36-generic, machinerole=useralerts, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:46Z
└ ServerVersion: 1.12.2-rc3
useralerts2-prod: 192.168.0.25:2376
└ ID: APUX:APG6:F4JP:TUVV:UYNY:ZH46:ZASI:GWNZ:HT34:VBLJ:ASJI:2TUH
└ Status: Healthy
└ Containers: 3 (2 Running, 0 Paused, 1 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 4.043 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=useralerts, machinetype=D, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:10Z
└ ServerVersion: 1.12.2-rc3
webapi1-prod: 192.168.0.29:2376
└ ID: QNSK:SFKU:HIQN:7JPJ:I56F:46YS:WJTV:V2PJ:ATOJ:L5DV:5VX4:CNXT
└ Status: Healthy
└ Containers: 4 (2 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=webapi, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:50Z
└ ServerVersion: 1.12.2-rc2
webapi2-prod: 192.168.0.30:2376
└ ID: EQCK:AJY3:FSLX:643K:Y5RU:RDBC:POTP:LRJQ:LILX:OIAR:MKBH:YH7S
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=webapi, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:50Z
└ ServerVersion: 1.12.1
webapi3-prod: 192.168.0.31:2376
└ ID: IUNK:GXB3:A6PX:X2AS:5BJF:GAUT:HZWU:GX3K:YJCC:UAWN:W7OO:EKJ5
└ Status: Healthy
└ Containers: 5 (5 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=webapi, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:02Z
└ ServerVersion: 1.12.1
webapi4-prod: 192.168.0.32:2376
└ ID: VSOC:YZ4O:EE2P:F5PM:IM6V:OJD4:ZF4I:3SSG:ZKAF:3IJK:JP67:BL53
└ Status: Healthy
└ Containers: 5 (4 Running, 0 Paused, 1 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=webapi, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:50Z
└ ServerVersion: 1.12.2-rc1
webapi5-prod: 192.168.0.59:2376
└ ID: 3LXE:GJCH:PVPA:JLOX:JS5R:CSYF:4UW6:HJMH:6JBI:FDKZ:SXLB:MFY6
└ Status: Healthy
└ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=webapi, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:41Z
└ ServerVersion: 1.12.1
webapi6-prod: 192.168.0.67:2376
└ ID: XT73:ATDK:BDO7:EVBE:NGCE:FD5Z:LR6B:5IGK:P2CI:XBBL:WERX:Q2TH
└ Status: Healthy
└ Containers: 4 (4 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-38-generic, machinerole=webapi, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:55Z
└ ServerVersion: 1.12.1
webfrontend1-prod: 192.168.0.39:2376
└ ID: 5KOQ:TWQ4:4R3V:XSTA:EVGN:4OA3:DTRV:3D7F:23X6:ZQL4:HKR5:WC36
└ Status: Healthy
└ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=webfrontend, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:36Z
└ ServerVersion: 1.12.1
webfrontend2-prod: 192.168.0.40:2376
└ ID: HEHF:ECG2:MCDD:32YI:N7WC:OIUT:ZBX6:75YH:3XQL:6BQN:JDVY:M6YN
└ Status: Healthy
└ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=webfrontend, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:59Z
└ ServerVersion: 1.12.1
ws1-prod: 192.168.0.34:2376
└ ID: UQBT:H57P:QLJO:D3NH:EC2C:RJVP:53IA:JHZF:42OB:MOXZ:LEOG:O6SO
└ Status: Healthy
└ Containers: 22 (20 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=ws, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:40Z
└ ServerVersion: 1.12.2-rc3
ws2-prod: 192.168.0.35:2376
└ ID: O6VG:UPQH:LRGB:W5UP:LDXK:WIZS:XA6F:MMVF:IF5A:CPHY:UCFY:LOBE
└ Status: Healthy
└ Containers: 22 (20 Running, 0 Paused, 2 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=ws, machinetype=B, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:50Z
└ ServerVersion: 1.12.2-rc3
wslb1-prod: 192.168.0.13:2376
└ ID: EXTR:OXNH:OZ5Q:3NSD:PZ6L:QPZV:GTBN:TW5W:SE5U:L3WA:P7C4:MNDN
└ Status: Healthy
└ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=wslb, machinetype=A, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:25:51Z
└ ServerVersion: 1.12.1
wslb2-prod: 192.168.0.14:2376
└ ID: N6JR:J53Y:YVGH:WMBW:PI3T:D4UJ:XPKZ:GL32:SEAP:NL7A:EYP4:3LDQ
└ Status: Healthy
└ Containers: 3 (3 Running, 0 Paused, 0 Stopped)
└ Reserved CPUs: 0 / 2
└ Reserved Memory: 0 B / 7.144 GiB
└ Labels: kernelversion=4.4.0-36-generic, machinerole=wslb, machinetype=A, operatingsystem=Ubuntu 16.04.1 LTS, provider=azure, storagedriver=aufs
└ UpdatedAt: 2016-10-11T17:26:04Z
└ ServerVersion: 1.12.1
Plugins:
Volume:
Network:
Swarm:
NodeID:
Is Manager: false
Node Address:
Security Options:
Kernel Version: 4.4.0-38-generic
Operating System: linux
Architecture: amd64
CPUs: 212
Total Memory: 728.8 GiB
Name: 8bbc54d48869
Docker Root Dir:
Debug Mode (client): false
Debug Mode (server): false
WARNING: No kernel memory limit support

Output of `docker network inspect` for the overlay network:

[
{
"Name": "dockeruser_my-net",
"Id": "d025aa804d79cc3d6919c30e3488e838e85b75c5a41b67a9c367a8871370d9d1",
"Scope": "global",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "10.0.7.0/24",
"Gateway": "10.0.7.1"
}
]
},
"Internal": false,
"Containers": {
"00a215889c6bf3056cd8c60e7c2c722fa5601d4c31f6f3248511c7f56f4c0c69": {
"Name": "dockeruser_ws_statistics_6",
"EndpointID": "d880f96767bb6e1de85531e9166b95fb7b82a856c92b3cc99f7618d20fb2bc96",
"MacAddress": "02:42:0a:00:07:82",
"IPv4Address": "10.0.7.130/24",
"IPv6Address": ""
},
"023d0547ecea9114da1131f2ecdf040215aaa29709eaa128e8106c850dd21735": {
"Name": "dockeruser_webfrontend_2",
"EndpointID": "d02e4ae174414155f5a930a7057fe5d8cf540728a1eacffce90b21bf4b833b2b",
"MacAddress": "02:42:0a:00:07:8f",
"IPv4Address": "10.0.7.143/24",
"IPv6Address": ""
},
"0361cda17188bd245a0aef184e6c9b2ef982712668ea3e5ad38cc3bc08b42520": {
"Name": "dockeruser_ws_maintenance_1",
"EndpointID": "839295a7f24db07d4f760929a3182a8979829b337db690cda5dc1c66d45385b7",
"MacAddress": "02:42:0a:00:07:95",
"IPv4Address": "10.0.7.149/24",
"IPv6Address": ""
},
"07d1ddfe40b613014a823e2a82175311b90c3495fcb7502e711366a65e2e7406": {
"Name": "dockeruser_webapi_2",
"EndpointID": "764c723488210269762f5f024ae872c4d8b2ffbf2c0ad4643fc67ffeb6ed9bdd",
"MacAddress": "02:42:0a:00:07:6c",
"IPv4Address": "10.0.7.108/24",
"IPv6Address": ""
},
"0b0094eccdfcc904c0309b930b8ff852d84754aea6e1ad4ca17a637eda9dbc13": {
"Name": "dockeruser_statsingest_4",
"EndpointID": "73d3deb814f4015a74657c428b41ba563afacd2a3af6f629c96d5c2567c71ec1",
"MacAddress": "02:42:0a:00:07:42",
"IPv4Address": "10.0.7.66/24",
"IPv6Address": ""
},
"1096235e0e612231c3ac8fe1447c01122907744b09da766aa5b191864c7e1965": {
"Name": "dockeruser_ws_maintenance_7",
"EndpointID": "524bc792a4663761322eb2ecca972defd9caf6e7b4c32b8f39627d9adc97952f",
"MacAddress": "02:42:0a:00:07:96",
"IPv4Address": "10.0.7.150/24",
"IPv6Address": ""
},
"121026bc16692a732e61f52ed897f41bc4049a2416a7082182608a3417678744": {
"Name": "dockeruser_statsingest_8",
"EndpointID": "2994c198ebb7d37e90ca2c562d41e42df3bfb42624879a002c34c499503a4990",
"MacAddress": "02:42:0a:00:07:48",
"IPv4Address": "10.0.7.72/24",
"IPv6Address": ""
},
"156ef51ebc55447302742ca573a7441db98eff203cf2b84ba7d4ffba877d1d55": {
"Name": "dockeruser_logsingest_6",
"EndpointID": "1334ac4bc70c9679fce87ec3095c8ce405235f142cceb729bcf4272df12227d9",
"MacAddress": "02:42:0a:00:07:5e",
"IPv4Address": "10.0.7.94/24",
"IPv6Address": ""
},
"1608203ec7eb8c0dfa3eb2e0f2a81f29a64d922ff1298d4ae9af2dd513c7447d": {
"Name": "dockeruser_statsingest_5",
"EndpointID": "9d405d98beb3f9ddab8197521675897b88432bd2ec3d25e68cd9d925c8677b69",
"MacAddress": "02:42:0a:00:07:43",
"IPv4Address": "10.0.7.67/24",
"IPv6Address": ""
},
"16587eff7eb39c3c746f5854012e484f910fbdc74f42bb19a028ca1cfa5a0578": {
"Name": "dockeruser_batchprocessing_3",
"EndpointID": "963cee47b2be485fcbb3904e2a02864fa16981892e26b3d04cfec60bf065bf1c",
"MacAddress": "02:42:0a:00:07:24",
"IPv4Address": "10.0.7.36/24",
"IPv6Address": ""
},
"16fa347a92098a647d5f2c88c4a9a41bf4aa3e0db3e88f1afdb756a09e788f41": {
"Name": "dockeruser_statsingest_1",
"EndpointID": "14c0337006d66771633ee2b77065810217a83b1d9f763b8254b7d6f79e253917",
"MacAddress": "02:42:0a:00:07:41",
"IPv4Address": "10.0.7.65/24",
"IPv6Address": ""
},
"196b7e14e2ef3d0623b76e2165708544712501a626d665686f07a335ded77828": {
"Name": "dockeruser_useralerts_1",
"EndpointID": "70147298e823dee10419e8aa78e251696ba7516a932d704e092092fe018de349",
"MacAddress": "02:42:0a:00:07:51",
"IPv4Address": "10.0.7.81/24",
"IPv6Address": ""
},
"1b0ca2f327b1187ec392b624f89fb07cd7d573c188f46a58d46fe0fb576bc21b": {
"Name": "dockeruser_elasticclient_1",
"EndpointID": "11dec083251a8dd4004791cae2047ae22b295eb0a75c021f995e52833e30608c",
"MacAddress": "02:42:0a:00:07:1e",
"IPv4Address": "10.0.7.30/24",
"IPv6Address": ""
},
"1dcb411e6d1bdf567516710925053a71d58754ecfe7b39538ec4f1cfee386048": {
"Name": "dockeruser_statsingest_14",
"EndpointID": "a2c421ca53d4f3b2961d7fe6344258f71d9f7894fbef711946241ffecb493029",
"MacAddress": "02:42:0a:00:07:4d",
"IPv4Address": "10.0.7.77/24",
"IPv6Address": ""
},
"1de944286fb61728b53917e5b4d68b544eb72b46ebc8b13d99244b5f3571ac3a": {
"Name": "dockeruser_logsingest_7",
"EndpointID": "e651488da96e6dee539f3dbfe9ba01873b4f3c4fe07719677e6a405b92eb056c",
"MacAddress": "02:42:0a:00:07:55",
"IPv4Address": "10.0.7.85/24",
"IPv6Address": ""
},
"2061f9213b573145b8d70cc38519097c2166d2de7d305c658b6d528e851ccd60": {
"Name": "dockeruser_ws_maintenance_3",
"EndpointID": "246062c213daad8c79d13b73eca5f723c50db9c862aa4ac7ea065bff39f4e889",
"MacAddress": "02:42:0a:00:07:8c",
"IPv4Address": "10.0.7.140/24",
"IPv6Address": ""
},
"214a00dd9412f8213bdd4716a0346431e1b24cd00f756426ffc3b233ffcab632": {
"Name": "dockeruser_ws_maintenance_2",
"EndpointID": "79c95969f30c5cf6fd52f8dccc9a3e5e3495fae5f590e1cb9f2b2d31e2a173d4",
"MacAddress": "02:42:0a:00:07:8a",
"IPv4Address": "10.0.7.138/24",
"IPv6Address": ""
},
"215b330a6b404b332964f5b35d14340ef43f6f13e3a8e8539d766c6a6b92c104": {
"Name": "dockeruser_elasticdata_5",
"EndpointID": "aed6116de0a35891716bf91cfd374c2b53b2dc5ae260f9ceb154cb891e9a6c30",
"MacAddress": "02:42:0a:00:07:11",
"IPv4Address": "10.0.7.17/24",
"IPv6Address": ""
},
"21b862ca434e3268187b8aa057a3c560be998e9ddf2eeb382c836dc947c12808": {
"Name": "dockeruser_ws_maintenance_6",
"EndpointID": "efcbc7426adfa5e0930c315f4b4d30ec7668b2866b9bdd0aa4005ebc2ded58fa",
"MacAddress": "02:42:0a:00:07:8d",
"IPv4Address": "10.0.7.141/24",
"IPv6Address": ""
},
"2247f8cfb82ad40475ddd9735b40653e5ac8138f503dde5f7ef697986a7b4d2c": {
"Name": "dockeruser_ws_queries_4",
"EndpointID": "6350edf49d4980e460a561c25d4b66129c6e4150d62658145f5be03ac2b86537",
"MacAddress": "02:42:0a:00:07:98",
"IPv4Address": "10.0.7.152/24",
"IPv6Address": ""
},
"224ab18f6df8cb35d22f53abe2b2f305b3c3b942e8ab5411d76561d619a19e49": {
"Name": "dockeruser_statsingest_10",
"EndpointID": "67e657d6c33a1f659e43baf2a6cba51fe8e51aa89584093cdfc4a49ee68c1cae",
"MacAddress": "02:42:0a:00:07:45",
"IPv4Address": "10.0.7.69/24",
"IPv6Address": ""
},
"22853e976f2f177a4c2e50ae7ab909699e73f171c129e0200b57e0b5d3f1aa77": {
"Name": "dockeruser_ws_alerts_6",
"EndpointID": "e1ce73833b27aa2f6375099e4ef371c360b7ee90233f3abf3bf4323f68e342c5",
"MacAddress": "02:42:0a:00:07:7d",
"IPv4Address": "10.0.7.125/24",
"IPv6Address": ""
},
"237ac1edb4245c6c3d6f1bbe38d3301d837cb7600fd50b2efb60031a75221041": {
"Name": "dockeruser_ws_statistics_10",
"EndpointID": "4f3085270374d1bf869a8d6a08a842936fcc59e6d49f222c7308e4223468ce41",
"MacAddress": "02:42:0a:00:07:80",
"IPv4Address": "10.0.7.128/24",
"IPv6Address": ""
},
"25856c051d5430401e6eaa36814f8b636c04a5160347f0883035e360f9c60d5c": {
"Name": "dockeruser_couchbase3_1",
"EndpointID": "43077fffd43a1253dc5cdc9a383846c51411539860b7e8cffac4baee7b9b211e",
"MacAddress": "02:42:0a:00:07:17",
"IPv4Address": "10.0.7.23/24",
"IPv6Address": ""
},
"27c444203ab5c00fd4aac224219b8e41068ff08c116bae525a4565526aae770f": {
"Name": "dockeruser_ws_queries_3",
"EndpointID": "d41d5b50d2df2470b082fa71ee27de277e1288755bb1ca44db8ac52ec5f4973f",
"MacAddress": "02:42:0a:00:07:9f",
"IPv4Address": "10.0.7.159/24",
"IPv6Address": ""
},
"2fa35e28f2ba2c536e79085aafe2ab0c2e5253981f7c165c2314727392ad6361": {
"Name": "dockeruser_ws_statistics_8",
"EndpointID": "d183cd5144a66be2fce6d79e8a7e5c635c450fec6bc4d3e30981ad2dd3838af7",
"MacAddress": "02:42:0a:00:07:81",
"IPv4Address": "10.0.7.129/24",
"IPv6Address": ""
},
"2febbe8a9a103357f9697601af76385569cd798df933b87be14b8a3d2700e259": {
"Name": "dockeruser_ws_alerts_10",
"EndpointID": "f1ad6e1f308bcf12088777cc24db8deee88de7a16586af7ff9d55dd56481fe29",
"MacAddress": "02:42:0a:00:07:79",
"IPv4Address": "10.0.7.121/24",
"IPv6Address": ""
},
"31cacd3029eabe55e7a1bcfb49d1c152a56c12e75eb35227cfce4c3b28de3351": {
"Name": "dockeruser_elasticdata_8",
"EndpointID": "43831dc67046badf1f3c667a13445036b43c9b9450a56bcf080dd4f7d6234431",
"MacAddress": "02:42:0a:00:07:0f",
"IPv4Address": "10.0.7.15/24",
"IPv6Address": ""
},
"322ca4f6ae21a71ff0a98da14dcd912cb8fd410fbe96b65dd536a6ec63d03708": {
"Name": "dockeruser_webapi_5",
"EndpointID": "97d25d298134c844af5dc1d750fc891668c8b01752505ca9925db01206d202a4",
"MacAddress": "02:42:0a:00:07:73",
"IPv4Address": "10.0.7.115/24",
"IPv6Address": ""
},
"387dfc399a9c5f45b142b8645cb6761189e4af261db9e43a43c8588fb9dae755": {
"Name": "dockeruser_notificationmanager_2",
"EndpointID": "47c84d2f768c2d1e069ea96b5ea79f95ba4cffe2a83944df92c980bd2fdc436c",
"MacAddress": "02:42:0a:00:07:a5",
"IPv4Address": "10.0.7.165/24",
"IPv6Address": ""
},
"3ad4cb05ed41dc206d6676e15ac1997de9d1caa5f46fc23d127e33774e72c212": {
"Name": "dockeruser_elasticmaster_2",
"EndpointID": "11338b9f854c41e055d43df0334db6a08d4d1f4a722f697a70c61e15ec5582d6",
"MacAddress": "02:42:0a:00:07:0a",
"IPv4Address": "10.0.7.10/24",
"IPv6Address": ""
},
"3eb347f4c45e1606ee281a313d3f653b5e304bdd97f4a175efd1f6e603a8f63b": {
"Name": "dockeruser_elasticdata_2",
"EndpointID": "5a0fe2c0781877524741a0cc62bf21e1b14b0f123865d8b91d14b24e29dc85fe",
"MacAddress": "02:42:0a:00:07:10",
"IPv4Address": "10.0.7.16/24",
"IPv6Address": ""
},
"3f70de2581723b2318e374ca3f019e6a9dcdfb48ab404e30e9b05ac430745a51": {
"Name": "dockeruser_ws_queries_1",
"EndpointID": "c3448a6bf583e9cd0cacb6efcc9b8305d2a3d1f809b902813d6ae340cddec63d",
"MacAddress": "02:42:0a:00:07:9c",
"IPv4Address": "10.0.7.156/24",
"IPv6Address": ""
},
"3f7bf19b7e4a1d59145f9060e75b164c17d4dcc13ceafc37078136c8db61f709": {
"Name": "dockeruser_statsingest_6",
"EndpointID": "5ef96f7823f658e0d8dcde48a4742e4a417b53c1ea18277b947b18c9572bfa3a",
"MacAddress": "02:42:0a:00:07:40",
"IPv4Address": "10.0.7.64/24",
"IPv6Address": ""
},
"48c99de263efe826cd4f8867fa161c188f822705aacc944af5f3f5827ab8c80d": {
"Name": "dockeruser_elasticdata_6",
"EndpointID": "3a01b4cad29da9e7ef780b568eb3adfd3e1999499b0a560033d76e82cc8e4c8c",
"MacAddress": "02:42:0a:00:07:0e",
"IPv4Address": "10.0.7.14/24",
"IPv6Address": ""
},
"498fd73172cc5232af2c6ff37dde140d51622cbbc8528a1caa185916a4798d46": {
"Name": "dockeruser_logsingest_3",
"EndpointID": "cded313d257424837de885ccd617fc41f0c613eecbe57892e9f4f95140a76661",
"MacAddress": "02:42:0a:00:07:52",
"IPv4Address": "10.0.7.82/24",
"IPv6Address": ""
},
"499edd1c6a31153b9a996270d9b6ca43ae3a4637169f65308634699a0fb8b956": {
"Name": "dockeruser_wslb_1",
"EndpointID": "2a202b8c3037836feff46e6a63d4467e5f4a7be8fc791b46d8a80dd3841145bd",
"MacAddress": "02:42:0a:00:07:1c",
"IPv4Address": "10.0.7.28/24",
"IPv6Address": ""
},
"4a113f1ffe308953a2dffe60d22fcb466246efca9c28630fb9bc0544cab66755": {
"Name": "dockeruser_elasticmaster_3",
"EndpointID": "65a679125b97753977ada148406b9b9b57130fd51739ab3e8d02568d8d8552cb",
"MacAddress": "02:42:0a:00:07:09",
"IPv4Address": "10.0.7.9/24",
"IPv6Address": ""
},
"4b60af50bbc93f399368d36f7ddfc2720cc686859511245b4fae4d7dc8abf91c": {
"Name": "dockeruser_notificationmanager_1",
"EndpointID": "eb593daca1ecbfef2482c3177656fbe3bf57307102626142a4cb46a0696c8b95",
"MacAddress": "02:42:0a:00:07:a3",
"IPv4Address": "10.0.7.163/24",
"IPv6Address": ""
},
"4d2655e268988c1a7f53cb4833d09f199efdcb015e45a8312c0d05d398dd477f": {
"Name": "dockeruser_ws_queries_9",
"EndpointID": "935c12db07a804c6b99ac977d0c1c09c037d799e7faf3acd8c798f6f182af14e",
"MacAddress": "02:42:0a:00:07:9d",
"IPv4Address": "10.0.7.157/24",
"IPv6Address": ""
},
"4d545420313b607c3c0977d9eb72ce43d7ebd136b353dd262217ac81e53def33": {
"Name": "dockeruser_logsingest_1",
"EndpointID": "5c2f9ef01946255061927c37562ca7480826c8b38e24e631cc640b46e9a73061",
"MacAddress": "02:42:0a:00:07:53",
"IPv4Address": "10.0.7.83/24",
"IPv6Address": ""
},
"4e7e14bf5018054bd74e1246c113c665e18b54228069731b43aeedf236ea3318": {
"Name": "dockeruser_categorization_11",
"EndpointID": "aa8847ddb28d87aeb487eb6b1c5ed26b6dcedfc031e9292b0fa54b4f12d3515e",
"MacAddress": "02:42:0a:00:07:2e",
"IPv4Address": "10.0.7.46/24",
"IPv6Address": ""
},
"4ead9d446830f41cf035f917a5365d5fd2d777a00f806e659ed3545e5f46536a": {
"Name": "dockeruser_statsingest_17",
"EndpointID": "40af0cc154e628f634e8fc0ce348383ef1504b483e38a750959e7303e69d7709",
"MacAddress": "02:42:0a:00:07:3c",
"IPv4Address": "10.0.7.60/24",
"IPv6Address": ""
},
"52544dfb75dd350313ef20d40dae5d0bf607ff6c8a0f414c91678f23e37b8217": {
"Name": "dockeruser_ws_statistics_3",
"EndpointID": "abbe9deb74155650c05116186c0dfc76badc4d140a6e613cb33756b9125e2c59",
"MacAddress": "02:42:0a:00:07:87",
"IPv4Address": "10.0.7.135/24",
"IPv6Address": ""
},
"55fb8df600804b1d1685cd0b665ad412d5fc2a1a9eed6c029ee98b28cb5e5072": {
"Name": "dockeruser_statsingest_16",
"EndpointID": "e68fbe8e24917a7bf811d8aa7149ed7c824d892ca4f47cf79b4521e5fbf4bde1",
"MacAddress": "02:42:0a:00:07:46",
"IPv4Address": "10.0.7.70/24",
"IPv6Address": ""
},
"56cb257b95bfad6f628c83d6f32f6d328495c62fefccbd7ba105ed5e84d1882d": {
"Name": "dockeruser_rabbit1_1",
"EndpointID": "9b23056c9073f4f2414bbd32841d06daca844742b9fa9fc91ce93dcedd5b913a",
"MacAddress": "02:42:0a:00:07:04",
"IPv4Address": "10.0.7.4/24",
"IPv6Address": ""
},
"58c7dd431caff5422d4c02b282b7c844bade13e9e350f240700dbd718baf5b43": {
"Name": "dockeruser_counters_general_2",
"EndpointID": "f872e37a1748221abf70b3eed8fc26b9c95fa7bc5c2d2db7d6531870888d3c4c",
"MacAddress": "02:42:0a:00:07:66",
"IPv4Address": "10.0.7.102/24",
"IPv6Address": ""
},
"60c22e9b8cd6bb29c078f752d99e6b7227edb01e6cde7468a2f46dde177e126f": {
"Name": "dockeruser_batchprocessing_4",
"EndpointID": "52af9a20a1daee0e7b020de811ebd05637061acde7e3dd5b4f6293a672626a7a",
"MacAddress": "02:42:0a:00:07:25",
"IPv4Address": "10.0.7.37/24",
"IPv6Address": ""
},
"61b785f5a9bd6143cb816ef8a5dfcd778c162535d1422c30b583c5b9db2b596c": {
"Name": "dockeruser_zookeeper_1",
"EndpointID": "99a178365d79e03d831f685333239b3c9da6e0dd740d2c81c461fb6a1f1f5e47",
"MacAddress": "02:42:0a:00:07:06",
"IPv4Address": "10.0.7.6/24",
"IPv6Address": ""
},
"6500749fb6936a25aa697b2a2187e24a8df630d8aeb3805980fd28bf57261924": {
"Name": "dockeruser_ws_alerts_4",
"EndpointID": "ae5a22fdd5b6ccf27fb06004403b95d5b0cb415ce3321f6f4c5b6a318c98a89f",
"MacAddress": "02:42:0a:00:07:7c",
"IPv4Address": "10.0.7.124/24",
"IPv6Address": ""
},
"65057f7d4edff5346af43f9e368a902565eba053524a1b0500658a72cb67ee5f": {
"Name": "dockeruser_categorization_2",
"EndpointID": "96d7fdcf3d3f449f0db170baa540f00a7d041f8af95ce513da4584ec68d7f946",
"MacAddress": "02:42:0a:00:07:38",
"IPv4Address": "10.0.7.56/24",
"IPv6Address": ""
},
"672ce76b24e5ca5b3d66a0fbdbeb00351adb28237a07436a54b57f2fd78b0447": {
"Name": "dockeruser_statsingest_15",
"EndpointID": "5250bb76f99930cc9c7fa9c4f28c74dd694ded1cf40463ecce47ba16daf4c4ef",
"MacAddress": "02:42:0a:00:07:44",
"IPv4Address": "10.0.7.68/24",
"IPv6Address": ""
},
"6b1a589d039ed02b6ab2ff0c2ff0ba46a89a66c256d767185c51c945a38cc758": {
"Name": "dockeruser_statsingest_9",
"EndpointID": "6f611234443a5a127e42c6afaaf1dee9f3a4d179a04f867ff09b7b5f5a0fd910",
"MacAddress": "02:42:0a:00:07:3b",
"IPv4Address": "10.0.7.59/24",
"IPv6Address": ""
},
"6c81d7770666e652ec4ff75733c87946bf62e49dd8b5f219c52f2c84e3132c39": {
"Name": "dockeruser_ws_maintenance_9",
"EndpointID": "c9a4df0acb157faee249f13623a1404eba28b5dbd4378926c5ee0506ac4137d2",
"MacAddress": "02:42:0a:00:07:92",
"IPv4Address": "10.0.7.146/24",
"IPv6Address": ""
},
"6cbc07e80b7abb67d1b925aa6d807993b46719645f966c452c38169ee8313959": {
"Name": "dockeruser_rabbit2_1",
"EndpointID": "3316c2d131872229e432b302e837e8f074d43509356fa7ea010be41facb408ba",
"MacAddress": "02:42:0a:00:07:03",
"IPv4Address": "10.0.7.3/24",
"IPv6Address": ""
},
"6d264b04ebc64674b4c45d69acabc959705c591434073a55bb29e89231d5ae37": {
"Name": "dockeruser_categorization_18",
"EndpointID": "c90bda7728f2be7b7b9cfea332a22bdca5bc9e27de323ff4bca8b5b612bf7f43",
"MacAddress": "02:42:0a:00:07:28",
"IPv4Address": "10.0.7.40/24",
"IPv6Address": ""
},
"71440861d695ca420574c9654cd41b4d87586598e720d33a04ea71f2a9959a35": {
"Name": "dockeruser_useralerts_4",
"EndpointID": "a3c71f7bd0f8a4a3c8f3751ed0f4af7c7699a37224c11cbf047c78bd449eaf12",
"MacAddress": "02:42:0a:00:07:4e",
"IPv4Address": "10.0.7.78/24",
"IPv6Address": ""
},
"718f1c9d699955775c882f3593661520d5f188dffa1e967e2ea3a73b10a75d25": {
"Name": "dockeruser_ws_alerts_3",
"EndpointID": "4815b8ca084d32ad4573e6d302b5a2cce3166c58ca78f4e404cf8d5e9743b13e",
"MacAddress": "02:42:0a:00:07:76",
"IPv4Address": "10.0.7.118/24",
"IPv6Address": ""
},
"73e4055fdd8b3f6628aaf0c01ca6e72620b7bd090d2c6713e63a5ff58588a23a": {
"Name": "dockeruser_redis_1",
"EndpointID": "7f454e45ee46a97d8a1f93f96020347b75d797f179cc6ccca695cb101cca1bc4",
"MacAddress": "02:42:0a:00:07:19",
"IPv4Address": "10.0.7.25/24",
"IPv6Address": ""
},
"7833eb5a905706458abcea226241a83bd416f6fbd5bf263b6a9284c1062aa757": {
"Name": "dockeruser_ws_alerts_2",
"EndpointID": "f33ab74e89c179c10b7a5cf9bdde23654034bb2387c8456b6c28962b2e01d4be",
"MacAddress": "02:42:0a:00:07:7b",
"IPv4Address": "10.0.7.123/24",
"IPv6Address": ""
},
"7bb6c8f52d933f5b898a313abb24483a70e92c4ea4ecce5126ab61f0b608d206": {
"Name": "dockeruser_ws_queries_10",
"EndpointID": "3a693d7362f97e5976b985678de7dae33abcf0a3cdd9759088d0beaa5eab8146",
"MacAddress": "02:42:0a:00:07:9a",
"IPv4Address": "10.0.7.154/24",
"IPv6Address": ""
},
"7bd91ec4796f46580f401d2b5c212526bf1fabbc24cd0c7ba514bee1445e3126": {
"Name": "dockeruser_ws_alerts_9",
"EndpointID": "31cadf63d48f93a50586f5d5f006089f371806900a1327b0f74a6fd4307e1e5f",
"MacAddress": "02:42:0a:00:07:7a",
"IPv4Address": "10.0.7.122/24",
"IPv6Address": ""
},
"7c23688e17d3121dcb337109cc6b244c7549a2c6a0bdad4104cfa656a6421b5d": {
"Name": "dockeruser_elasticdata_1",
"EndpointID": "022c2f03baec24b42760635f3a3766c87b28e9e004392f2e0bd1fdbf58a1d367",
"MacAddress": "02:42:0a:00:07:13",
"IPv4Address": "10.0.7.19/24",
"IPv6Address": ""
},
"7c2bef2178b6a26c9c26a677a0ad027bf69c2498a676564f86916dd37ebe9b46": {
"Name": "dockeruser_elasticdata_3",
"EndpointID": "6c3ac83d2247b476fa89d6843d61d040181e64ab7bcbe2c664546edf5e1f69c9",
"MacAddress": "02:42:0a:00:07:14",
"IPv4Address": "10.0.7.20/24",
"IPv6Address": ""
},
"7cd4cec9b12524159b515905524ded32304d4dee773552501a9537a4f5359741": {
"Name": "dockeruser_elasticdata_9",
"EndpointID": "99b5af90453fb4ac74634fac852539d2275b8a3d2dd48d28cecae766c11e4bf0",
"MacAddress": "02:42:0a:00:07:0d",
"IPv4Address": "10.0.7.13/24",
"IPv6Address": ""
},
"8069a1f5738338051017a42d430eb00c861793f55961f11149b9f111def2b607": {
"Name": "dockeruser_couchbase2_1",
"EndpointID": "74a2615078892c3b5536f5b9af0fcbee310698d1fa3646be4a637147d8bb492d",
"MacAddress": "02:42:0a:00:07:05",
"IPv4Address": "10.0.7.5/24",
"IPv6Address": ""
},
"82230c21ced83f7452b3770471748e0aae8d5343d2716f0304093e4499a15070": {
"Name": "dockeruser_statsingest_7",
"EndpointID": "5e9aa90420501519b137d0af2ecedb421d0f6bcb8f9b37b574ff7bc81d10e61d",
"MacAddress": "02:42:0a:00:07:3a",
"IPv4Address": "10.0.7.58/24",
"IPv6Address": ""
},
"837cb674cb1ea118c7ca5b1d0deecf2cb47822a17fe0b67f7a0a6b3767b37759": {
"Name": "dockeruser_categorization_14",
"EndpointID": "03dbc22a0d612572a3b5396d62cac7a6cd0ca61d8b6b591baf9537e18f11cecc",
"MacAddress": "02:42:0a:00:07:2c",
"IPv4Address": "10.0.7.44/24",
"IPv6Address": ""
},
"839fabd2afda284a92bc74f38247644d17296e3170f56f369c405bb9065c0f4e": {
"Name": "dockeruser_logsingest_2",
"EndpointID": "a569aa2cc7610e5b8537eae8d51ed9226f2e70d855ee0d881160c70a6b6d00d9",
"MacAddress": "02:42:0a:00:07:59",
"IPv4Address": "10.0.7.89/24",
"IPv6Address": ""
},
"84c2d031e368ca56e9ecabf87c200a57583f3472a3decc158b1a633ad5753924": {
"Name": "dockeruser_categorization_1",
"EndpointID": "6f648b37874d2ab6c43663351314e14f0d8d135eb758d1b7263acfdfd1192f69",
"MacAddress": "02:42:0a:00:07:39",
"IPv4Address": "10.0.7.57/24",
"IPv6Address": ""
},
"8cd5748184884d3827cda2e57820a27dceb97c21661e817006300738edf47d04": {
"Name": "dockeruser_statsingest_2",
"EndpointID": "a6558e3b3e8d432f772b8b50117a71586e14fca7d63ef9284784fbdeb8c6d43a",
"MacAddress": "02:42:0a:00:07:3d",
"IPv4Address": "10.0.7.61/24",
"IPv6Address": ""
},
"9536fc4e78c9de3c10d1fb149370492f0d88ce332a0b9667906ebff4f42f231c": {
"Name": "dockeruser_webapi_6",
"EndpointID": "f73a5e6f42073e5989d471561551b754d462a8b30c91e9aa79dd6c20097a33a2",
"MacAddress": "02:42:0a:00:07:6f",
"IPv4Address": "10.0.7.111/24",
"IPv6Address": ""
},
"96494dbd3543e476660bb90d6562601a98a1ae69a69c4529104e2468c216a633": {
"Name": "dockeruser_anomaly_1",
"EndpointID": "c2779b9c35100cc5c2dafec5de001814f9006f33551c4fec797f855c95fc6f2d",
"MacAddress": "02:42:0a:00:07:6a",
"IPv4Address": "10.0.7.106/24",
"IPv6Address": ""
},
"965ae4764f3b4d5336d3bb552b6d201c042c93d300591e3d3fbfa01203cdaf1b": {
"Name": "dockeruser_shipyard_1",
"EndpointID": "9103c637c746b6611cc6e96ff6900740af6a1021ee8135da293bda45d7f84ed7",
"MacAddress": "02:42:0a:00:07:94",
"IPv4Address": "10.0.7.148/24",
"IPv6Address": ""
},
"97595036c290926e8d094cf1eb9f5e06dddbe17b5c0f590384e549fbda17dd3f": {
"Name": "dockeruser_logsingest_8",
"EndpointID": "c865e886d0dace6897b7fb0e07a2af692e0be186861cf1d91ccb0b79d3e776ba",
"MacAddress": "02:42:0a:00:07:56",
"IPv4Address": "10.0.7.86/24",
"IPv6Address": ""
},
"98683bad0cac1e16e61f29e6a32a9b0b5cbe04c237c829768e26fdd356bf1f7a": {
"Name": "dockeruser_webapi_3",
"EndpointID": "651549214e2eeb94449636b125015c320fbe5d2d3475d7d4cf4659d13e87331e",
"MacAddress": "02:42:0a:00:07:71",
"IPv4Address": "10.0.7.113/24",
"IPv6Address": ""
},
"9e780c4bd30bc32d071d1b42f77fe60459145f8e9334ba5c4960132e33b83f11": {
"Name": "dockeruser_ws_maintenance_5",
"EndpointID": "dd9e2b1a56e77a59407b193d38af57f97e505c314921d4abc1c5480b047d8291",
"MacAddress": "02:42:0a:00:07:91",
"IPv4Address": "10.0.7.145/24",
"IPv6Address": ""
},
"9ff9bc40018297469d9392d769fe1d187e9826d1c4255d6d66d4db15ee3ff60f": {
"Name": "dockeruser_ws_statistics_9",
"EndpointID": "7aef85cac215864d18855972a39d811d5247fedb9621c27f27b37a6ef97151af",
"MacAddress": "02:42:0a:00:07:83",
"IPv4Address": "10.0.7.131/24",
"IPv6Address": ""
},
"a2578c5073fa1522fa8dd7d4b793a37d66c268d015823d608bdcd20bd55d603e": {
"Name": "dockeruser_webapi_1",
"EndpointID": "52aeb996ec6336e2beecc86990ac83217668baac9ae566017c9a8d9f27088391",
"MacAddress": "02:42:0a:00:07:6d",
"IPv4Address": "10.0.7.109/24",
"IPv6Address": ""
},
"a2a3903f54b5b7347a6272a8795adf5026227e32053cbb2442279562a47df003": {
"Name": "dockeruser_webfrontend_1",
"EndpointID": "58f3b1d49241a55a4e8991d7a3f889d27634ced4dcbb7b4347b86f1e9161baa5",
"MacAddress": "02:42:0a:00:07:8e",
"IPv4Address": "10.0.7.142/24",
"IPv6Address": ""
},
"a381d67be88fea795c4e98b3b251722c6b0aa8e59d781b3ea524fdc88ffe9e7a": {
"Name": "dockeruser_kafka_2",
"EndpointID": "7c2cf08b91c4d479176ed24d27184b285979b1400587053cd1490e8c3ff7a629",
"MacAddress": "02:42:0a:00:07:08",
"IPv4Address": "10.0.7.8/24",
"IPv6Address": ""
},
"a8301c1c44477f45447e51fbc890e449eec0c90a2d76b829f0e42b568ecad82d": {
"Name": "dockeruser_statsingest_19",
"EndpointID": "252249b7ce121a2a15f9ff2230db8c20d6e3f3523aed14abec13b050a88ea963",
"MacAddress": "02:42:0a:00:07:4a",
"IPv4Address": "10.0.7.74/24",
"IPv6Address": ""
},
"a85a977d07b5193d1bd307a513871f0ce82fd389d02e2755b313e26d2d572078": {
"Name": "dockeruser_logsingest_12",
"EndpointID": "8a03cc28e97a886f93b15a8dc6a19afa46e71028d81be05cb2145128611bff1a",
"MacAddress": "02:42:0a:00:07:58",
"IPv4Address": "10.0.7.88/24",
"IPv6Address": ""
},
"a8768e5608c9091556906f8ad4fbf560eeeba0cbe1518c724fceee5f8e5c232b": {
"Name": "dockeruser_counters_general_1",
"EndpointID": "a67b8df3a6e0a803600a8973c79976cf17d40dbe3262d54d75452581548dbfb1",
"MacAddress": "02:42:0a:00:07:65",
"IPv4Address": "10.0.7.101/24",
"IPv6Address": ""
},
"aa9a6f037dc7ee7c0a439a9fe7576d6ab975521b95bf2747491ed6d70c25cd36": {
"Name": "dockeruser_logsingest_5",
"EndpointID": "1bd000392f501e37e72c6d4bc47868cfe90e29167c67f3d160edfe2c129801fb",
"MacAddress": "02:42:0a:00:07:62",
"IPv4Address": "10.0.7.98/24",
"IPv6Address": ""
},
"ab9b61c235ce7c04b0a49a1fb99aa46e7bd3e6d2f7d30934b2c67dfd3873ec19": {
"Name": "dockeruser_elasticclient_2",
"EndpointID": "334bc59d11df382e3c5fd385b8ff59e45d93f96669eeaccb1995d2d59cd764f2",
"MacAddress": "02:42:0a:00:07:1a",
"IPv4Address": "10.0.7.26/24",
"IPv6Address": ""
},
"ac59d62906049d39433bb3e15e93ce71e6bf9fa683227b764d64aa24a4b1d76d": {
"Name": "dockeruser_webapi_4",
"EndpointID": "22e513b468ca9a313db6709154698a9736ca01b975d186bdca1ac2dc43e1bf5c",
"MacAddress": "02:42:0a:00:07:72",
"IPv4Address": "10.0.7.114/24",
"IPv6Address": ""
},
"ae3cc9af32d9783cd53021719d4a6f252f90d18687e52b6d65040ac39c986445": {
"Name": "dockeruser_categorization_5",
"EndpointID": "3d1611c881ae41559f599f31693066e5bea0d4e8b17a100fb86b0d89e8ec7ca8",
"MacAddress": "02:42:0a:00:07:34",
"IPv4Address": "10.0.7.52/24",
"IPv6Address": ""
},
"af65fd2642181a7104a44748ed7d974e1d941a870ae6914fbada532b86c4b349": {
"Name": "dockeruser_rethinkdb_1",
"EndpointID": "2b85ca1ee098e16cc2ad33f627e339b35bbc625cfc19079f4bd821e5a198dfad",
"MacAddress": "02:42:0a:00:07:1b",
"IPv4Address": "10.0.7.27/24",
"IPv6Address": ""
},
"b3f681ff3d258278a9caf6a619fa0a5efa5a9454aa52632b055219744f601a2c": {
"Name": "dockeruser_statsingest_11",
"EndpointID": "92980f6fe1d0de4ec95e36fb6204488445a1a17e17e019e054457604900081de",
"MacAddress": "02:42:0a:00:07:49",
"IPv4Address": "10.0.7.73/24",
"IPv6Address": ""
},
"b4585f9c6310d22248eb826c90c747390f1836a49c1d4c4131b1179cc4a64f62": {
"Name": "dockeruser_notificationmanager_3",
"EndpointID": "0518d2e26e5c3243a11b2a6eaeeba5ed0233b1b651a516d142234982980210ac",
"MacAddress": "02:42:0a:00:07:a4",
"IPv4Address": "10.0.7.164/24",
"IPv6Address": ""
},
"b8dc8dd4c308647e6789881ae77d8676f046adcf360eb633fa91991d50dfff49": {
"Name": "dockeruser_ws_queries_8",
"EndpointID": "3e8eeb167d06e2584c12a6c7822348f45a58db412d2b7f31e11baf6760860fd2",
"MacAddress": "02:42:0a:00:07:9b",
"IPv4Address": "10.0.7.155/24",
"IPv6Address": ""
},
"bcd2cc01b6f663708898f9abdb893e158acadedd87ac7b2912e6284d76f3dd28": {
"Name": "dockeruser_ws_alerts_7",
"EndpointID": "84f7cbafe4ece4cc1f5336542c2feb31b40f8298e19d368735734c044b2ea04e",
"MacAddress": "02:42:0a:00:07:7e",
"IPv4Address": "10.0.7.126/24",
"IPv6Address": ""
},
"bd0617d943bc3b265152365e8cfe070a0b3bdce06ad14353b102a6f453d77f79": {
"Name": "dockeruser_categorization_10",
"EndpointID": "c441f2aff9759d3c250186bbe0bcb8b476c82de946ceba9acddaa52b806f6f7b",
"MacAddress": "02:42:0a:00:07:33",
"IPv4Address": "10.0.7.51/24",
"IPv6Address": ""
},
"bfbed7c74f720101d59668c547f33f6280bff3e8f79975098a3cff17774aa377": {
"Name": "dockeruser_ws_statistics_4",
"EndpointID": "1d6cf1f4829204c2aea6d1033d662bef9a242f9d22821c0e5d9b107fd0a3fe74",
"MacAddress": "02:42:0a:00:07:86",
"IPv4Address": "10.0.7.134/24",
"IPv6Address": ""
},
"c155de0f0488b65dc4db12ec0bbcb302a05857c26f20a9772367ff66b08fa65f": {
"Name": "dockeruser_logsingest_4",
"EndpointID": "25ce74632a6350b9aa06e40dce6a0da9a318449e99d3fbdc093ccfb86f52ffaf",
"MacAddress": "02:42:0a:00:07:57",
"IPv4Address": "10.0.7.87/24",
"IPv6Address": ""
},
"c1586d159b5f1568546c920976ac810a4733ed33fcc8e8675d5a2e52b7b8dc05": {
"Name": "dockeruser_mysql_1",
"EndpointID": "fc4b34a19dd2e961e620dba9079bf62e27e12b6a5ffcfc7d3c58f49c2b3295da",
"MacAddress": "02:42:0a:00:07:02",
"IPv4Address": "10.0.7.2/24",
"IPv6Address": ""
},
"c3266baf39b20e2f4973b3c419bc0e682f1da27ab1a22f15c766c283f8273236": {
"Name": "dockeruser_ws_maintenance_4",
"EndpointID": "d9c786da0b9e178fc3ba1a9e001e742c7694b909c4208d4eb4547c405fb3135a",
"MacAddress": "02:42:0a:00:07:93",
"IPv4Address": "10.0.7.147/24",
"IPv6Address": ""
},
"c3ea1e2ece639e47e5bbe06d46c5b01d7220bd42ce03b240ce7ae679b31a3b7a": {
"Name": "dockeruser_tasksmanager_1",
"EndpointID": "6cbb6e3d3518368ee8092510ebbdaa620569cf73ba4cc07dcfc2d1b8ce1af8d0",
"MacAddress": "02:42:0a:00:07:4f",
"IPv4Address": "10.0.7.79/24",
"IPv6Address": ""
},
"c4d103e5e15c6f2d142f4c12c1c6096d4f774c570d54a041053f83aa50ffd63f": {
"Name": "dockeruser_ws_queries_7",
"EndpointID": "68a232faad08f2164dfcf55156e0648acb833c72a840ebc376cd0323f4615f31",
"MacAddress": "02:42:0a:00:07:99",
"IPv4Address": "10.0.7.153/24",
"IPv6Address": ""
},
"c5ef2f8b12f6f863ed0908d73db7d861719e86e9e32eff4640417af002cd8816": {
"Name": "dockeruser_ws_alerts_5",
"EndpointID": "9ca65a26785fd2df78db0288a8f024cc925a461afe96e1a7714618380bd2e372",
"MacAddress": "02:42:0a:00:07:77",
"IPv4Address": "10.0.7.119/24",
"IPv6Address": ""
},
"c78194f0b43cce05d40030620341e4738a77b1ef466c1e155cc6845ac4d5b237": {
"Name": "dockeruser_categorization_12",
"EndpointID": "7b9d060cdb9e06a160f8ed33ff38778246898f1f96165c652c7d11c2409edf06",
"MacAddress": "02:42:0a:00:07:2b",
"IPv4Address": "10.0.7.43/24",
"IPv6Address": ""
},
"ca6882b8b386da209ada63859b7e147dab44f2eae90cad2816257287b32fc243": {
"Name": "dockeruser_logsingest_11",
"EndpointID": "be5764498e5dee6c63c5c9e14465e5386eae39b9e6607c01fa18f461490ba705",
"MacAddress": "02:42:0a:00:07:54",
"IPv4Address": "10.0.7.84/24",
"IPv6Address": ""
},
"ca9066e2d5277750287d8229c724ea305ccc6c4ecaf79248c71d1e001e57f847": {
"Name": "dockeruser_elasticdata_4",
"EndpointID": "17ce11be7f05158ceb9143ee937e8db05c211978070c13a34bc3b3beaafe8dc2",
"MacAddress": "02:42:0a:00:07:12",
"IPv4Address": "10.0.7.18/24",
"IPv6Address": ""
},
"ce0288d26d27c14b30bbd865950e6170cee4fd64feb458726d6c4f2e5a5af474": {
"Name": "dockeruser_statsingest_13",
"EndpointID": "b6160a04666aa42b00ad0a5521ea924f238cdd09c2f04414d4b01a3f5fc2b38b",
"MacAddress": "02:42:0a:00:07:3e",
"IPv4Address": "10.0.7.62/24",
"IPv6Address": ""
},
"cf03140ba145fb075949314e9d85e67feb627ddb5c14fa5fbda7b1dc8ca75643": {
"Name": "dockeruser_ws_maintenance_10",
"EndpointID": "8e46c920cc20bb4e2f29c00acbfac2508209521f0728959231e57f5e87dd0028",
"MacAddress": "02:42:0a:00:07:8b",
"IPv4Address": "10.0.7.139/24",
"IPv6Address": ""
},
"cf963cf72a0a1d626504ac110e2ad82af4bdc5a1159eef2f1d25f18a472d1f3d": {
"Name": "dockeruser_statsingest_18",
"EndpointID": "83709be2fbc0173e3e729d445f0e14f0137db50ad0dc6c329a54bbbc2cdb39ff",
"MacAddress": "02:42:0a:00:07:47",
"IPv4Address": "10.0.7.71/24",
"IPv6Address": ""
},
"cfd54d488a16c22809bc94d3f81dc85630855488ac278d31ebbd133da67e132c": {
"Name": "dockeruser_kafka_3",
"EndpointID": "13f4c20ccebeaf9570cb63b6a67ede8989e3f8ef6b54b1d683597a13a1bcbd82",
"MacAddress": "02:42:0a:00:07:16",
"IPv4Address": "10.0.7.22/24",
"IPv6Address": ""
},
"d0edbef49b9b885df7a408090592377f9ae6150218d5880f479d36df570a7811": {
"Name": "dockeruser_ws_statistics_1",
"EndpointID": "2cf9972b9320fb3947fee37bdb30a75e41d180a5d4972d81c5f574d00f7e5d2a",
"MacAddress": "02:42:0a:00:07:89",
"IPv4Address": "10.0.7.137/24",
"IPv6Address": ""
},
"d3816d9676387449cff899f0b00157255e2df07838d2d349c3acdc9d6747225f": {
"Name": "dockeruser_batchprocessing_1",
"EndpointID": "e8151a2accda0948dfcfba2a5c485649b298487fcf1ad255891164100498ef5d",
"MacAddress": "02:42:0a:00:07:23",
"IPv4Address": "10.0.7.35/24",
"IPv6Address": ""
},
"d5b0646c7aa7a0110132277fb1a73933664e254dad425056e57fb597079ffc5f": {
"Name": "dockeruser_ws_alerts_8",
"EndpointID": "9b3704e28f71b384b453a6159f9c57452143227d7516d4ec05a035bd99470fac",
"MacAddress": "02:42:0a:00:07:7f",
"IPv4Address": "10.0.7.127/24",
"IPv6Address": ""
},
"d719325dac758042ded6a9facc490d6da7dba58220b0f7306ceea632584b252a": {
"Name": "dockeruser_counters_general_rawlogs_2",
"EndpointID": "ba6824708b1c1b3217dfab2616dc40ad2232c87b5b0174060ed4a39435ae5fb6",
"MacAddress": "02:42:0a:00:07:68",
"IPv4Address": "10.0.7.104/24",
"IPv6Address": ""
},
"d74fe47e4362be6ced2ee17c30cf3063860ba1e47c7a0b2cbec9fd8b1d3a191e": {
"Name": "dockeruser_counters_general_rawlogs_1",
"EndpointID": "fb5d3d8577452f1740bdcd003c3edf72f66d23bd41fc2374c7d7775f4dec1665",
"MacAddress": "02:42:0a:00:07:69",
"IPv4Address": "10.0.7.105/24",
"IPv6Address": ""
},
"d8af4416bf5f30490c684e51261ac767f75f17231b9b811e7ad7ab607e288d24": {
"Name": "dockeruser_couchbase1_1",
"EndpointID": "caccff5b996f38cb693d01050a837b5c694d0c31a404636232569b622d402811",
"MacAddress": "02:42:0a:00:07:36",
"IPv4Address": "10.0.7.54/24",
"IPv6Address": ""
},
"d983d853dd298b1df77644a76a7e0a318f92bd2e5873d02952082fa4438a1424": {
"Name": "dockeruser_elasticclient_3",
"EndpointID": "8f3d13123add13ba583af827fc95956897444d09a423c6551b7163fad2e5cbc2",
"MacAddress": "02:42:0a:00:07:18",
"IPv4Address": "10.0.7.24/24",
"IPv6Address": ""
},
"db81210d26109a2c670e89986f420e491144297ac1b5ce47129c42ddbfe40778": {
"Name": "dockeruser_ws_statistics_2",
"EndpointID": "29762d19b7f59d6b88b11a146d1754dae29304a3761a2f101e61bc3f4026b3a7",
"MacAddress": "02:42:0a:00:07:88",
"IPv4Address": "10.0.7.136/24",
"IPv6Address": ""
},
"dc4704441e489f0e36960697e3d09f6efd488bc9ada57f245ceb4a4b3c25a529": {
"Name": "dockeruser_ws_maintenance_8",
"EndpointID": "06cc95acb2ab8d0a39a2aadb05414fcbec03b514a167eff3b9eff8e66b9bdab7",
"MacAddress": "02:42:0a:00:07:90",
"IPv4Address": "10.0.7.144/24",
"IPv6Address": ""
},
"dd3b6343c232c21681a5c1c68d23b956792553e949e31a4d38a2b8d4a4942267": {
"Name": "dockeruser_categorization_17",
"EndpointID": "9a28b911f918e83c3e7412687076be758512e0a28232462c347e1f1d3aeb5920",
"MacAddress": "02:42:0a:00:07:27",
"IPv4Address": "10.0.7.39/24",
"IPv6Address": ""
},
"dfb23a6e783fe4523c61212a792f20262c8a6f8605a04503a31f98448844a565": {
"Name": "dockeruser_ws_queries_6",
"EndpointID": "3d10caa85203f004f5fba4c6c5050de1e9da0eb81b13d500256ba98807995d1b",
"MacAddress": "02:42:0a:00:07:97",
"IPv4Address": "10.0.7.151/24",
"IPv6Address": ""
},
"e2802bfd91430f0a4522679909e9912f4eb1726a5469b65ca3430927969b6b5e": {
"Name": "dockeruser_speedtestermaster_1",
"EndpointID": "2756c2bc2ea20f2a6ad797ee95cdd4fb13ae63ef7de6bfea991353ab7100d244",
"MacAddress": "02:42:0a:00:07:a1",
"IPv4Address": "10.0.7.161/24",
"IPv6Address": ""
},
"e2b350db1207ec13ab52c21abbf1c759c8d937f4664f76531422a4457539ec7a": {
"Name": "dockeruser_kafka_1",
"EndpointID": "29c19e111c905d15ab1f99b12bae5b12eaf77a228bf74fbb0b7d69f60c15ef3d",
"MacAddress": "02:42:0a:00:07:07",
"IPv4Address": "10.0.7.7/24",
"IPv6Address": ""
},
"e3a310a30e79be50018bea20f70d216930e6e0ccda81a25231a59d7bdeedc041": {
"Name": "dockeruser_speedtesterclient_1",
"EndpointID": "b90f2957c8e05da9e540e2f32d2482e055d39714f1a6ae4fe2edff6b3f273196",
"MacAddress": "02:42:0a:00:07:a2",
"IPv4Address": "10.0.7.162/24",
"IPv6Address": ""
},
"e5fa14d86b5e763edf873903c59b2b4e9ab0b0a8206e1811094f5785673b6517": {
"Name": "dockeruser_logsingest_10",
"EndpointID": "9635355c73de7af120de5b4c2b15a41120c1eb9a66296849f3297a4b51f961ed",
"MacAddress": "02:42:0a:00:07:5b",
"IPv4Address": "10.0.7.91/24",
"IPv6Address": ""
},
"e6a16cb897840b692dd324ee77a6173106243e2b21e6a03f5896636135aebf1e": {
"Name": "dockeruser_statsingest_12",
"EndpointID": "9638fce6ad005bc45aec278fd6dc855fb2669f9f848c216d129cdf285a98012f",
"MacAddress": "02:42:0a:00:07:3f",
"IPv4Address": "10.0.7.63/24",
"IPv6Address": ""
},
"e994fa5101cc124babc39929175224526235cc065c8b5472c1768a4f6dc64173": {
"Name": "dockeruser_statsingest_20",
"EndpointID": "b2845057b2568b4fc20289d87f8fe4edfe212b7ae8b7ec16eb0d21d9a093bdd3",
"MacAddress": "02:42:0a:00:07:4b",
"IPv4Address": "10.0.7.75/24",
"IPv6Address": ""
},
"ea4baee9aa4f9202721b807a199ab73e159373c6233b2219bf9a590555f8ef45": {
"Name": "dockeruser_ws_statistics_5",
"EndpointID": "2f259aac725ee4f6535ea207a602725a8121fb4008cdbda11c25ebf6578c502e",
"MacAddress": "02:42:0a:00:07:85",
"IPv4Address": "10.0.7.133/24",
"IPv6Address": ""
},
"eaa65696fd4c0b6ca8813f8621c92b36bd1216f89be87f367d5e578013fd5247": {
"Name": "dockeruser_logsingest_9",
"EndpointID": "43a033a1a67293ec7ad021253ec3d7649254a0e220f5d8395e9fe9432d2740d6",
"MacAddress": "02:42:0a:00:07:5d",
"IPv4Address": "10.0.7.93/24",
"IPv6Address": ""
},
"eedd44c557a55c54dde2b1b21e13a15a05e9f4c6b47a6a44379c10968f123c43": {
"Name": "dockeruser_batchprocessing_2",
"EndpointID": "1ad46e94edf6cb8e2594b2cfc3a332d13975548b4506f0475c8718c86ad210af",
"MacAddress": "02:42:0a:00:07:22",
"IPv4Address": "10.0.7.34/24",
"IPv6Address": ""
},
"efe815862bd75a39876d6b13273d6f501c5f1d66cfd1157500d81c2b55c62371": {
"Name": "dockeruser_elasticmaster_1",
"EndpointID": "92a75cfe829e1d42da52bcc44cb6cfff6266d2e04c9d0cbf1c2e6034837b20ff",
"MacAddress": "02:42:0a:00:07:0b",
"IPv4Address": "10.0.7.11/24",
"IPv6Address": ""
},
"ep-4bf64b09517791ba6207ff9ceef256f7180eebf1f9142a0629daee9493b887f9": {
"Name": "dockeruser_kafka-manager_1",
"EndpointID": "4bf64b09517791ba6207ff9ceef256f7180eebf1f9142a0629daee9493b887f9",
"MacAddress": "02:42:0a:00:07:1f",
"IPv4Address": "10.0.7.31/24",
"IPv6Address": ""
},
"ep-cd517c14d976d1d578b63371f4de29a9ec3f47aed718a0654cecb06f8b4cf309": {
"Name": "dockeruser_elasticdata_10",
"EndpointID": "cd517c14d976d1d578b63371f4de29a9ec3f47aed718a0654cecb06f8b4cf309",
"MacAddress": "02:42:0a:00:07:0c",
"IPv4Address": "10.0.7.12/24",
"IPv6Address": ""
},
"ep-ed1233f021951343aef5f1aee086dd9463f534814ece714f0b15c32ff45531a6": {
"Name": "dockeruser_tasksmanager_1",
"EndpointID": "ed1233f021951343aef5f1aee086dd9463f534814ece714f0b15c32ff45531a6",
"MacAddress": "02:42:0a:00:07:74",
"IPv4Address": "10.0.7.116/24",
"IPv6Address": ""
},
"f25f6e319b39aa3fe19573e348dc6d29ebf9972cea82ed198e4d8a9081480305": {
"Name": "dockeruser_wslb_2",
"EndpointID": "92fab6dfddaf508df0275c3ba05e54b2dccd9fc40f88c8e8b2053bfbb4031504",
"MacAddress": "02:42:0a:00:07:1d",
"IPv4Address": "10.0.7.29/24",
"IPv6Address": ""
},
"f3548bb5b159d234fd10d23a2ca6591af17d46235fd2bf2fb88014cea099c10f": {
"Name": "dockeruser_categorization_7",
"EndpointID": "3b20036ea871efb16bcfcb91aca9ee802081cd3c54fc942a1307c224199a74c2",
"MacAddress": "02:42:0a:00:07:30",
"IPv4Address": "10.0.7.48/24",
"IPv6Address": ""
},
"f66b72797c44f17c623ab53d4f6c1c7bb7627efa58487802de0239548fa68938": {
"Name": "dockeruser_ws_queries_2",
"EndpointID": "5bfe26074e762f642f4a952eab0d4cccac5d1713eeddf640d1c4366df91df467",
"MacAddress": "02:42:0a:00:07:a0",
"IPv4Address": "10.0.7.160/24",
"IPv6Address": ""
},
"f6a2719b28b0151c756f1cdaed82df463934080662846850743080cdb04d3852": {
"Name": "dockeruser_elasticdata_7",
"EndpointID": "fa6b78a6240ff22764f69490a86454faa172cdf248813eae225981a68ac57675",
"MacAddress": "02:42:0a:00:07:15",
"IPv4Address": "10.0.7.21/24",
"IPv6Address": ""
},
"fa68f7db774a94080327bf8beb4a4c2ff142495074e0545d77bd2a2f40d3b626": {
"Name": "dockeruser_ws_statistics_7",
"EndpointID": "18d8714aab66122b51e11a4025ea802ee111040d66e88d201062e1da3394597f",
"MacAddress": "02:42:0a:00:07:84",
"IPv4Address": "10.0.7.132/24",
"IPv6Address": ""
},
"fb3ee58480cfad5dad2f2e642c1f8b27d2add762a804853f696ae0450e850547": {
"Name": "dockeruser_ws_alerts_1",
"EndpointID": "8bbc88550563e84f5fb3136ca202d09cc9a01356075d3a9ad5dc356796bfffa0",
"MacAddress": "02:42:0a:00:07:78",
"IPv4Address": "10.0.7.120/24",
"IPv6Address": ""
},
"fc68f5414e78234b5ccceb7d0e6718244466ddb59ef58067f4b0e821c1953b53": {
"Name": "dockeruser_statsingest_3",
"EndpointID": "e9d72c68d56a8a6bbcdffaf28851787a6725cf68c4e1710687cb0bcda1b64a55",
"MacAddress": "02:42:0a:00:07:4c",
"IPv4Address": "10.0.7.76/24",
"IPv6Address": ""
},
"ffc2b71f578bb36e39505c40ffd6a8481159bf8054bb7c74fa695b45902ce142": {
"Name": "dockeruser_ws_queries_5",
"EndpointID": "df012f56334e324650e63382ac5ccab454f0bd98ac8cf1c968b1a9a163a99645",
"MacAddress": "02:42:0a:00:07:9e",
"IPv4Address": "10.0.7.158/24",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]

@mrjana
Contributor

mrjana commented Oct 11, 2016

@groyee Just to confirm, you are not running swarm mode but you are using docker/swarm. Is that correct? Which means you are setting up the cluster using --cluster-store and --cluster-advertise options and are using an external k/v store. Is that correct? Can you please provide daemon logs from the node where the container which you couldn't ping was running?

@mrjana mrjana removed the area/swarm label Oct 11, 2016
@mrjana
Contributor

mrjana commented Oct 11, 2016

@groyee Also please provide daemon logs from the node which was running the container from which you attempted the unsuccessful ping.

@groyee
Author

groyee commented Oct 11, 2016

Yes, this is correct. No swarm mode. I am using Swarm standalone + Consul, with --cluster-store and --cluster-advertise.
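For context, a classic Swarm + Consul overlay setup like the one being confirmed here typically starts each daemon with flags along these lines. This is an illustrative sketch only; the Consul address and interface name are placeholders, not values from this cluster:

```shell
# Illustrative only -- the Consul address and interface are assumptions.
# --cluster-store points the daemon at the external k/v store;
# --cluster-advertise tells peers how to reach this daemon.
dockerd \
  --cluster-store=consul://consul.example.internal:8500 \
  --cluster-advertise=eth0:2376
```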

Here are the daemon logs since today. Please let me know if you need earlier daemon logs:

daemon logs from node running dockeruser_tasksmanager_1 container (the container that cannot ping)

docker-user@useralerts2-prod:~$ sudo journalctl -u docker.service --since today
-- Logs begin at Wed 2016-09-14 01:47:01 UTC, end at Tue 2016-10-11 17:48:56 UTC. --
Oct 11 00:39:36 useralerts2-prod docker[1146]: time="2016-10-11T00:39:36.251153234Z" level=info msg="2016/10/11 00:39:36 [INFO] serf: EventMemberFailed: batchprocessing1-prod 192.168.0.18\n"
Oct 11 00:40:10 useralerts2-prod docker[1146]: time="2016-10-11T00:40:10.232601603Z" level=info msg="2016/10/11 00:40:10 [INFO] serf: EventMemberJoin: batchprocessing1-prod 192.168.0.18\n"
Oct 11 01:17:36 useralerts2-prod docker[1146]: time="2016-10-11T01:17:36.099041989Z" level=info msg="2016/10/11 01:17:36 [INFO] memberlist: Marking batchprocessing1-prod as failed, suspect timeout reached\n"
Oct 11 01:17:36 useralerts2-prod docker[1146]: time="2016-10-11T01:17:36.099144193Z" level=info msg="2016/10/11 01:17:36 [INFO] serf: EventMemberFailed: batchprocessing1-prod 192.168.0.18\n"
Oct 11 01:17:41 useralerts2-prod docker[1146]: time="2016-10-11T01:17:41.394085366Z" level=info msg="2016/10/11 01:17:41 [INFO] serf: EventMemberJoin: batchprocessing1-prod 192.168.0.18\n"
Oct 11 14:11:11 useralerts2-prod docker[1146]: time="2016-10-11T14:11:11.030720515Z" level=info msg="2016/10/11 14:11:11 [INFO] serf: EventMemberFailed: webapi3-prod 192.168.0.31\n"
Oct 11 14:11:15 useralerts2-prod docker[1146]: time="2016-10-11T14:11:15.652945897Z" level=info msg="2016/10/11 14:11:15 [INFO] serf: EventMemberFailed: couchbase1-prod 192.168.0.21\n"
Oct 11 14:11:24 useralerts2-prod docker[1146]: time="2016-10-11T14:11:24.094641111Z" level=info msg="2016/10/11 14:11:24 [INFO] serf: EventMemberJoin: couchbase1-prod 192.168.0.21\n"
Oct 11 14:11:26 useralerts2-prod docker[1146]: time="2016-10-11T14:11:26.094084144Z" level=info msg="2016/10/11 14:11:26 [INFO] memberlist: Suspect statsingest7-prod has failed, no acks received\n"
Oct 11 14:11:28 useralerts2-prod docker[1146]: time="2016-10-11T14:11:28.505508708Z" level=info msg="2016/10/11 14:11:28 [INFO] serf: EventMemberFailed: statsingest7-prod 192.168.0.70\n"
Oct 11 14:11:29 useralerts2-prod docker[1146]: time="2016-10-11T14:11:29.108283471Z" level=info msg="2016/10/11 14:11:29 [INFO] serf: EventMemberJoin: statsingest7-prod 192.168.0.70\n"
Oct 11 14:11:29 useralerts2-prod docker[1146]: time="2016-10-11T14:11:29.749602093Z" level=info msg="2016/10/11 14:11:29 [INFO] serf: EventMemberFailed: webapi2-prod 192.168.0.30\n"
Oct 11 14:11:31 useralerts2-prod docker[1146]: time="2016-10-11T14:11:31.068815327Z" level=info msg="2016/10/11 14:11:31 [INFO] serf: EventMemberFailed: ws1-prod 192.168.0.34\n"
Oct 11 14:11:33 useralerts2-prod docker[1146]: time="2016-10-11T14:11:33.451408468Z" level=info msg="2016/10/11 14:11:33 [INFO] serf: EventMemberFailed: swarm-master1-prod 192.168.0.7\n"
Oct 11 14:11:33 useralerts2-prod docker[1146]: time="2016-10-11T14:11:33.783689088Z" level=info msg="2016/10/11 14:11:33 [INFO] serf: EventMemberFailed: logsingest3-prod 192.168.0.65\n"
Oct 11 14:11:34 useralerts2-prod docker[1146]: time="2016-10-11T14:11:34.723074022Z" level=info msg="2016/10/11 14:11:34 [INFO] serf: EventMemberFailed: elasticdata5-prod 192.168.0.48\n"
Oct 11 14:11:36 useralerts2-prod docker[1146]: time="2016-10-11T14:11:36.595342560Z" level=info msg="2016/10/11 14:11:36 [INFO] serf: EventMemberFailed: webapi1-prod 192.168.0.29\n"
Oct 11 14:11:37 useralerts2-prod docker[1146]: time="2016-10-11T14:11:37.388689242Z" level=info msg="2016/10/11 14:11:37 [INFO] serf: EventMemberJoin: webapi2-prod 192.168.0.30\n"
Oct 11 14:11:42 useralerts2-prod docker[1146]: time="2016-10-11T14:11:42.809738239Z" level=info msg="2016/10/11 14:11:42 [INFO] serf: EventMemberJoin: webapi1-prod 192.168.0.29\n"
Oct 11 14:11:46 useralerts2-prod docker[1146]: time="2016-10-11T14:11:46.439906766Z" level=info msg="2016/10/11 14:11:46 [INFO] serf: EventMemberJoin: elasticdata5-prod 192.168.0.48\n"
Oct 11 14:11:46 useralerts2-prod docker[1146]: time="2016-10-11T14:11:46.550359061Z" level=info msg="2016/10/11 14:11:46 [INFO] serf: EventMemberJoin: logsingest3-prod 192.168.0.65\n"
Oct 11 14:11:49 useralerts2-prod docker[1146]: time="2016-10-11T14:11:49.668897538Z" level=info msg="2016/10/11 14:11:49 [INFO] serf: EventMemberJoin: swarm-master1-prod 192.168.0.7\n"
Oct 11 14:12:07 useralerts2-prod docker[1146]: time="2016-10-11T14:12:07.343261156Z" level=info msg="2016/10/11 14:12:07 [INFO] serf: EventMemberJoin: webapi3-prod 192.168.0.31\n"
Oct 11 14:12:09 useralerts2-prod docker[1146]: time="2016-10-11T14:12:09.549938256Z" level=info msg="2016/10/11 14:12:09 [INFO] memberlist: Marking elasticdata5-prod as failed, suspect timeout reached\n"
Oct 11 14:12:09 useralerts2-prod docker[1146]: time="2016-10-11T14:12:09.550446174Z" level=info msg="2016/10/11 14:12:09 [INFO] serf: EventMemberFailed: elasticdata5-prod 192.168.0.48\n"
Oct 11 14:12:24 useralerts2-prod docker[1146]: time="2016-10-11T14:12:24.639152596Z" level=info msg="2016/10/11 14:12:24 [INFO] serf: EventMemberJoin: ws1-prod 192.168.0.34\n"
Oct 11 14:12:50 useralerts2-prod docker[1146]: time="2016-10-11T14:12:50.701009828Z" level=info msg="2016/10/11 14:12:50 [INFO] serf: EventMemberJoin: elasticdata5-prod 192.168.0.48\n"
Oct 11 14:22:55 useralerts2-prod docker[1146]: time="2016-10-11T14:22:55.149577939Z" level=info msg="2016/10/11 14:22:55 [INFO] serf: EventMemberFailed: useralerts1-prod 192.168.0.9\n"
Oct 11 14:22:56 useralerts2-prod docker[1146]: time="2016-10-11T14:22:56.192570444Z" level=info msg="2016/10/11 14:22:56 [INFO] serf: EventMemberFailed: elasticclient3-prod 192.168.0.28\n"
Oct 11 14:22:56 useralerts2-prod docker[1146]: time="2016-10-11T14:22:56.669127070Z" level=info msg="2016/10/11 14:22:56 [INFO] serf: EventMemberJoin: elasticclient3-prod 192.168.0.28\n"
Oct 11 14:22:56 useralerts2-prod docker[1146]: time="2016-10-11T14:22:56.825706665Z" level=info msg="2016/10/11 14:22:56 [INFO] serf: EventMemberJoin: useralerts1-prod 192.168.0.9\n"
Oct 11 15:06:29 useralerts2-prod docker[1146]: time="2016-10-11T15:06:29.399388516Z" level=error msg="Peer delete failed in the driver: could not delete fdb entry into the sandbox: could not find the neighbor entry to delete\n"
Oct 11 17:44:29 useralerts2-prod docker[1146]: time="2016-10-11T17:44:29.033182878Z" level=info msg="2016/10/11 17:44:29 [INFO] serf: EventMemberFailed: webapi2-prod 192.168.0.30\n"
Oct 11 17:44:29 useralerts2-prod docker[1146]: time="2016-10-11T17:44:29.983200995Z" level=info msg="2016/10/11 17:44:29 [INFO] serf: EventMemberJoin: webapi2-prod 192.168.0.30\n"
Oct 11 17:44:32 useralerts2-prod docker[1146]: time="2016-10-11T17:44:32.030003246Z" level=info msg="2016/10/11 17:44:32 [INFO] serf: EventMemberFailed: useralerts1-prod 192.168.0.9\n"
Oct 11 17:44:37 useralerts2-prod docker[1146]: time="2016-10-11T17:44:37.555296456Z" level=info msg="2016/10/11 17:44:37 [INFO] serf: attempting reconnect to useralerts1-prod 192.168.0.9:7946\n"
Oct 11 17:44:37 useralerts2-prod docker[1146]: time="2016-10-11T17:44:37.769924338Z" level=info msg="2016/10/11 17:44:37 [INFO] serf: EventMemberJoin: useralerts1-prod 192.168.0.9\n"

daemon logs from node running dockeruser_kafka_1 container (the container to which i cannot ping from the above container)

docker-user@kafka1-prod:~$ sudo journalctl -u docker.service --since today
-- Logs begin at Mon 2016-10-10 09:22:33 UTC, end at Tue 2016-10-11 17:53:26 UTC. --
Oct 11 00:39:36 kafka1-prod docker[891]: time="2016-10-11T00:39:36.184946496Z" level=info msg="2016/10/11 00:39:36 [INFO] memberlist: Marking batchprocessing1-prod as failed, suspect timeout reached\n"
Oct 11 00:39:36 kafka1-prod docker[891]: time="2016-10-11T00:39:36.186307727Z" level=info msg="2016/10/11 00:39:36 [INFO] serf: EventMemberFailed: batchprocessing1-prod 192.168.0.18\n"
Oct 11 00:40:10 kafka1-prod docker[891]: time="2016-10-11T00:40:10.319173090Z" level=info msg="2016/10/11 00:40:10 [INFO] serf: EventMemberJoin: batchprocessing1-prod 192.168.0.18\n"
Oct 11 01:17:36 kafka1-prod docker[891]: time="2016-10-11T01:17:36.177662732Z" level=info msg="2016/10/11 01:17:36 [INFO] memberlist: Marking batchprocessing1-prod as failed, suspect timeout reached\n"
Oct 11 01:17:36 kafka1-prod docker[891]: time="2016-10-11T01:17:36.179122465Z" level=info msg="2016/10/11 01:17:36 [INFO] serf: EventMemberFailed: batchprocessing1-prod 192.168.0.18\n"
Oct 11 01:17:41 kafka1-prod docker[891]: time="2016-10-11T01:17:41.406466896Z" level=info msg="2016/10/11 01:17:41 [INFO] serf: EventMemberJoin: batchprocessing1-prod 192.168.0.18\n"
Oct 11 14:11:11 kafka1-prod docker[891]: time="2016-10-11T14:11:11.073803374Z" level=info msg="2016/10/11 14:11:11 [INFO] serf: EventMemberFailed: webapi3-prod 192.168.0.31\n"
Oct 11 14:11:15 kafka1-prod docker[891]: time="2016-10-11T14:11:15.651357789Z" level=info msg="2016/10/11 14:11:15 [INFO] serf: EventMemberFailed: couchbase1-prod 192.168.0.21\n"
Oct 11 14:11:24 kafka1-prod docker[891]: time="2016-10-11T14:11:24.517309897Z" level=info msg="2016/10/11 14:11:24 [INFO] serf: EventMemberJoin: couchbase1-prod 192.168.0.21\n"
Oct 11 14:11:28 kafka1-prod docker[891]: time="2016-10-11T14:11:28.432764911Z" level=info msg="2016/10/11 14:11:28 [INFO] serf: EventMemberFailed: statsingest7-prod 192.168.0.70\n"
Oct 11 14:11:28 kafka1-prod docker[891]: time="2016-10-11T14:11:28.791164666Z" level=info msg="2016/10/11 14:11:28 [INFO] serf: EventMemberFailed: anomaly1-prod 192.168.0.20\n"
Oct 11 14:11:29 kafka1-prod docker[891]: time="2016-10-11T14:11:29.093903286Z" level=info msg="2016/10/11 14:11:29 [INFO] serf: EventMemberJoin: statsingest7-prod 192.168.0.70\n"
Oct 11 14:11:29 kafka1-prod docker[891]: time="2016-10-11T14:11:29.197547587Z" level=info msg="2016/10/11 14:11:29 [INFO] serf: EventMemberJoin: anomaly1-prod 192.168.0.20\n"
Oct 11 14:11:29 kafka1-prod docker[891]: time="2016-10-11T14:11:29.727482750Z" level=info msg="2016/10/11 14:11:29 [INFO] serf: EventMemberFailed: webapi2-prod 192.168.0.30\n"
Oct 11 14:11:31 kafka1-prod docker[891]: time="2016-10-11T14:11:31.093208366Z" level=info msg="2016/10/11 14:11:31 [INFO] serf: EventMemberFailed: ws1-prod 192.168.0.34\n"
Oct 11 14:11:33 kafka1-prod docker[891]: time="2016-10-11T14:11:33.348174020Z" level=info msg="2016/10/11 14:11:33 [INFO] serf: EventMemberFailed: swarm-master1-prod 192.168.0.7\n"
Oct 11 14:11:33 kafka1-prod docker[891]: time="2016-10-11T14:11:33.649839516Z" level=info msg="2016/10/11 14:11:33 [INFO] serf: EventMemberFailed: logsingest3-prod 192.168.0.65\n"
Oct 11 14:11:34 kafka1-prod docker[891]: time="2016-10-11T14:11:34.793668105Z" level=info msg="2016/10/11 14:11:34 [INFO] serf: EventMemberFailed: elasticdata5-prod 192.168.0.48\n"
Oct 11 14:11:36 kafka1-prod docker[891]: time="2016-10-11T14:11:36.321806125Z" level=info msg="2016/10/11 14:11:36 [INFO] memberlist: Marking webapi1-prod as failed, suspect timeout reached\n"
Oct 11 14:11:36 kafka1-prod docker[891]: time="2016-10-11T14:11:36.322067131Z" level=info msg="2016/10/11 14:11:36 [INFO] serf: EventMemberFailed: webapi1-prod 192.168.0.29\n"
Oct 11 14:11:37 kafka1-prod docker[891]: time="2016-10-11T14:11:37.387189773Z" level=info msg="2016/10/11 14:11:37 [INFO] serf: EventMemberJoin: webapi2-prod 192.168.0.30\n"
Oct 11 14:11:42 kafka1-prod docker[891]: time="2016-10-11T14:11:42.673403509Z" level=info msg="2016/10/11 14:11:42 [INFO] serf: EventMemberJoin: webapi1-prod 192.168.0.29\n"
Oct 11 14:11:46 kafka1-prod docker[891]: time="2016-10-11T14:11:46.338349557Z" level=info msg="2016/10/11 14:11:46 [INFO] serf: EventMemberJoin: logsingest3-prod 192.168.0.65\n"
Oct 11 14:11:49 kafka1-prod docker[891]: time="2016-10-11T14:11:49.853060869Z" level=info msg="2016/10/11 14:11:49 [INFO] serf: EventMemberJoin: swarm-master1-prod 192.168.0.7\n"
Oct 11 14:12:07 kafka1-prod docker[891]: time="2016-10-11T14:12:07.327468517Z" level=info msg="2016/10/11 14:12:07 [INFO] serf: EventMemberJoin: webapi3-prod 192.168.0.31\n"
Oct 11 14:12:24 kafka1-prod docker[891]: time="2016-10-11T14:12:24.518368850Z" level=info msg="2016/10/11 14:12:24 [INFO] serf: EventMemberJoin: ws1-prod 192.168.0.34\n"
Oct 11 14:12:25 kafka1-prod docker[891]: time="2016-10-11T14:12:25.267334272Z" level=info msg="2016/10/11 14:12:25 [INFO] serf: EventMemberJoin: elasticdata5-prod 192.168.0.48\n"
Oct 11 14:22:55 kafka1-prod docker[891]: time="2016-10-11T14:22:55.148666073Z" level=info msg="2016/10/11 14:22:55 [INFO] serf: EventMemberFailed: useralerts1-prod 192.168.0.9\n"
Oct 11 14:22:56 kafka1-prod docker[891]: time="2016-10-11T14:22:56.024863296Z" level=info msg="2016/10/11 14:22:56 [INFO] serf: EventMemberFailed: elasticclient3-prod 192.168.0.28\n"
Oct 11 14:22:56 kafka1-prod docker[891]: time="2016-10-11T14:22:56.810871720Z" level=info msg="2016/10/11 14:22:56 [INFO] serf: EventMemberJoin: useralerts1-prod 192.168.0.9\n"
Oct 11 14:22:56 kafka1-prod docker[891]: time="2016-10-11T14:22:56.811155527Z" level=info msg="2016/10/11 14:22:56 [INFO] serf: EventMemberJoin: elasticclient3-prod 192.168.0.28\n"
Oct 11 15:06:29 kafka1-prod docker[891]: time="2016-10-11T15:06:29.558480121Z" level=error msg="Peer delete failed in the driver: could not delete fdb entry into the sandbox: could not find the neighbor entry to delete\n"
Oct 11 17:44:29 kafka1-prod docker[891]: time="2016-10-11T17:44:29.010980380Z" level=info msg="2016/10/11 17:44:29 [INFO] serf: EventMemberFailed: webapi2-prod 192.168.0.30\n"
Oct 11 17:44:29 kafka1-prod docker[891]: time="2016-10-11T17:44:29.950569496Z" level=info msg="2016/10/11 17:44:29 [INFO] serf: EventMemberJoin: webapi2-prod 192.168.0.30\n"
Oct 11 17:44:32 kafka1-prod docker[891]: time="2016-10-11T17:44:32.093713004Z" level=info msg="2016/10/11 17:44:32 [INFO] serf: EventMemberFailed: useralerts1-prod 192.168.0.9\n"
Oct 11 17:44:37 kafka1-prod docker[891]: time="2016-10-11T17:44:37.850870759Z" level=info msg="2016/10/11 17:44:37 [INFO] serf: EventMemberJoin: useralerts1-prod 192.168.0.9\n"

@groyee
Author

groyee commented Oct 11, 2016

Here is one more interesting log:

I am running this on the node that cannot ping:

docker-user@useralerts2-prod:~$ sudo journalctl -u docker.service | grep kafka1-prod
Sep 14 01:48:25 useralerts2-prod docker[6767]: time="2016-09-14T01:48:25.034529237Z" level=info msg="2016/09/14 01:48:25 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"
Sep 29 10:29:33 useralerts2-prod docker[6767]: time="2016-09-29T10:29:33.861437624Z" level=info msg="2016/09/29 10:29:33 [INFO] serf: EventMemberFailed: kafka1-prod 192.168.0.22\n"
Sep 29 10:29:41 useralerts2-prod docker[6767]: time="2016-09-29T10:29:41.400688478Z" level=info msg="2016/09/29 10:29:41 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"
Oct 07 04:25:46 useralerts2-prod docker[6767]: time="2016-10-07T04:25:46.229751763Z" level=info msg="2016/10/07 04:25:46 [INFO] serf: EventMemberFailed: kafka1-prod 192.168.0.22\n"
Oct 07 04:25:46 useralerts2-prod docker[6767]: time="2016-10-07T04:25:46.932368722Z" level=info msg="2016/10/07 04:25:46 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"
Oct 07 05:05:16 useralerts2-prod docker[6767]: time="2016-10-07T05:05:16.021448274Z" level=info msg="2016/10/07 05:05:16 [INFO] serf: EventMemberFailed: kafka1-prod 192.168.0.22\n"
Oct 07 05:05:38 useralerts2-prod docker[6767]: time="2016-10-07T05:05:38.745619563Z" level=info msg="2016/10/07 05:05:38 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"
Oct 07 06:43:20 useralerts2-prod docker[6767]: time="2016-10-07T06:43:20.820754229Z" level=info msg="2016/10/07 06:43:20 [INFO] serf: EventMemberFailed: kafka1-prod 192.168.0.22\n"
Oct 07 06:43:58 useralerts2-prod docker[6767]: time="2016-10-07T06:43:58.476364701Z" level=info msg="2016/10/07 06:43:58 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"
Oct 07 22:21:14 useralerts2-prod docker[54239]: time="2016-10-07T22:21:14.643665506Z" level=info msg="2016/10/07 22:21:14 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"
Oct 10 09:21:58 useralerts2-prod docker[54239]: time="2016-10-10T09:21:58.632605205Z" level=info msg="2016/10/10 09:21:58 [INFO] memberlist: Suspect kafka1-prod has failed, no acks received\n"
Oct 10 09:22:00 useralerts2-prod docker[54239]: time="2016-10-10T09:22:00.633933428Z" level=info msg="2016/10/10 09:22:00 [INFO] memberlist: Suspect kafka1-prod has failed, no acks received\n"
Oct 10 09:22:06 useralerts2-prod docker[54239]: time="2016-10-10T09:22:06.031941901Z" level=info msg="2016/10/10 09:22:06 [INFO] serf: EventMemberFailed: kafka1-prod 192.168.0.22\n"
Oct 10 09:22:58 useralerts2-prod docker[54239]: time="2016-10-10T09:22:58.229124981Z" level=info msg="2016/10/10 09:22:58 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"
Oct 10 21:16:32 useralerts2-prod docker[63894]: time="2016-10-10T21:16:32.667610826Z" level=info msg="2016/10/10 21:16:32 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"
Oct 10 23:33:07 useralerts2-prod docker[1146]: time="2016-10-10T23:33:07.146403267Z" level=info msg="2016/10/10 23:33:07 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"

I am running this command:

sudo journalctl -u docker.service | grep kafka1-prod

I see that the last message is:

Oct 10 23:33:07 useralerts2-prod docker[1146]: time="2016-10-10T23:33:07.146403267Z" level=info msg="2016/10/10 23:33:07 [INFO] serf: EventMemberJoin: kafka1-prod 192.168.0.22\n"

It looks like it is OK. From this log I would expect that containers running on node useralerts2-prod would be able to ping containers running on node kafka1-prod without issue.

@mrjana
Contributor

mrjana commented Oct 11, 2016

@groyee Since you provided the logs only since today, I want to make sure the problem happened today. Was the container dockeruser_tasksmanager_1 able to ping dockeruser_kafka_1 successfully today and then stopped working at some point today? If not, can you get me the logs from when it stopped working?

@groyee
Author

groyee commented Oct 11, 2016

It happened 2 or 3 days ago (I believe) and it has been in this state ever since.

I can provide the full daemon logs from the last several days, but it will be a very large log. Should I post it here?

@mrjana
Contributor

mrjana commented Oct 11, 2016

You can do an attachment. Or you can post a link to a Gist

@groyee
Author

groyee commented Oct 11, 2016

@mrjana
Contributor

mrjana commented Oct 12, 2016

@groyee There is a lot of node flapping happening in the serf gossip cluster from what I can see in the logs. Is there congestion in the underlying network? In general I see signs of network congestion in the logs, and it is affecting a number of different functions in Docker that require a reliable network. Some excerpts:

Gossip flapping

Oct 01 05:14:45 useralerts2-prod docker[6767]: time="2016-10-01T05:14:45.860209197Z" level=info msg="2016/10/01 05:14:45 [INFO] memberlist: Suspect batchprocessing1-prod has failed, no acks received\n"
Oct 01 05:24:16 useralerts2-prod docker[6767]: time="2016-10-01T05:24:16.860222137Z" level=info msg="2016/10/01 05:24:16 [INFO] memberlist: Suspect statsingest3-prod has failed, no acks received\n"
Oct 01 05:24:26 useralerts2-prod docker[6767]: time="2016-10-01T05:24:26.860835866Z" level=info msg="2016/10/01 05:24:26 [INFO] memberlist: Marking statsingest3-prod as failed, suspect timeout reached\n"
Oct 01 05:24:26 useralerts2-prod docker[6767]: time="2016-10-01T05:24:26.860921469Z" level=info msg="2016/10/01 05:24:26 [INFO] serf: EventMemberFailed: statsingest3-prod 192.168.0.38\n"
Oct 01 05:24:27 useralerts2-prod docker[6767]: time="2016-10-01T05:24:27.618217243Z" level=info msg="2016/10/01 05:24:27 [INFO] serf: EventMemberJoin: statsingest3-prod 192.168.0.38\n"
Oct 01 06:55:38 useralerts2-prod docker[6767]: time="2016-10-01T06:55:38.196556514Z" level=info msg="2016/10/01 06:55:38 [INFO] serf: EventMemberFailed: webapi1-prod 192.168.0.29\n"
Oct 01 06:55:38 useralerts2-prod docker[6767]: time="2016-10-01T06:55:38.616966622Z" level=info msg="2016/10/01 06:55:38 [INFO] serf: EventMemberJoin: webapi1-prod 192.168.0.29\n"
Oct 01 06:55:38 useralerts2-prod docker[6767]: time="2016-10-01T06:55:38.699653096Z" level=info msg="2016/10/01 06:55:38 [INFO] serf: EventMemberFailed: categorization9-prod 192.168.0.63\n"
Oct 01 06:55:39 useralerts2-prod docker[6767]: time="2016-10-01T06:55:39.024589486Z" level=info msg="2016/10/01 06:55:39 [INFO] serf: EventMemberFailed: useralerts1-prod 192.168.0.9\n"
Oct 01 06:55:40 useralerts2-prod docker[6767]: time="2016-10-01T06:55:40.053306431Z" level=info msg="2016/10/01 06:55:40 [INFO] serf: EventMemberJoin: useralerts1-prod 192.168.0.9\n"
Oct 01 06:55:48 useralerts2-prod docker[6767]: time="2016-10-01T06:55:48.055900301Z" level=info msg="2016/10/01 06:55:48 [INFO] serf: EventMemberJoin: categorization9-prod 192.168.0.63\n"

Node discovery

Oct 02 23:17:40 useralerts2-prod docker[6767]: time="2016-10-02T23:17:40.245038676Z" level=warning msg="Registering as \"192.168.0.25:2376\" in discovery failed: cannot set or renew session for ttl, unable to operate on sessions"
Oct 03 10:36:57 useralerts2-prod docker[6767]: time="2016-10-03T10:36:57.249617282Z" level=error msg="discovery error: Unexpected watch error"
Oct 03 10:37:00 useralerts2-prod docker[6767]: time="2016-10-03T10:37:00.045066417Z" level=warning msg="Registering as \"192.168.0.25:2376\" in discovery failed: cannot set or renew session for ttl, unable to operate on sessions"


@groyee
Author

groyee commented Oct 12, 2016

We are using the standard Azure network.

Pinging Azure internal IP always works. This issue happens only with the overlay network.

Also, I don't really understand how this node can ping every other container in the swarm cluster, or why restarting the container, the Docker daemon, or even the entire machine doesn't help.
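Incidentally, the `Peer delete failed in the driver: could not delete fdb entry` error that shows up in both daemon logs above points at the VXLAN forwarding database inside the overlay network's namespace. A hedged way to inspect it: the `1-<network-id>` namespace naming is an assumption about how libnetwork names overlay sandboxes, and the MAC/underlay IP below are dockeruser_kafka_1's values from the inspect output and logs in this thread.

```shell
# List the network namespaces the daemon created; overlay sandboxes
# are typically named 1-<short network id> (assumption).
sudo ls /var/run/docker/netns/

# Inside that namespace, dump the VXLAN forwarding database and check
# whether the remote container's MAC (02:42:0a:00:07:07) has an entry
# pointing at kafka1-prod's underlay IP (192.168.0.22).
sudo nsenter --net=/var/run/docker/netns/1-<network-id> bridge fdb show
```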

@mrjana
Contributor

mrjana commented Oct 12, 2016

Can it ping any other container on the same node as the container it failed to ping? It looks like there is a problem communicating with that node on port 7946/udp, 7946/tcp, or 4789/udp. Can you try using a tool like nc to connect to these ports after the problem has happened? It seems like something is blocking traffic on these ports, which might explain why things still don't work even after a node or container restart.

@mrjana
Contributor

mrjana commented Oct 12, 2016

Also, there seems to be a general problem with image pulls timing out, which should not involve the overlay network. Not sure at the moment whether the two are related.

@groyee
Author

groyee commented Oct 12, 2016

Any other container, running on any other node (except useralerts2-prod), can successfully ping the dockeruser_kafka_1 container (10.0.7.7).

Please let me know if you still want me to run some tests.

@groyee
Author

groyee commented Oct 12, 2016

Regarding your question:

_can it ping any other container on the same node as the other container which it failed to ping?_

I just checked, and it can ping other containers running on kafka1-prod:

screen shot 2016-10-12 at 9 21 17 pm

10.0.7.7 and 10.0.7.31 are both running on node kafka1-prod.

And again, any container on any other node can ping 10.0.7.7.

Crazy :-)

@groyee
Author

groyee commented Oct 12, 2016

If it were a one-time issue, or if we at least had a workaround, we could live with that for now. The problem is that it happens every day to a different container and there is no workaround. Currently, every time it happens, I delete the node from Azure and create a new one.

@mrjana
Contributor

mrjana commented Oct 12, 2016

@groyee I see that the gossip query queue was building up from 2016/10/07 11:49:08 to 2016/10/07 12:02:52, and then you probably rebooted the node. The query buildup indicates network congestion during that time from which it never recovered, but that should at least have been cleared when you rebooted the node. After the reboot there was no queue buildup, but that information alone may not mean much unless we know whether there were any miss notifications. How many containers are running in your cluster overall? Can you still run nc kafka1-prod 7946 and nc -u kafka1-prod 7946? Also, if possible, can you restart this node in debug mode and get the logs after the restart? That said, I think all of these problems stem from the initial network congestion, which lasted 13 minutes.

@mrjana
Contributor

mrjana commented Oct 12, 2016

You mentioned in your bug report that you have 200 containers. Are they spread across all 100 VMs? Also, do you bring them down and up often?

@sebi-hgdata

@groyee can you try a reversed ping (ping the container from the one that can't reach it) and see if it recovers? I think there are a couple of open issues like this.

@groyee
Author

groyee commented Oct 12, 2016

We have about 200 containers spread across ~90 VMs.

I ran nc and nc -u against port 7946 and both worked fine.

We do bring containers up and down based on system load, a sort of auto-scaling.

WOW. I just did the reversed ping and it recovered!!! Can you please explain what is going on here?

@groyee
Author

groyee commented Oct 12, 2016

So, I guess the question is: what now? Is it a docker issue? A libnetwork issue? Something else?

I guess I could write a script that runs in each container and pings all other containers every few seconds, but I don't think that is a good solution for production.

Should I drop the docker overlay network and use --net=host? I know it's probably a bad idea, but if it is at least stable without disconnections it could be a temporary solution.
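For what it's worth, the ping-everything watchdog mentioned above could be sketched as follows (assumptions: `ping_mesh` is a made-up helper, the docker CLI is available where it runs, and it runs somewhere that can actually route to the overlay subnet, e.g. inside a container attached to the network):

```shell
# ping_mesh NETWORK: ping every container IP registered on an overlay network
# and report the ones that do not answer, so both directions of the vxlan
# forwarding state get (re)learned.
ping_mesh() {
  docker network inspect "$1" \
    --format '{{range .Containers}}{{.IPv4Address}}{{"\n"}}{{end}}' |
  cut -d/ -f1 |
  while read -r ip; do
    [ -n "$ip" ] || continue
    ping -c 1 -W 2 "$ip" >/dev/null 2>&1 || echo "unreachable: $ip"
  done
}

# e.g. from cron every minute, with a placeholder network name:
# ping_mesh my-overlay-network
```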

@mrjana
Contributor

mrjana commented Oct 12, 2016

WOW. I just did reversed ping and it recovered!!! Can you please explain me what is going on here?

Yeah, that can be explained. Say Container A is trying to ping Container B and failing: in your case, the node running Container A does not know how to forward traffic to Container B. But Container B apparently knows how to reach Container A, since the reverse ping works. Once you send a ping from Container B to Container A, the node running Container A auto-learns how to reach Container B when it receives Container B's packet, so when Container A sends a response it knows exactly where to send it.
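The forwarding state being (re)learned here lives in the VXLAN forwarding database inside the overlay network's namespace. A sketch for inspecting it (assumptions: the `/var/run/docker/netns/<id>` layout seen later in this thread, a device named `vxlan1` as in the syslog lines, and iproute2's `bridge` tool; must run as root on the node that cannot ping):

```shell
# list_overlay_fdb: dump the learned MAC -> remote-host entries for every
# overlay namespace, so you can check whether the peer container's MAC
# (e.g. 02:42:0a:00:07:07) points at the right node IP.
list_overlay_fdb() {
  for ns in /var/run/docker/netns/*; do
    [ -e "$ns" ] || continue
    echo "== $ns =="
    nsenter --net="$ns" bridge fdb show dev vxlan1 2>/dev/null
  done
}

# list_overlay_fdb
```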

@groyee
Author

groyee commented Oct 12, 2016

What would you suggest regarding pulling images?

Also please see an issue I opened several weeks ago.
Docker Swarm ignores container constraints when performing the pull operation #2467

@thaJeztah
Member

You can consider setting --max-concurrent-downloads on the daemon: https://docs.docker.com/engine/reference/commandline/dockerd/

(either by setting that flag, or using a daemon.json configuration file)
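For example, via the configuration file (a sketch; the path is typically /etc/docker/daemon.json, and the daemon must be restarted to pick up the change):

```json
{
  "max-concurrent-downloads": 1
}
```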


@sebi-hgdata

sebi-hgdata commented Oct 13, 2016

@groyee I've had these sorts of issues for a long time now. For me it also happened during deploys (we have just a dozen machines), which generate high download traffic (I hadn't thought about this until now)... and it might be related to Serf and UDP packet loss (just an assumption). To minimize the occurrences, I set up my own serf agent that joins docker's serf cluster, plus a cron job that runs a serf reachability test. Thinking about it, and reading through Serf's documentation, doing a serf rtt would be even better...
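A rough sketch of that setup (assumptions: HashiCorp's serf binary is installed, docker's gossip members answer on 7946, and the node name, join address, and `serf_gossip_check` helper are all placeholders):

```shell
# Long-running agent that joins the gossip cluster (placeholder address):
# serf agent -node="health-$(hostname)" -join=192.168.0.22:7946 &

# Cron job body: flag gossip partitions so someone can intervene early.
# Per Serf's docs, `serf rtt <node>` gives finer-grained latency data.
serf_gossip_check() {
  serf reachability || logger -t overlay-health "serf reachability test FAILED"
}

# serf_gossip_check     # e.g. from cron: */5 * * * *
```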

@groyee
Author

groyee commented Oct 13, 2016

I see that the default value is --max-concurrent-downloads=3.

Do you think setting it to 1 will make a difference?

Also, when I do docker-compose pull on the swarm master, where does it push the image to the nodes from? From its local machine? If so, maybe I should change --max-concurrent-uploads on the swarm master?

@thaJeztah
Member

where from does it push the image to the nodes

In Swarm, each node pulls the image individually, so that option has to be set on each node/daemon. Neither Swarm nor swarm mode pushes images to the nodes.

@groyee
Author

groyee commented Oct 14, 2016

I see.

So it still means that if I have 100 VMs, all of them will start downloading an image at once, and this can choke the network. I will try setting --max-concurrent-downloads=1, but I am doubtful it will change anything.

@thaJeztah
Member

So it still means that if I have 100 VMs then all of them at once will start download a container

For "classic" Swarm, yes. Swarm mode does rolling updates, so you can specify how many nodes / service instances should update in parallel.

@dongluochen
Contributor

So it still means that if I have 100 VMs then all of them at once will start download a container

A docker pull command to "classic" Swarm triggers the download on all nodes. Is it necessary to call docker pull at all? If you organize your script to issue docker run, each run command will pull the image if it doesn't exist on the node and then run the container. Would that reduce simultaneous pulls enough to avoid congestion?

@groyee
Author

groyee commented Oct 15, 2016

The reason we are doing a pull is a docker defect (or at least there was one in previous versions) where sometimes you would get the following error:

ERROR: for dockeruser_webapi_2 Cannot create container for service webapi: Unable to find a node that satisfies the following conditions

Only after doing a pull was this issue resolved.

@groyee
Author

groyee commented Oct 15, 2016

So I think I can now confirm that this issue has nothing to do with image pulls. For the last several days we didn't do a single pull, there were no network spikes, and the issue still happens.

Currently we have one container where, every time I restart it, I need to do a reverse ping from the hosts that cannot reach it. After the reverse ping it works, but when I restart the container again I have to repeat the operation. Just to be on the safe side, I tried rebooting the VM again, but it doesn't help.

Please let me know what logs you need. We really need to fix this issue.

@mrjana
Contributor

mrjana commented Oct 15, 2016

@groyee Sorry, I've been busy with other issues. When you restarted the container on that VM, did you have the daemon in debug mode? Can you get the daemon logs from that node after a few unsuccessful pings, and again after some time, so that we can see (a) that miss notifications were generated, and (b) that the entry was queried on the cluster but somehow timed out?

@groyee
Author

groyee commented Oct 15, 2016

No, unfortunately it wasn't.

I can do it again, as it happens every time. Is there a permanent way to boot docker in debug mode on that host?

@thaJeztah
Member

You can use a daemon.json configuration file and enable debug in it: https://docs.docker.com/engine/reference/commandline/dockerd/
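For example (a sketch; the path is typically /etc/docker/daemon.json, and the daemon must be restarted to pick up the change):

```json
{
  "debug": true
}
```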


@groyee
Author

groyee commented Oct 16, 2016

Thanks!

So I think I found some interesting logs.

First, I attached the docker daemon debug log. (I hope this is debug; I changed the /lib/systemd/system/docker.service file to: ExecStart=/usr/bin/dockerd -D -H fd://)

After I restarted the docker daemon and the container, this container couldn't ping two containers: one is kafka, the other zookeeper.

I watched the logs live to see whether a new message appeared when I did the reversed ping, but I saw nothing in the docker logs. However, when I ran tail -f /var/log/syslog, the moment I did the reverse ping from the other hosts I saw these two lines:

Oct 16 16:39:19 webapi1-prod kernel: [89667.035914] vxlan1: 02:42:0a:00:07:07 migrated from 192.168.0.23 to 192.168.0.22
Oct 16 16:39:35 webapi1-prod kernel: [89682.861612] vxlan1: 02:42:0a:00:07:06 migrated from 192.168.0.22 to 192.168.0.23

The IPs you see here are the Azure internal IPs of the nodes; one is where zookeeper is running and the other is where kafka is running.
docker.debug.txt

@mrjana
Contributor

mrjana commented Oct 16, 2016

@groyee I think I see what is going on. To confirm my theory, can you tell me whether the kafka and zookeeper containers were restarted after they were initially up and running, and whether that restart resulted in them being scheduled on different hosts (say, kafka was running on 192.168.0.23 and after a restart ran on 192.168.0.22, while zookeeper was originally running on 192.168.0.22 and migrated to 192.168.0.23)? Do you see this problem even if none of your containers are restarted after starting them in a fresh cluster?

@groyee
Author

groyee commented Oct 16, 2016

What you said is correct. Both kafka and zookeeper were restarted, and swarm scheduled them on different hosts after the restart.

That being said, other containers have no problem with this. For example, if I bring a new container into the cluster now, just a simple ubuntu image, it pings everybody fine.

I am not sure I understand your last question. I think this happens only when a container restarts, whether by me or by itself.

@dongluochen
Contributor

Both kafka and zookeeper were restarted and swarm scheduled them on different hosts after the restart.

@groyee do you mean the containers are configured with -l 'com.docker.swarm.reschedule-policies=["on-node-failure"]' so that they are automatically rescheduled on node failure?

@groyee
Author

groyee commented Oct 18, 2016

No, sorry, my wording wasn't accurate.

We don't use the on-node-failure feature.

When a container restarts by itself, it always restarts on the same host.

When we do it manually (docker-compose scale=0 and then docker-compose scale=X), swarm can schedule it wherever it wants.

@dongluochen
Contributor

@groyee There are quite a few errors in docker.debug.txt; I can't tell whether they are directly related. One thing I notice is that the docker shutdown was not clean. Did this node run out of space?

Oct 16 16:18:19 webapi1-prod docker[857]: time="2016-10-16T16:18:19.600008310Z" level=error msg="Error deleting sandbox id 0d1479f28e5680cd532bcd314bc4355e0927e62eac0dcd43a3be213b73972707 for container 97284c0b1426d2563721811f625b71345cfdd0a7021f3c54ff3f81287b3e761c: could not cleanup all the endpoints in container 97284c0b1426d2563721811f625b71345cfdd0a7021f3c54ff3f81287b3e761c / sandbox 0d1479f28e5680cd532bcd314bc4355e0927e62eac0dcd43a3be213b73972707"
Oct 16 16:18:19 webapi1-prod docker[857]: time="2016-10-16T16:18:19.656021848Z" level=error msg="libcontainerd: backend.StateChanged(): write /var/lib/docker/containers/97284c0b1426d2563721811f625b71345cfdd0a7021f3c54ff3f81287b3e761c/.tmp-config.v2.json547004052: no space left on device"

@groyee
Author

groyee commented Oct 19, 2016

Yes, this host ran out of space because of this defect: #21925

But I don't think it is related. Right now it has plenty of space; all I need to do is restart the container.

I attached new debug logs since the last docker daemon restart.

docker-debug-2.logs.txt

It's like docker keeps a cache of the container addresses somewhere and then, for some reason, fails to renew it. This is the only explanation I can think of for why restarting the container or this VM doesn't help, while creating a new VM and then running the same container there works fine (until the next time it happens).

Here are a few log lines from the file that look suspicious to me:

Oct 19 02:44:44 webapi2-prod docker[19538]: time="2016-10-19T02:44:44.676200046Z" level=error msg="Could not open netlink handle during vni population for ns /var/run/docker/netns/3-d025aa804d: failed to set into network namespace 13 while creating netlink socket: invalid argument"
Oct 19 02:44:44 webapi2-prod docker[19538]: time="2016-10-19T02:44:44.691345201Z" level=warning msg="Failure during overlay endpoints restore: restore network sandbox failed: could not get network sandbox (oper true): failed to create a netlink handle: failed to set into network namespace 13 while creating netlink socket: invalid argument"
Oct 19 02:44:44 webapi2-prod docker[19538]: time="2016-10-19T02:44:44.691369601Z" level=info msg="resetting init error and once variable for network d025aa804d79cc3d6919c30e3488e838e85b75c5a41b67a9c367a8871370d9d1 after unsuccesful endpoint restore: could not get network sandbox (oper true): failed to create a netlink handle: failed to set into network namespace 13 while creating netlink socket: invalid argument"
Oct 19 02:44:44 webapi2-prod docker[19538]: time="2016-10-19T02:44:44.748025228Z" level=error msg="getNetworkFromStore for nid 3f13a814d889aa7d4c4822d6f8a51b1d25da89882cfb8ff09657ce2f84a5b2d6 failed while trying to build sandbox for cleanup: network 3f13a814d889aa7d4c4822d6f8a51b1d25da89882cfb8ff09657ce2f84a5b2d6 not found"
Oct 19 02:44:44 webapi2-prod docker[19538]: time="2016-10-19T02:44:44.751736115Z" level=error msg="getEndpointFromStore for eid 2cf18e33f6fae267a592824574a2b1e424e98f0f03f24a6b20b3f8912fd19222 failed while trying to build sandbox for cleanup: could not find endpoint 2cf18e33f6fae267a592824574a2b1e424e98f0f03f24a6b20b3f8912fd19222: []"
Oct 19 02:44:44 webapi2-prod docker[19538]: time="2016-10-19T02:44:44.755032893Z" level=error msg="getEndpointFromStore for eid 5dcfaf06c229ece4fea2b478a7afda546fd5e50daa96edb0dc88a9d1703b13ee failed while trying to build sandbox for cleanup: could not find endpoint 5dcfaf06c229ece4fea2b478a7afda546fd5e50daa96edb0dc88a9d1703b13ee: []"
Oct 19 02:44:44 webapi2-prod docker[19538]: time="2016-10-19T02:44:44.756921237Z" level=error msg="getEndpointFromStore for eid e2619eb1a61ffa01aca8ebf4e3f567507bfa8061ee3a6af7eb15774b6375012b failed while trying to build sandbox for cleanup: could not find endpoint e2619eb1a61ffa01aca8ebf4e3f567507bfa8061ee3a6af7eb15774b6375012b: []"
Oct 19 02:44:44 webapi2-prod docker[19538]: time="2016-10-19T02:44:44.756946037Z" level=info msg="Removing stale sandbox e41c3862b3931675708cfdfc54cec91eadc2aa3fe3c4a7668c30317344f4747d (4cc06817cfe7f90ab3bd18301e02e842702c1036311f8d524d23d36b9ecb4014)"
Oct 19 02:44:44 webapi2-prod docker[19538]: time="2016-10-19T02:44:44.758286069Z" level=warning msg="Failed getting network for ep b40b7063229d3f15dee4e611ed1e7353ccebfb7a86ac98d999f7aeeef59198bf during sandbox e41c3862b3931675708cfdfc54cec91eadc2aa3fe3c4a7668c30317344f4747d delete: network 3f13a814d889aa7d4c4822d6f8a51b1d25da89882cfb8ff09657ce2f84a5b2d6 not found"

@dongluochen
Contributor

dongluochen commented Oct 19, 2016

@groyee In the failure case, does the container move to a different host?

@sanimej Could this problem be related to issue #25215? I see the following error.

resetting init error and once variable for network d025aa804d79cc3d6919c30e3488e838e85b75c5a41b67a9c367a8871370d9d1 after unsuccesful endpoint restore: could not get network sandbox (oper true): failed to create a netlink handle: failed to set into network namespace 13 while creating netlink socket: invalid argument

@groyee
Author

groyee commented Oct 19, 2016

Since we don't use the on-node-failure feature, I believe that in the failure case the container doesn't move to a different host. I assume...

But again, it can easily move to a different host when we do docker-compose scale=0 and then docker-compose scale=X.

For example, zookeeper doesn't do much and has no persistent volume, so swarm can schedule it anywhere in the cluster every time we remove the container and install a new one.

Your second question to @sanimej is interesting. We had many, many issues related to #25215. I upgraded all our servers to v1.12.2, but I can't say whether the problem started before or after the upgrade; it could very well be that it started before.

@dongluochen
Contributor

There might be two ways a container moves to a host with a different IP:

  1. A container with the same name was started on a new host, as in the compose scale-down-then-up case.
  2. The host was restarted and got a different IP through DHCP. In your failure cases, did the host shut down and restart?

@groyee
Author

groyee commented Oct 20, 2016

I can't say for sure

@mavenugo assigned mavenugo and unassigned mrjana on Nov 28, 2016
@mavenugo
Contributor

@groyee We introduced a concept called an --attachable overlay network in docker swarm mode in 1.13 (1.13.0-rc2 is available to try). We have fundamentally changed the way the gossip works, and it might behave better in your case. With an --attachable network, one can also run containers on the overlay network using docker run. If you prefer to continue using swarm-v1 and compose, you can do that after creating the --attachable network in a swarm-mode cluster. Would you be willing to give this a try and provide feedback?
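For reference, the workflow looks roughly like this (a sketch; "mynet" and "probe" are placeholder names, and docker 1.13+ in swarm mode is assumed):

```shell
create_attachable_net() {
  # on a swarm-mode manager: create the attachable overlay network
  docker network create --driver overlay --attachable mynet
}

run_probe() {
  # on any node in the swarm: a plain `docker run` container can now join it
  docker run -d --name probe --net mynet alpine sleep 3600
}

# create_attachable_net && run_probe
```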

@groyee
Author

groyee commented Dec 5, 2016

We would love to give it a try; I am just trying to understand whether we can do it without any downtime.

Does it mean that I need to delete the current overlay network and create a new one?

Also, it means that I need to upgrade to 1.13 not only on the failing hosts but also on the swarm itself, right?

@thaJeztah
Member

Let me close this ticket for now, as it looks like it went stale.

@thaJeztah closed this as not planned (won't fix / can't repro / duplicate / stale) on Sep 16, 2023