
VolumeDriver.Mount: exit status 1 #34

Open
jmkgreen opened this issue Jan 11, 2018 · 12 comments

@jmkgreen

  • Plugin version (or commit ref): latest
  • Docker version: 17.09.1
  • Plugin type: managed
  • Operating system: Ubuntu 16.04

Description

Six-node swarm. Three are Swarm managers / Gluster servers; the other three are workers. They can all resolve and ping each other, all behind the same network switch.

We currently have the original glusterfs volume plugin installed on the workers. We want to migrate to your plugin, so I have executed sudo docker plugin install sapk/plugin-gluster on each worker.

I created a new Gluster volume, then a new docker volume using it and your plugin, without incident.
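For reference, the creation steps would have been roughly of this shape (the brick paths and the svc2/svc3 hostnames are hypothetical placeholders; the volume names match the voluri that appears later in this thread):

# On a manager / Gluster server (brick paths are hypothetical):
sudo gluster volume create jg-test_test replica 3 \
    svc1:/bricks/jg-test svc2:/bricks/jg-test svc3:/bricks/jg-test
sudo gluster volume start jg-test_test

# Then the docker volume backed by it, via the plugin:
sudo docker volume create --driver sapk/plugin-gluster \
    --opt voluri="svc1:jg-test_test" jg-test_test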

I tried launching a stack using a docker-compose.yaml file:

version: '3.3'

services:

  test-server:
    image: ubuntu:latest
    hostname: ubuntu
    volumes:
      - jg-test_test:/data
    deploy:
      placement:
        constraints:
          - node.role != manager

This "landed" on virt-b, logs below. An earlier attempt, based on a top-level volumes definition and before using docker volume create ... also failed with the exact same error.

In short, neither creating the docker volume in advance nor allowing docker swarm to auto-create it worked for me. The worker can ping the declared gluster node(s) just fine.

It is unclear how to debug this. I tried adding debug: 1 to the volume definition, but observed no change.

Could the fact that these workers already use Gluster volumes via the original glusterfs plugin be preventing your plugin's use? I am at a loss to explain it otherwise, since we have it working fine with the original plugin.

Logs

Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52.069898516Z" level=error msg="fatal task error" error="starting container failed: error while mounting volume '/var/lib/docker/plugins/ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974/rootfs': VolumeDriver.Mount: exit status 1" module="node/agent/taskmanager" node.id=t72pzcfvekpi7zayyi1su185y service.id=cshb1o1btdblbjwko0xup5fno task.id=svlw41f8r26bck88233qxbdnb
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers getPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b kernel: [1354011.732285] aufs au_opts_verify:1597:dockerd[4919]: dirperm1 breaks the protection by the permission bits on the lower branch
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers capabilitiesPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b dockerd[1051]: time="2018-01-11T16:03:52Z" level=info msg="2018/01/11 16:03:52 Entering go-plugins-helpers getPath" plugin=ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974
Jan 11 16:03:52 virt-b kernel: [1354011.759236] aufs au_opts_verify:1597:dockerd[4919]: dirperm1 breaks the protection by the permission bits on the lower branch

@sapk (Owner) commented Jan 11, 2018

I haven't tested running it in parallel with the original plugin, but it should work.
Make sure the gluster server names are resolvable from the host.

Can you test a basic command? Like:

docker volume create --driver sapk/plugin-gluster --opt voluri="<volumeserver>:<volumename>" --name test
docker run -v test:/mnt --rm -ti ubuntu

@sapk (Owner) commented Jan 11, 2018

For debugging, it is a plugin-wide config: https://github.com/sapk/docker-volume-gluster#additionnal-docker-plugin-config
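If the convention in that README section applies, the per-engine steps would look something like this (the DEBUG variable name is an assumption taken from that README section; verify it there):

docker plugin disable sapk/plugin-gluster
docker plugin set sapk/plugin-gluster DEBUG=1
docker plugin enable sapk/plugin-gluster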

@jmkgreen (Author)

From the host that created the docker volume:

jamesg@svc1:~/jg-test$ sudo docker run -v jg-test_test:/data --rm -it ubuntu
root@153b83be0a4e:/# ls -l /data
total 0
root@153b83be0a4e:/# touch /data/foo
root@153b83be0a4e:/#

At this point I can see foo inside a brick.

On a worker node:

jamesg@virt-b:~$ sudo docker run -v jg-test_test:/data --rm -it ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
Digest: sha256:fbaf303d18563e57a3c1a0005356ad102509b60884f3aa89ef9a90c0ea5d1212
Status: Downloaded newer image for ubuntu:latest
docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/ff58396a87804b347745e384c8021758fdb97d41d398767c7bf84fb3fabd1974/rootfs': VolumeDriver.Mount: exit status 1.

@jmkgreen (Author)

Those plugin instructions are per docker engine, i.e. I would have to run them on each worker?

@sapk (Owner) commented Jan 11, 2018

Unfortunately, yes.
I don't know of any method to configure (or even install) a plugin through swarm ...
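The usual workaround is to script the per-node steps over ssh; a minimal sketch, assuming passwordless sudo on each worker (virt-a is a hypothetical hostname alongside the virt-b/virt-c seen in the logs):

for node in virt-a virt-b virt-c; do
  ssh "$node" sudo docker plugin install --grant-all-permissions sapk/plugin-gluster
done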

@jmkgreen (Author)

To recap: I launched the container on the manager node where I created the volume, and failed to launch the same on a worker node.

I have set debug (I had to force-disable your plugin; Docker thinks it is in use on all workers).

I have re-established the volumes top-level key. The docker-compose.yaml now reads:

version: '3.3'

services:

  test-server:
    image: ubuntu:latest
    hostname: ubuntu
    volumes:
      - test:/data
    deploy:
      placement:
        constraints:
          - node.role != manager

volumes:
  test:
    driver: sapk/plugin-gluster:latest
    driver_opts:
      voluri: "svc1:jg-test_test"

Launched the stack and the error now shows:

jamesg@svc1:~/jg-test$ docker stack ps --format "{{.Name}} {{.Node}} {{.Error}}" --no-trunc jg-test
jg-test_test-server.1 virt-b 
jg-test_test-server.1 virt-c "starting container failed: error while mounting volume '/var/lib/docker/plugins/d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0/rootfs': VolumeDriver.Mount: EOF
"
jg-test_test-server.1 virt-c "starting container failed: error while mounting volume '/var/lib/docker/plugins/d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0/rootfs': VolumeDriver.Mount: EOF
"

In more detail on the worker:

Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="2018/01/11 17:28:08 Entering go-plugins-helpers mountPath" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="2018/01/11 17:28:08 http: panic serving @: runtime error: index out of range" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="goroutine 46 [running]:" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="net/http.(*conn).serve.func1(0xc4201f8140)" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/usr/local/go/src/net/http/server.go:1697 +0xd0" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="panic(0x82cc40, 0xb0eb20)" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/usr/local/go/src/runtime/panic.go:491 +0x283" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="github.com/sapk/docker-volume-gluster/gluster/driver.parseVolURI(0x0, 0x0, 0x0, 0xc)" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/go/src/app/.gopath/src/github.com/sapk/docker-volume-gluster/gluster/driver/tools.go:74 +0x1dd" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="github.com/sapk/docker-volume-gluster/gluster/driver.(*GlusterDriver).Mount(0xc42008e7d0, 0xc4200f14e0, 0x0, 0x0, 0x0)" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/go/src/app/.gopath/src/github.com/sapk/docker-volume-gluster/gluster/driver/driver.go:239 +0x1b9" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="github.com/sapk/docker-volume-gluster/vendor/github.com/docker/go-plugins-helpers/volume.(*Handler).initMux.func3(0xae1ac0, 0xc42015ae00, 0xc420111500)" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/go/src/app/.gopath/src/github.com/sapk/docker-volume-gluster/vendor/github.com/docker/go-plugins-helpers/volume/api.go:166 +0xf8" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08.434272597Z" level=warning msg="Unable to connect to plugin: /run/docker/plugins/d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0/gluster.sock/VolumeDriver.Mount: Post http://%2Frun%2Fdocker%2Fplugins%2Fd1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0%2Fgluster.sock/VolumeDriver.Mount: EOF, retrying in 1s"
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="net/http.HandlerFunc.ServeHTTP(0xc420051420, 0xae1ac0, 0xc42015ae00, 0xc420111500)" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/usr/local/go/src/net/http/server.go:1918 +0x44" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="net/http.(*ServeMux).ServeHTTP(0xc420087b90, 0xae1ac0, 0xc42015ae00, 0xc420111500)" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/usr/local/go/src/net/http/server.go:2254 +0x130" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="net/http.serverHandler.ServeHTTP(0xc420097790, 0xae1ac0, 0xc42015ae00, 0xc420111500)" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/usr/local/go/src/net/http/server.go:2619 +0xb4" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="net/http.(*conn).serve(0xc4201f8140, 0xae21c0, 0xc420192a40)" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/usr/local/go/src/net/http/server.go:1801 +0x71d" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="created by net/http.(*Server).Serve" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0
Jan 11 17:28:08 virt-c dockerd[1050]: time="2018-01-11T17:28:08Z" level=info msg="#011/usr/local/go/src/net/http/server.go:2720 +0x288" plugin=d1db1e403432309fedd643f01a6ad7cb523f915a6b17b899a1ebe8fac652ace0

@sapk (Owner) commented Jan 12, 2018

OK, it seems to panic at https://github.com/sapk/docker-volume-gluster/blob/master/gluster/driver/tools.go#L74. At first glance, I don't know why, since the volume URI seems good.
Since the error is "runtime error: index out of range", it must be volParts[1] that is not populated.
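A minimal illustrative reconstruction (not the actual source) of how that panic would arise, assuming parseVolURI splits the voluri on ':' as the stack trace suggests; the (0x0, 0x0) string header in the parseVolURI frame hints that an empty voluri reached Mount, e.g. from a swarm-auto-created volume missing its driver_opts:

package main

import (
	"fmt"
	"strings"
)

// Illustrative only: strings.Split("", ":") yields [""], so reading
// volParts[1] without a length check panics with "index out of range".
func parseVolURI(volURI string) (server, volume string, err error) {
	volParts := strings.Split(volURI, ":")
	if len(volParts) < 2 {
		return "", "", fmt.Errorf("invalid voluri %q: want <server>[,<server>...]:<volumename>", volURI)
	}
	return volParts[0], volParts[1], nil
}

func main() {
	// An empty voluri is what the trace suggests reached Mount.
	if _, _, err := parseVolURI(""); err != nil {
		fmt.Println(err)
	}
}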

@jmkgreen (Author)

Hi - have you had any further thoughts on this? Keen to get back to testing things internally ASAP.

@sakel commented Feb 5, 2018

I have the same issue. 3-node (all managers) swarm cluster. It doesn't seem to work with docker run either.

Any way I could help you debug this issue?

@romguyon commented Apr 5, 2018

When using the plugin with docker-machine and the virtualbox driver, it is important to allocate more than 1 GB of RAM, for example:

docker-machine create -d virtualbox --virtualbox-memory 2048

GlusterFS specifies in its docs that for VMs the RAM must be higher than 1 GB.

Then you can use the plugin with compose or with docker run, and it works.

@codedge commented Oct 9, 2019

I can confirm the error.

  • Docker version: 19.03.2
  • CentOS 7

$ gluster volume info

Volume Name: lei01
Type: Replicate
Volume ID: 34f83239-5748-4d39-ad48-fa59d5c6c0dc
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dh1.lei01:/export/lei01
Brick2: dh2.lei01:/export/lei01
Options Reconfigured:
auth.allow: *
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

$ docker volume inspect test_gfs

[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "glusterfs:latest",
        "Labels": {
            "com.docker.stack.namespace": "test"
        },
        "Mountpoint": "/var/lib/docker-volumes/gluster/test_gfs",
        "Name": "test_gfs",
        "Options": {
            "voluri": "dh1.lei01,dh2.lei01:lei01"
        },
        "Scope": "global",
        "Status": {
            "TODO": "List"
        }
    }
]

Then using the newly created volume:

$ docker run -v test_gfs:/data --rm -ti alpine sh
docker: Error response from daemon: error while mounting volume '/var/lib/docker/plugins/c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296/rootfs': VolumeDriver.Mount: exit status 107.

Logs

Oct 09 13:08:17 dh2.lei01 dockerd[12726]: time="2019-10-09T13:08:17+02:00" level=error msg="time=\"2019-10-09T11:08:17Z\" level=debug msg=\"Entering Mount: &{test_gfs 74b0ea8cd8e6db56c2bc94807914996dea0d61cafece90f9cc7a314cda0542de}\"" plugin=c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296
Oct 09 13:08:17 dh2.lei01 dockerd[12726]: time="2019-10-09T13:08:17+02:00" level=error msg="time=\"2019-10-09T11:08:17Z\" level=debug msg=\"Entering MountExist: name: test_gfs\"" plugin=c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296
Oct 09 13:08:17 dh2.lei01 dockerd[12726]: time="2019-10-09T13:08:17+02:00" level=error msg="time=\"2019-10-09T11:08:17Z\" level=debug msg=\"Volume found: &{dh1.lei01,dh2.lei01:lei01 test_gfs %!s(int=0)}\"" plugin=c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296
Oct 09 13:08:17 dh2.lei01 dockerd[12726]: time="2019-10-09T13:08:17+02:00" level=error msg="time=\"2019-10-09T11:08:17Z\" level=debug msg=\"Mount found: &{/var/lib/docker-volumes/gluster/test_gfs %!s(int=0)}\"" plugin=c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296
Oct 09 13:08:17 dh2.lei01 dockerd[12726]: time="2019-10-09T13:08:17+02:00" level=error msg="time=\"2019-10-09T11:08:17Z\" level=debug msg=\"glusterfs --volfile-id='lei01' -s 'dh1.lei01' -s 'dh2.lei01' /var/lib/docker-volumes/gluster/test_gfs\"" plugin=c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296
Oct 09 13:08:21 dh2.lei01 dockerd[12726]: time="2019-10-09T13:08:21+02:00" level=error msg="time=\"2019-10-09T11:08:21Z\" level=debug msg=\"Error: exit status 107\"" plugin=c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296
Oct 09 13:08:21 dh2.lei01 dockerd[12726]: time="2019-10-09T13:08:21+02:00" level=error msg="time=\"2019-10-09T11:08:21Z\" level=debug msg=\"Output: []\"" plugin=c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296
Oct 09 13:08:21 dh2.lei01 dockerd[12726]: time="2019-10-09T13:08:21.722560405+02:00" level=error msg="fatal task error" error="VolumeDriver.Mount: exit status 107" module=node/agent/taskmanager node.id=ylm8yp3hh1nkst3e22wvpwqhr service.id=bwcc8wbihjeomo5wkrq35871r task.id=pquyz9e77hbp6z9u5snw44zjz

Mounting the volume with glusterfs --volfile-id='lei01' -s 'dh1.lei01' -s 'dh2.lei01' /mnt/test works without errors.

UPDATE 1
What I noticed is that the mountpoint of the volume

 "Mountpoint": "/var/lib/docker-volumes/gluster/test_gfs",

does not exist. There is no docker-volumes path in /var/lib/. But even when I create it, the volume cannot be mounted, and the error VolumeDriver.Mount: exit status 107 shows up.
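One thing worth knowing here: a managed plugin runs in its own rootfs, so the Mountpoint reported by docker volume inspect is a path inside the plugin's filesystem, not the host's /var/lib. It can be inspected from the host through the plugin's rootfs (plugin ID taken from the logs above), though the mount itself happens in the plugin's mount namespace, so its contents may still not be visible this way:

sudo ls /var/lib/docker/plugins/c546ed4c7d978dfc03dbda98a451bbd17d5713e467817999feb7956b6585f296/rootfs/var/lib/docker-volumes/gluster/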

Any idea what could help or is more information needed?

@akshayvijapur

Me too. Any workaround for this?
