
"can't create" but creates anyway #52

Open
ghost opened this issue Apr 18, 2020 · 4 comments

Comments


ghost commented Apr 18, 2020

Hello, I'm having a little trouble getting Docker to use this plugin for volumes. I've got a cluster that seems to be working, plus this container:

docker-rbd-plugin:
    restart: unless-stopped
    hostname: docker-rbd-plugin.ceph-cluster.internal
    # build:
    #   context: ./docker-rbd-plugin/
    #   dockerfile: Dockerfile
    privileged: true
    image: openfrontier/rbd-docker-plugin
    command: "rbd-docker-plugin --create --remove delete --name ceph --cluster ceph-cluster.internal --pool docker --user docker"
    environment:
      RBD_DOCKER_PLUGIN_DEBUG: 1
      LANG: en_US.utf8
      TZ: UTC
    volumes:
      - /run/docker/plugins:/run/docker/plugins
      - /mnt/ceph/ceph:/etc/ceph
      - /mnt/ceph/lib/:/var/lib/ceph/
      - /var/lib/docker/volumes:/var/lib/docker-volumes
    depends_on:
      - osd1
      - mon1
      - mgr
    networks:
      cluster-net:

I created the docker pool, user, etc. like this:

docker-compose exec mon1 ceph osd pool create docker 50

docker-compose exec mon1 ceph auth get-or-create client.docker mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=docker' -o /etc/ceph/ceph.client.docker.keyring

docker-compose exec mon1 ceph osd pool application enable rbd docker
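(Editor's note, not from the thread: `ceph osd pool application enable` takes the pool name first and the application name second, so the last command above looks like it has the arguments swapped — it would enable an application called "docker" on a pool called "rbd". What was presumably intended is:)

```shell
# `ceph osd pool application enable <pool> <app>`: pool first, then application.
# This enables the "rbd" application on the "docker" pool created above.
docker-compose exec mon1 ceph osd pool application enable docker rbd
```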

and then I start the container and attempt to create a container with a volume using the 'ceph' driver: docker run -it --volume-driver ceph -v test ubuntu /bin/bash

and then it gives me this message:

docker: Error response from daemon: create 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7: VolumeDriver.Create: Unable to create Ceph RBD Image(53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7): exit status 2.
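(Editor's note, my reading of the log rather than something stated in the thread: the 64-hex-character name in the error is the kind of name Docker generates for an anonymous volume, which suggests `-v test` is not being treated as a volume name. Naming the volume and its mount point explicitly should at least make the RBD image name readable; the name `test` and path `/data` below are illustrative:)

```shell
# Hypothetical named-volume form: "test" becomes the RBD image name
# instead of an auto-generated 64-hex anonymous-volume ID.
docker run -it --volume-driver ceph -v test:/data ubuntu /bin/bash
```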

and the plugin container says this:

Starting ceph_docker-rbd-plugin_1 ... done
Attaching to ceph_docker-rbd-plugin_1
docker-rbd-plugin_1  | 2020/04/18 18:14:46 main.go:91: INFO: starting rbd-docker-plugin version 2.0.1
docker-rbd-plugin_1  | 2020/04/18 18:14:46 main.go:92: INFO: canCreateVolumes=true, removeAction="delete"
docker-rbd-plugin_1  | 2020/04/18 18:14:46 main.go:101: INFO: Setting up Ceph Driver for PluginID=ceph, cluster=ceph-cluster.internal, ceph-user=docker, pool=docker, mount=/var/lib/docker-volumes, config=/etc/ceph/ceph.conf
docker-rbd-plugin_1  | 2020/04/18 18:14:46 driver.go:85: INFO: newCephRBDVolumeDriver: setting base mount dir=/var/lib/docker-volumes/ceph
docker-rbd-plugin_1  | 2020/04/18 18:14:46 main.go:121: INFO: Creating Docker VolumeDriver Handler
docker-rbd-plugin_1  | 2020/04/18 18:14:46 main.go:125: INFO: Opening Socket for Docker to connect: /run/docker/plugins/ceph.sock
docker-rbd-plugin_1  | 2020/04/18 18:14:51 api.go:188: Entering go-plugins-helpers getPath
docker-rbd-plugin_1  | 2020/04/18 18:14:51 driver.go:644: DEBUG: parseImagePoolNameSize: "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7": ["53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7" "" "" "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7" "" ""]
docker-rbd-plugin_1  | 2020/04/18 18:14:51 utils.go:73: DEBUG: shWithTimeout: 2m0s, rbd, [--pool docker --conf /etc/ceph/ceph.conf --id docker info 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7]
docker-rbd-plugin_1  | 2020/04/18 18:14:51 utils.go:38: DEBUG: sh CMD: &{"/usr/bin/rbd" ["rbd" "--pool" "docker" "--conf" "/etc/ceph/ceph.conf" "--id" "docker" "info" "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7"] [] "" <nil> <nil> <nil> [] %!q(*syscall.SysProcAttr=<nil>) %!q(*os.Process=<nil>) "<nil>" <nil> %!q(bool=false) [] [] [] [] %!q(chan error=<nil>)}
docker-rbd-plugin_1  | 2020/04/18 18:14:52 driver.go:467: WARN: Image 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7 does not exist
docker-rbd-plugin_1  | 2020/04/18 18:14:52 api.go:132: Entering go-plugins-helpers createPath
docker-rbd-plugin_1  | 2020/04/18 18:14:52 driver.go:145: INFO: API Create(&{"53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7" map[]})
docker-rbd-plugin_1  | 2020/04/18 18:14:52 driver.go:153: INFO: createImage(&{"53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7" map[]})
docker-rbd-plugin_1  | 2020/04/18 18:14:52 driver.go:644: DEBUG: parseImagePoolNameSize: "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7": ["53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7" "" "" "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7" "" ""]
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:73: DEBUG: shWithTimeout: 2m0s, rbd, [--pool docker --conf /etc/ceph/ceph.conf --id docker info 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7]
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:38: DEBUG: sh CMD: &{"/usr/bin/rbd" ["rbd" "--pool" "docker" "--conf" "/etc/ceph/ceph.conf" "--id" "docker" "info" "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7"] [] "" <nil> <nil> <nil> [] %!q(*syscall.SysProcAttr=<nil>) %!q(*os.Process=<nil>) "<nil>" <nil> %!q(bool=false) [] [] [] [] %!q(chan error=<nil>)}
docker-rbd-plugin_1  | 2020/04/18 18:14:52 driver.go:687: INFO: Attempting to create new RBD Image: (docker/53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7, %!s(int=20480), xfs)
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:73: DEBUG: shWithTimeout: 2m0s, rbd, [--pool docker --conf /etc/ceph/ceph.conf --id docker create --image-format 2 --size 20480 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7]
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:38: DEBUG: sh CMD: &{"/usr/bin/rbd" ["rbd" "--pool" "docker" "--conf" "/etc/ceph/ceph.conf" "--id" "docker" "create" "--image-format" "2" "--size" "20480" "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7"] [] "" <nil> <nil> <nil> [] %!q(*syscall.SysProcAttr=<nil>) %!q(*os.Process=<nil>) "<nil>" <nil> %!q(bool=false) [] [] [] [] %!q(chan error=<nil>)}
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:73: DEBUG: shWithTimeout: 2m0s, rbd, [--pool docker --conf /etc/ceph/ceph.conf --id docker lock add 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7 docker-rbd-plugin.ceph-cluster.internal]
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:38: DEBUG: sh CMD: &{"/usr/bin/rbd" ["rbd" "--pool" "docker" "--conf" "/etc/ceph/ceph.conf" "--id" "docker" "lock" "add" "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7" "docker-rbd-plugin.ceph-cluster.internal"] [] "" <nil> <nil> <nil> [] %!q(*syscall.SysProcAttr=<nil>) %!q(*os.Process=<nil>) "<nil>" <nil> %!q(bool=false) [] [] [] [] %!q(chan error=<nil>)}
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:73: DEBUG: shWithTimeout: 2m0s, rbd, [--pool docker --conf /etc/ceph/ceph.conf --id docker map 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7]
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:38: DEBUG: sh CMD: &{"/usr/bin/rbd" ["rbd" "--pool" "docker" "--conf" "/etc/ceph/ceph.conf" "--id" "docker" "map" "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7"] [] "" <nil> <nil> <nil> [] %!q(*syscall.SysProcAttr=<nil>) %!q(*os.Process=<nil>) "<nil>" <nil> %!q(bool=false) [] [] [] [] %!q(chan error=<nil>)}
docker-rbd-plugin_1  | 2020/04/18 18:14:52 driver.go:791: INFO: unlockImage(docker/53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7, docker-rbd-plugin.ceph-cluster.internal)
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:73: DEBUG: shWithTimeout: 2m0s, rbd, [--pool docker --conf /etc/ceph/ceph.conf --id docker lock list 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7]
docker-rbd-plugin_1  | 2020/04/18 18:14:52 utils.go:38: DEBUG: sh CMD: &{"/usr/bin/rbd" ["rbd" "--pool" "docker" "--conf" "/etc/ceph/ceph.conf" "--id" "docker" "lock" "list" "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7"] [] "" <nil> <nil> <nil> [] %!q(*syscall.SysProcAttr=<nil>) %!q(*os.Process=<nil>) "<nil>" <nil> %!q(bool=false) [] [] [] [] %!q(chan error=<nil>)}
docker-rbd-plugin_1  | 2020/04/18 18:14:53 driver.go:805: DEBUG: found lines matching docker-rbd-plugin.ceph-cluster.internal:
docker-rbd-plugin_1  | [client.4288 docker-rbd-plugin.ceph-cluster.internal 192.168.32.39:0/2624029814]
docker-rbd-plugin_1  | 2020/04/18 18:14:53 utils.go:73: DEBUG: shWithTimeout: 2m0s, rbd, [--pool docker --conf /etc/ceph/ceph.conf --id docker lock rm 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7 docker-rbd-plugin.ceph-cluster.internal client.4288]
docker-rbd-plugin_1  | 2020/04/18 18:14:53 utils.go:38: DEBUG: sh CMD: &{"/usr/bin/rbd" ["rbd" "--pool" "docker" "--conf" "/etc/ceph/ceph.conf" "--id" "docker" "lock" "rm" "53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7" "docker-rbd-plugin.ceph-cluster.internal" "client.4288"] [] "" <nil> <nil> <nil> [] %!q(*syscall.SysProcAttr=<nil>) %!q(*os.Process=<nil>) "<nil>" <nil> %!q(bool=false) [] [] [] [] %!q(chan error=<nil>)}
docker-rbd-plugin_1  | 2020/04/18 18:14:53 driver.go:203: ERROR: Unable to create Ceph RBD Image(53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7): exit status 2

It would appear that this error message is wrong (the image does get created, per the title), but I don't know what all the other output means.
[screenshot attached]


ghost commented Apr 18, 2020

Alright, so I gather the last "lock" "rm" command is, per the rbd command help: lock remove (lock rm) — release a lock on an image.

So, tracing back, I think I've figured out what the problem might be. (I was having problems using admin too, but I didn't confirm whether it was the same issue — one would think admin wouldn't have this problem:)

root@docker-rbd-plugin:~# rbd --pool docker --conf /etc/ceph/ceph.conf  --id docker lock rm 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7 docker-rbd-plugin.ceph-cluster.internal
rbd: locker was not specified
root@docker-rbd-plugin:~# rbd --pool docker --conf /etc/ceph/ceph.conf  --id docker lock rm 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7 docker-rbd-plugin.ceph-cluster.internal client.4288
rbd: releasing lock failed: (13) Permission denied
2020-04-18 18:31:14.754741 7f9fb6efed40 -1 librbd: unable to blacklist client: (13) Permission denied
root@docker-rbd-plugin:~# 
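(Editor's note: the plugin's `unlockImage` flow, visible in the log above, is: list the locks on the image, find the line whose lock ID matches the plugin's hostname, and pass the extracted locker entity (`client.NNNN`) to `rbd lock rm` — which is why the manual `lock rm` without the locker argument fails with "locker was not specified". A sketch of that extraction step, using hypothetical sample `rbd lock list` output:)

```shell
# Hypothetical sample of `rbd lock list <image>` output (columns:
# Locker, ID, Address); exact formatting varies by Ceph release.
lock_list='Locker      ID                                       Address
client.4288 docker-rbd-plugin.ceph-cluster.internal  192.168.32.39:0/2624029814'

# Find the row whose lock ID matches our hostname and take the locker
# entity from column 1, as the plugin does before calling `lock rm`.
locker=$(printf '%s\n' "$lock_list" \
  | awk '$2 == "docker-rbd-plugin.ceph-cluster.internal" { print $1 }')

echo "$locker"   # client.4288
# then: rbd --pool docker --id docker lock rm <image> <lock-id> "$locker"
```

As for the "(13) Permission denied / unable to blacklist client" error itself: releasing a lock can require the client to blacklist the lock holder, which needs mon capabilities beyond `allow r`. On reasonably recent Ceph releases, something like `ceph auth caps client.docker mon 'profile rbd' osd 'profile rbd pool=docker'` should grant this, though I haven't verified it against this exact setup.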


ghost commented Apr 18, 2020

I can remove the lock with admin:

2020-04-18 18:31:14.754741 7f9fb6efed40 -1 librbd: unable to blacklist client: (13) Permission denied
root@docker-rbd-plugin:~# rbd --pool docker --conf /etc/ceph/ceph.conf  --id admin lock rm 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7 docker-rbd-plugin.ceph-cluster.internal client.4288
root@docker-rbd-plugin:~# rbd --pool docker --conf /etc/ceph/ceph.conf  --id docker lock list 53a58086e266ca7518da24733ed85572162e1c708794c065d975f4b5a3d7ded7

I'll try changing back to using the admin account for the plugin to see why it wasn't working then


ghost commented Apr 18, 2020

lol

root@docker-rbd-plugin:~# rbd --pool docker --conf /etc/ceph/ceph.conf --id admin info 4280452e47d466aa3bf67f96aea1e00db568db20717e3fb3d276078d5902a645
rbd image '4280452e47d466aa3bf67f96aea1e00db568db20717e3fb3d276078d5902a645':
        size 20480 MB in 5120 objects
        order 22 (4096 kB objects)
        block_name_prefix: rbd_data.10ca74b0dc51
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        flags: 
root@docker-rbd-plugin:~# rbd --pool docker --conf /etc/ceph/ceph.conf --id admin map 4280452e47d466aa3bf67f96aea1e00db568db20717e3fb3d276078d5902a645
sh: 1: /sbin/modinfo: not found
sh: 1: /sbin/modprobe: not found
rbd: failed to load rbd kernel module (127)
rbd: sysfs write failed
In some cases useful info is found in syslog - try "dmesg | tail" or so.
rbd: map failed: (2) No such file or directory
root@docker-rbd-plugin:~# 
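(Editor's note: the missing `/sbin/modinfo` and `/sbin/modprobe` mean the plugin container can't load kernel modules itself — and kernel modules belong to the host kernel anyway, so the `rbd` module has to be loaded on the Docker host, which is what the next comment does. A sketch, assuming a Linux host with the rbd module available:)

```shell
# On the Docker host, not inside the plugin container:
sudo modprobe rbd
lsmod | grep '^rbd'   # confirm the module is loaded before retrying the map
```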


ghost commented Apr 18, 2020

Yeah, got past that with modprobe rbd on the host, but it looks like I have another problem now:

docker run -it --volume-driver ceph -v test ubuntu /bin/bash
docker: Error response from daemon: create c898fee44bc53bba7b258ed517c7c432a9c8a7c2f5aa5f46b25a360be75ce7d9: Post http://%2Frun%ne exceeded.

However, this one is likely due to how slow the computer I'm using is. I'll try raising the timeout to see, but I'm going to need this to work faster eventually...
