This repository has been archived by the owner on Jun 18, 2022. It is now read-only.

RBD driver support #15

Open
leseb opened this issue Aug 21, 2015 · 22 comments

Comments

@leseb

leseb commented Aug 21, 2015

Related projects:

@tomzo

tomzo commented Sep 13, 2015

It seems Ceph RBD would fit very well as a driver. I'd love to see support for it.

@pwFoo

pwFoo commented Sep 26, 2015

+1
Ceph driver would be great!

@pwFoo

pwFoo commented Oct 8, 2015

Any plans to add Ceph support? It would be important for some setups.

@tlvenn

tlvenn commented Oct 11, 2015

+1

@pwFoo

pwFoo commented Oct 15, 2015

Another Ceph plugin, but with its last commit 7 months ago: rbd-volume - NON-FUNCTIONAL
https://github.com/ceph/ceph-docker/tree/master/rbd-volume

@ps-account

+1
For those wanting to try Ceph with Docker without Convoy, look at the page below, which uses https://github.com/yp-engineering/rbd-docker-plugin. I tested it on Docker 1.9.0, where it seems to work fine.

http://www.sebastien-han.fr/blog/2015/08/17/getting-started-with-the-docker-rbd-volume-plugin/
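
For reference, a minimal sketch of the kind of invocation that blog post and the plugin describe, assuming rbd-docker-plugin is already running on the host and registered under the volume driver name "rbd"; the volume name and mount path are illustrative placeholders:

```sh
# Assumes rbd-docker-plugin is running on this host and exposes the volume
# driver name "rbd"; "myvolume" and /mnt/rbd are illustrative placeholders.
docker run -it --volume-driver=rbd -v myvolume:/mnt/rbd busybox sh
```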

@hekaldama

@pimpim please let us know how you like it and if you need any features from it.

@ps-account

@hekaldama I am currently trying out the yp-engineering plugin with a proof-of-concept Ceph installation (no authentication or users inside the Ceph cluster). Maybe I should give some feedback on their project page as well, but since you asked for it:

I really like the plugin's flexibility for creating volumes of a given size in Ceph: e.g. the "@512" option to create a 512 MB volume works great, and it can also make volumes larger than the default setting (20 GB in this case). It would be even nicer to support an M/G/T suffix, so volumes anywhere between megabytes and terabytes could be created without crazy-long numbers (see the sketch below).
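
A minimal sketch of that naming convention, assuming the "@<size in MB>" suffix is interpreted by the plugin exactly as described above; the volume name "scratch" and driver name "rbd" are illustrative:

```sh
# Assumes the plugin parses "scratch@512" as "an RBD image named 'scratch',
# 512 MB in size" in the pool it was started with.
docker volume create -d rbd --name scratch@512

# The same name can then be mounted into a container:
docker run --rm --volume-driver=rbd -v scratch@512:/data busybox df -h /data
```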

I also like that I can load several plugins per host, so I can use different pools simultaneously within the same Docker instance. I wonder if you could even use pools from different Ceph clusters, but then the host would have to be a client of multiple Ceph clusters, and I am not sure that is possible.

The options to delete a scratch volume directly after use or rename them for later cleanup look promising.

Something I would like to see, though maybe that is just my weird use case: if there are different users on a Docker host and each user corresponds to a different Ceph user, it would be nice to allow authentication at the user level during "docker run". Currently, all users with access to the Docker host effectively share the Ceph pool credentials given when the plugin is loaded on the host. But that is a similar issue to the one the NFS and CIFS plugins have.

@hekaldama

@pimpim added some tickets (yp-engineering/rbd-docker-plugin#11, yp-engineering/rbd-docker-plugin#12) to our project for your suggested enhancements.

As far as auth during docker run goes, it sounds like you are saying a similar thing to yp-engineering/rbd-docker-plugin#10? If so, please comment on that ticket with any other suggestions.

Thanks.

@ancientz

+1, it's a must for us; we currently have to pin services to hosts.

@magnus919

+1 this would be fantastic!

@wanghaisheng

Anything new?

@hwinkel

hwinkel commented Apr 1, 2016

+1, just getting a Ceph cluster up and running. Rancher support would be great.

@taketnoi

taketnoi commented Aug 4, 2016

+1 for RBD Ceph support in Convoy

@pkalemba

+1, any news about it?

@diogogmt

diogogmt commented Oct 3, 2016

Any plans to add support for Ceph on Convoy in the near future?

@feffi

feffi commented Oct 10, 2016

+2

@iakat

iakat commented Oct 12, 2016

+1

@iclouding

+1 for RBD Ceph support in Convoy

@bryanrossUK

+1

@pierreozoux

@ALL please, "+1" is so 2015... We are almost in 2017. Haven't you seen the "add your reaction" button yet? It is at the top right of each card!

@magnus919

@pierreozoux this issue is also 2015.
