Need advice, and maybe some shared experience, for ceph osd upgrade in containers #1714

Closed

LeoQuote opened this issue Aug 1, 2020 · 2 comments

LeoQuote commented Aug 1, 2020

Hi, I'm planning to upgrade my cluster from mimic to nautilus and have been studying the ceph-disk deprecation (in favour of ceph-volume) since last month, but I still cannot find a proper way to deal with it. As described in issue #1324, ceph-disk would fail, so I had to override the entrypoint and use ceph-volume to prepare and activate the OSD, like this (see #1713):

$ docker run --rm --privileged --net=host --ipc=host \
                    -v /run/lock/lvm:/run/lock/lvm:z \
                    -v /var/run/udev/:/var/run/udev/:z \
                    -v /dev:/dev -v /etc/ceph:/etc/ceph:z \
                    -v /run/lvm/:/run/lvm/ \
                    -v /var/lib/ceph/:/var/lib/ceph/:z \
                    -v /var/log/ceph/:/var/log/ceph/:z \
                    --entrypoint=ceph-volume \
                    docker.io/ceph/daemon:latest-octopus \
                    --cluster ceph lvm prepare --bluestore --data /dev/xxxxxx
# assuming the OSD id created is 0
$ docker run --rm --privileged --net=host --pid=host --ipc=host \
                    -v /dev:/dev \
                    -v /etc/localtime:/etc/localtime:ro \
                    -v /var/lib/ceph:/var/lib/ceph:z \
                    -v /etc/ceph:/etc/ceph:z \
                    -v /var/run/ceph:/var/run/ceph:z \
                    -v /var/run/udev/:/var/run/udev/ \
                    -v /var/log/ceph:/var/log/ceph:z \
                    -v /run/lvm/:/run/lvm/ \
                    -e CLUSTER=ceph \
                    -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
                    -e CONTAINER_IMAGE=docker.io/ceph/daemon:latest-octopus \
                    -e OSD_ID=0 \
                    --name=ceph-osd-0 \
                    docker.io/ceph/daemon:latest-octopus
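
(A quick aside: if it is not obvious which OSD id the prepare step created, ceph-volume lvm list can report it. This is only a sketch; the volume mounts are simply copied from the prepare command above.)

$ docker run --rm --privileged --net=host --ipc=host \
                    -v /run/lock/lvm:/run/lock/lvm:z \
                    -v /var/run/udev/:/var/run/udev/:z \
                    -v /dev:/dev -v /etc/ceph:/etc/ceph:z \
                    -v /run/lvm/:/run/lvm/ \
                    -v /var/lib/ceph/:/var/lib/ceph/:z \
                    --entrypoint=ceph-volume \
                    docker.io/ceph/daemon:latest-octopus \
                    lvm list
# each LV is listed with its "osd id" and "osd fsid" fields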

But it seems this is for a brand-new OSD rather than an existing OSD created previously under mimic. And according to the docs at https://docs.ceph.com/docs/master/releases/nautilus/#upgrading-from-mimic-or-luminous, "We recommend you avoid adding or replacing any OSDs while the upgrade is in progress", which means I cannot convert the existing OSDs to LVM-style OSDs during the upgrade process.

So what should I do to upgrade it?

What entrypoint should I use to start the ceph osd in nautilus?

Would it run OK just using the old way? Like:

docker run \
--net=host \
--name=ceph-osd \
--privileged=true \
--pid=host \
-v /etc/ceph:/etc/ceph \
-v /var/lib/ceph/:/var/lib/ceph/ \
-v /dev/:/dev/ \
-e CLUSTER=ceph \
-e OSD_DEVICE=/dev/sdb \
-e OSD_TYPE=disk \
ceph/daemon osd

I assume it will not start properly, because this code path still uses the deprecated ceph-disk command.
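
(One quick way to check that assumption, as a sketch only: look for the ceph-disk binary inside the nautilus image. The image tag here is my guess.)

$ docker run --rm --entrypoint=bash docker.io/ceph/daemon:latest-nautilus \
       -c 'command -v ceph-disk || echo "ceph-disk is not shipped in this image"'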

The Ceph documentation also mentions that you need ceph-volume simple scan and ceph-volume simple activate --all if you want to keep the ceph-disk OSDs as they are; I assume that means I need to run these inside the running mimic OSD container.
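
(For reference, a minimal sketch of running those two commands by hand inside the existing OSD container; "ceph-osd-0" and the data directory path for osd.0 are placeholders.)

$ docker exec ceph-osd-0 ceph-volume simple scan /var/lib/ceph/osd/ceph-0
$ docker exec ceph-osd-0 ceph-volume simple activate --all
# scan writes a JSON description of the OSD under /etc/ceph/osd/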

And another question:

Is the OSD id required to start an OSD daemon? In mimic the only parameter I would use was the raw device path. Is there a way to start the OSD daemon the old way, passing only the device path and having everything work? Is #1681 related to this question?
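
(Not an answer, just a sketch of how the id can be recovered from a ceph-disk device if it turns out to be required: the OSD data partition carries a whoami file. The partition number and mountpoint below are assumptions.)

$ mkdir -p /mnt/osd-probe
$ mount /dev/sdb1 /mnt/osd-probe     # the ceph-disk data partition is usually the first partition
$ cat /mnt/osd-probe/whoami          # prints the OSD id, e.g. 0
$ umount /mnt/osd-probe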

Environment:

  • OS (e.g. from /etc/os-release): gentoo
  • Kernel (e.g. uname -a):
  • Docker version (e.g. docker version):
  • Ceph version (e.g. ceph -v): mimic
@dsavineau
Contributor

Using the OSD_CEPH_VOLUME_ACTIVATE entrypoint with a ceph-disk based OSD will work, because this entrypoint is able to manage both ceph-volume and ceph-disk based OSDs.
When using a ceph-disk based OSD, it runs the ceph-volume simple scan/activate commands. [1][2]

$ docker run --rm --privileged --net=host --pid=host --ipc=host \
                    -v /dev:/dev \
                    -v /etc/localtime:/etc/localtime:ro \
                    -v /var/lib/ceph:/var/lib/ceph:z \
                    -v /etc/ceph:/etc/ceph:z \
                    -v /var/run/ceph:/var/run/ceph:z \
                    -v /var/run/udev/:/var/run/udev/ \
                    -v /var/log/ceph:/var/log/ceph:z \
                    -v /run/lvm/:/run/lvm/ \
                    -e CLUSTER=ceph \
                    -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
                    -e CONTAINER_IMAGE=docker.io/ceph/daemon:latest-nautilus \
                    -e OSD_ID=0 \
                    --name=ceph-osd-0 \
                    docker.io/ceph/daemon:latest-nautilus

[1] https://github.com/ceph/ceph-container/blob/stable-4.0/src/daemon/osd_scenarios/osd_volume_activate.sh#L85-L96
[2] https://github.com/ceph/ceph-container/blob/stable-4.0/src/daemon/osd_scenarios/osd_volume_activate.sh#L4-L46
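
(A quick way to confirm the ceph-disk path was picked up, checked from the host; the JSON files are what ceph-volume simple scan writes under /etc/ceph/osd/, the container name matches the run command above, and ceph -s assumes the admin keyring is present in /etc/ceph.)

$ ls /etc/ceph/osd/              # e.g. 0-<osd fsid>.json created by "ceph-volume simple scan"
$ docker logs ceph-osd-0         # entrypoint output showing the scan/activate steps
$ docker exec ceph-osd-0 ceph -s # osd.0 should be reported as up and in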

@LeoQuote
Author

LeoQuote commented Aug 4, 2020

Ahhh, I finished my upgrade on Tuesday. Thanks for your reply.

For upgrading from mimic, I switched from ceph-disk to ceph-volume first, by using the OSD_CEPH_VOLUME_ACTIVATE entrypoint and setting OSD_TYPE=simple and OSD_ID=0.

The final docker run command looks like the one provided by @dsavineau, with the OSD_TYPE variable added (sketched below).
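
(Roughly, that means the command above plus the extra variable; a sketch only, combining @dsavineau's command with the variables mentioned here.)

$ docker run --rm --privileged --net=host --pid=host --ipc=host \
                    -v /dev:/dev \
                    -v /etc/localtime:/etc/localtime:ro \
                    -v /var/lib/ceph:/var/lib/ceph:z \
                    -v /etc/ceph:/etc/ceph:z \
                    -v /var/run/ceph:/var/run/ceph:z \
                    -v /var/run/udev/:/var/run/udev/ \
                    -v /var/log/ceph:/var/log/ceph:z \
                    -v /run/lvm/:/run/lvm/ \
                    -e CLUSTER=ceph \
                    -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
                    -e CONTAINER_IMAGE=docker.io/ceph/daemon:latest-nautilus \
                    -e OSD_TYPE=simple \
                    -e OSD_ID=0 \
                    --name=ceph-osd-0 \
                    docker.io/ceph/daemon:latest-nautilus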

Then you can follow the upgrade manual to upgrade all the Ceph components. Just replace the images and everything works.
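
(For anyone landing here later, a rough sketch of the documented order from the nautilus upgrade notes linked earlier; the exact restart mechanics depend on how each daemon container is managed.)

$ ceph osd set noout                      # keep the cluster from rebalancing while daemons restart
# ...restart the mon containers one at a time on the nautilus image...
# ...then the mgr containers, then each osd container as described above...
$ ceph versions                           # confirm every daemon reports a nautilus (14.2.x) version
$ ceph osd require-osd-release nautilus   # once all OSDs are upgraded, disallow pre-nautilus OSDs
$ ceph osd unset noout
$ ceph mon enable-msgr2                   # optional: enable the v2 wire protocol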
