Need advice, and maybe some shares for ceph osd upgrade in containers #1714
Using the following command [1]:

```shell
docker run --rm --privileged --net=host --pid=host --ipc=host \
  -v /dev:/dev \
  -v /etc/localtime:/etc/localtime:ro \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -v /run/lvm/:/run/lvm/ \
  -e CLUSTER=ceph \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e CONTAINER_IMAGE=docker.io/ceph/daemon:latest-nautilus \
  -e OSD_ID=0 \
  --name=ceph-osd-0 \
  docker.io/ceph/daemon:latest-nautilus
```

[1] https://github.com/ceph/ceph-container/blob/stable-4.0/src/daemon/osd_scenarios/osd_volume_activate.sh#L85-L96
Ahhh, I finished my upgrade on Tuesday, thanks for your reply. For upgrading from mimic, I switched from ceph-disk to ceph-volume first, by using the `osd_ceph_volume_activate` entrypoint and setting `OSD_TYPE=simple` and `OSD_ID=0`. The final `docker run` command looks like the one provided by @dsavineau. Then you can follow the upgrade manual to upgrade all ceph components. Just replace the images and everything works.
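The switch described above can be sketched roughly as follows. This is an assumption-laden example, not a verified recipe: the bind mounts mirror the command quoted earlier in the thread, and `OSD_ID=0` / `ceph-osd-0` are placeholders, so run one container per OSD id on the host:

```shell
# Hedged sketch: start an existing ceph-disk OSD through ceph-volume's
# "simple" mode via the osd_ceph_volume_activate entrypoint.
# OSD_ID=0 and the container name are examples; adjust per OSD.
docker run -d --privileged --net=host --pid=host --ipc=host \
  -v /dev:/dev \
  -v /etc/ceph:/etc/ceph:z \
  -v /var/lib/ceph:/var/lib/ceph:z \
  -v /var/run/ceph:/var/run/ceph:z \
  -v /var/run/udev/:/var/run/udev/ \
  -v /var/log/ceph:/var/log/ceph:z \
  -v /run/lvm/:/run/lvm/ \
  -e CLUSTER=ceph \
  -e CEPH_DAEMON=OSD_CEPH_VOLUME_ACTIVATE \
  -e OSD_TYPE=simple \
  -e OSD_ID=0 \
  --name=ceph-osd-0 \
  docker.io/ceph/daemon:latest-nautilus
```

The only functional difference from the command quoted above is `-e OSD_TYPE=simple`, which tells the entrypoint to activate a scanned ceph-disk OSD rather than an LVM one.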
Hi, I'm planning to upgrade my cluster from mimic to nautilus and have been studying the ceph-disk deprecation thing since last month, but I still cannot find a proper way to deal with it. As in issue #1324, `ceph-disk` would fail, so I had to edit the entrypoint to use ceph-volume to prepare and activate the osd, like in #1713. But that seems to be for brand-new osds instead of existing osds created previously under mimic. And according to the doc https://docs.ceph.com/docs/master/releases/nautilus/#upgrading-from-mimic-or-luminous:

> We recommend you avoid adding or replacing any OSDs while the upgrade is in progress.

meaning I cannot replace the existing osds with `lvm`-style osds during the upgrade process. So what should I do to upgrade?
What entrypoint should I use to start a ceph osd in nautilus? Would it run OK just using the old way? I assume it will not start properly, since that code still uses the deprecated ceph-disk command.
The ceph documentation also mentions that you need `ceph-volume simple scan` and `ceph-volume simple activate --all` if you want to keep the ceph-disk osds as they are; I assume that means I need to run these inside the running mimic osd container. And another question:
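For reference, the two commands mentioned above could be run against a live container roughly like this. The container name `ceph-osd-0` is an assumption; substitute whatever your osd container is called:

```shell
# Hedged sketch: take over ceph-disk OSDs with ceph-volume "simple" mode
# from inside an already-running osd container.
# scan discovers running ceph-disk OSDs and writes one JSON metadata file
# per OSD under /etc/ceph/osd/; activate --all then starts them from that
# metadata instead of relying on ceph-disk/udev.
docker exec ceph-osd-0 ceph-volume simple scan
docker exec ceph-osd-0 ceph-volume simple activate --all
```

Since `/etc/ceph` is bind-mounted from the host in these containers, the generated JSON files survive the container being replaced during the upgrade.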
Is the osd id required to start an osd daemon? In mimic, the only parameter I would use was the raw device path. Is there a way to start the osd daemon the old way, passing only the device path and having everything work? Is #1681 related to this question?
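On the osd-id question: one way to avoid tracking ids by hand is to read them back from the metadata that `ceph-volume simple scan` leaves behind, since each OSD gets a JSON file named `<osd_id>-<osd_fsid>.json` under `/etc/ceph/osd/`. A minimal sketch (the helper name and the overridable directory argument are illustrative, not part of any ceph tooling):

```shell
# Hedged sketch: recover osd ids from ceph-volume simple scan's output.
# Each scanned OSD produces /etc/ceph/osd/<osd_id>-<osd_fsid>.json,
# so the id is the part of the filename before the first dash.
list_osd_ids() {
  local dir=${1:-/etc/ceph/osd}
  local f
  for f in "$dir"/*.json; do
    [ -e "$f" ] || return 0  # no scan output yet: print nothing
    printf 'osd.%s -> %s\n' "$(basename "$f" | cut -d- -f1)" "$f"
  done
}

list_osd_ids  # on a scanned host, prints one line per OSD
```

Each printed id could then feed `-e OSD_ID=...` when launching the per-OSD containers.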
Environment:
* Kernel (`uname -a`):
* Docker version (`docker version`):
* Ceph version (`ceph -v`): mimic