
How do I verify that my rook-ceph is using bluestore #14231

Open
kubecto opened this issue May 17, 2024 · 7 comments


kubecto commented May 17, 2024

Is this a bug report or feature request?

  • Bug Report
This is my Rook cluster:
kubectl get po -n rook-ceph
NAME                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-bgjmr                            2/2     Running     0          20h
csi-cephfsplugin-kg8vv                            2/2     Running     0          20h
csi-cephfsplugin-provisioner-54b6c886c7-ldhjr     5/5     Running     0          20h
csi-cephfsplugin-provisioner-54b6c886c7-rgbp9     5/5     Running     0          20h
csi-cephfsplugin-xj4rr                            2/2     Running     0          20h
csi-rbdplugin-698v4                               2/2     Running     0          20h
csi-rbdplugin-82ws6                               2/2     Running     0          20h
csi-rbdplugin-cb8fm                               2/2     Running     0          20h
csi-rbdplugin-provisioner-5685d999c4-sg5q7        5/5     Running     0          20h
csi-rbdplugin-provisioner-5685d999c4-z788l        5/5     Running     0          20h
rook-ceph-crashcollector-node1-677774f9ff-7bt4w   1/1     Running     0          20h
rook-ceph-crashcollector-node2-645cd8c45d-97dth   1/1     Running     0          20h
rook-ceph-crashcollector-node3-55548c4d64-5drdl   1/1     Running     0          20h
rook-ceph-mds-myfs-a-84f498fdbc-r2m7j             2/2     Running     0          20h
rook-ceph-mds-myfs-b-567cfbc5d6-cwjhn             2/2     Running     0          20h
rook-ceph-mgr-a-cc4c7df5-vdv5l                    3/3     Running     0          20h
rook-ceph-mgr-b-6bdd7c98-m54d5                    3/3     Running     0          20h
rook-ceph-mon-a-67457d67f7-wp6kl                  2/2     Running     0          20h
rook-ceph-mon-b-6d5ffbc847-k5czn                  2/2     Running     0          20h
rook-ceph-mon-c-54568558f5-zlcxc                  2/2     Running     0          20h
rook-ceph-operator-6fc6c6d985-f69w7               1/1     Running     0          20h
rook-ceph-osd-0-7697d45dbc-hnmvx                  2/2     Running     0          20h
rook-ceph-osd-1-5dcc6c8c44-nt8n7                  2/2     Running     0          20h
rook-ceph-osd-2-6444ccfc58-xch8h                  2/2     Running     0          20h
rook-ceph-osd-prepare-node1-mqhtm                 0/1     Completed   0          158m
rook-ceph-osd-prepare-node2-v8tkx                 0/1     Completed   0          158m
rook-ceph-osd-prepare-node3-hrjd9                 0/1     Completed   0          158m
rook-ceph-tools-operator-image-5d66b99dc-bv2hl    1/1     Running     0          20h
rook-direct-mount-544dc659cb-2b9wx                1/1     Running     0          19h

I used three nodes as Ceph nodes, with sdb as the Ceph data disk on each. But why can't I see any partitions in use by Ceph in lsblk after installing Rook?

lsblk
NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda               8:0    0   150G  0 disk
├─sda1            8:1    0     1G  0 part /boot
└─sda2            8:2    0   149G  0 part
  ├─centos-root 253:0    0 141.1G  0 lvm  /
  └─centos-swap 253:1    0   7.9G  0 lvm
sdb               8:16   0    16G  0 disk
sr0              11:0    1  1024M  0 rom

I can see from the rook-ceph-osd-0 prepare logs that the OSD is already configured as bluestore and uses sdb:

2024-05-17 03:47:54.988399 I | cephosd: skipping device "sda1" with mountpoint "boot"
2024-05-17 03:47:54.988407 I | cephosd: skipping device "sda2" because it contains a filesystem "LVM2_member"
2024-05-17 03:47:54.988414 I | cephosd: old lsblk can't detect bluestore signature, so try to detect here
2024-05-17 03:47:54.988460 I | cephosd: skipping device "sdb", detected an existing OSD. UUID=b724f78b-69ad-4566-8df1-0c13f80a7e49
2024-05-17 03:47:54.988465 I | cephosd: skipping device "dm-0" with mountpoint "rootfs"
2024-05-17 03:47:54.988468 I | cephosd: skipping device "dm-1" because it contains a filesystem "swap"
2024-05-17 03:47:54.993148 I | cephosd: configuring osd devices: {"Entries":{}}
2024-05-17 03:47:54.993163 I | cephosd: no new devices to configure. returning devices already configured with ceph-volume.
2024-05-17 03:47:54.993378 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log lvm list  --format json
2024-05-17 03:47:55.379137 D | cephosd: {}
2024-05-17 03:47:55.379169 I | cephosd: 0 ceph-volume lvm osd devices configured on this node
2024-05-17 03:47:55.379193 D | exec: Running command: stdbuf -oL ceph-volume --log-path /tmp/ceph-log raw list --format json
2024-05-17 03:47:55.907104 D | cephosd: {
    "b724f78b-69ad-4566-8df1-0c13f80a7e49": {
        "ceph_fsid": "a096e96e-a1db-46e4-92dd-4093b5cce441",
        "device": "/dev/sdb",
        "osd_id": 0,
        "osd_uuid": "b724f78b-69ad-4566-8df1-0c13f80a7e49",
        "type": "bluestore"
    }
}
2024-05-17 03:47:55.907200 D | exec: Running command: lsblk /dev/sdb --bytes --nodeps --pairs --paths --output SIZE,ROTA,RO,TYPE,PKNAME,NAME,KNAME,MOUNTPOINT,FSTYPE
2024-05-17 03:47:55.911838 D | sys: lsblk output: "SIZE=\"17179869184\" ROTA=\"1\" RO=\"0\" TYPE=\"disk\" PKNAME=\"\" NAME=\"/dev/sdb\" KNAME=\"/dev/sdb\" MOUNTPOINT=\"\" FSTYPE=\"\""
2024-05-17 03:47:55.911877 D | exec: Running command: sgdisk --print /dev/sdb
2024-05-17 03:47:55.915098 I | cephosd: setting device class "hdd" for device "/dev/sdb"
2024-05-17 03:47:55.915114 I | cephosd: 1 ceph-volume raw osd devices configured on this node
2024-05-17 03:47:55.915139 I | cephosd: devices = [{ID:0 Cluster:ceph UUID:b724f78b-69ad-4566-8df1-0c13f80a7e49 DevicePartUUID: DeviceClass:hdd BlockPath:/dev/sdb MetadataPath: WalPath: SkipLVRelease:true Location:root=default host=node1 LVBackedPV:false CVMode:raw Store:bluestore TopologyAffinity: Encrypted:false}]

I can see the status of the OSDs, and they do reflect the size of the disks on my three nodes, but why does lsblk show no trace of Ceph's usage?

 ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME       STATUS  REWEIGHT  PRI-AFF
-1         0.04678  root default
-3         0.01559      host node1
 0    hdd  0.01559          osd.0       up   1.00000  1.00000
-7         0.01559      host node2
 1    hdd  0.01559          osd.1       up   1.00000  1.00000
-5         0.01559      host node3
 2    hdd  0.01559          osd.2       up   1.00000  1.00000
[rook@rook-ceph-tools-operator-image-5d66b99dc-bv2hl /]$ ceph osd status
ID  HOST    USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  node1  62.1M  15.9G      0        0       2      106   exists,up
 1  node2  62.0M  15.9G      0        0       0        0   exists,up
 2  node3  62.0M  15.9G      0        0       0        0   exists,up

This is my cluster.yaml:

# cat cluster.yaml
#################################################################################################################
# Define the settings for the rook-ceph cluster with common settings for a production cluster.
# All nodes with available raw devices will be used for the Ceph cluster. At least three nodes are required
# in this example. See the documentation for more details on storage settings available.

# For example, to create the cluster:
#   kubectl create -f crds.yaml -f common.yaml -f operator.yaml
#   kubectl create -f cluster.yaml
#################################################################################################################

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph # namespace:cluster
spec:
  cephVersion:
    # The container image used to launch the Ceph daemon pods (mon, mgr, osd, mds, rgw).
    # v16 is Pacific, and v17 is Quincy.
    # RECOMMENDATION: In production, use a specific version tag instead of the general v17 flag, which pulls the latest release and could result in different
    # versions running within the cluster. See tags available at https://hub.docker.com/r/ceph/ceph/tags/.
    # If you want to be more precise, you can always use a timestamp tag such quay.io/ceph/ceph:v17.2.3-20220805
    # This tag might not contain a new Ceph version, just security fixes from the underlying operating system, which will reduce vulnerabilities
    image: quay.io/ceph/ceph:v17.2.5
    # Whether to allow unsupported versions of Ceph. Currently `pacific` and `quincy` are supported.
    # Future versions such as `reef` (v18) would require this to be set to `true`.
    # Do not set to true in production.
    allowUnsupported: false
  # The path on the host where configuration files will be persisted. Must be specified.
  # Important: if you reinstall the cluster, make sure you delete this directory from each host or else the mons will fail to start on the new cluster.
  # In Minikube, the '/data' directory is configured to persist across reboots. Use "/data/rook" in Minikube environment.
  dataDirHostPath: /var/lib/rook
  # Whether or not upgrade should continue even if a check fails
  # This means Ceph's status could be degraded and we don't recommend upgrading but you might decide otherwise
  # Use at your OWN risk
  # To understand Rook's upgrade process of Ceph, read https://rook.io/docs/rook/latest/ceph-upgrade.html#ceph-version-upgrades
  skipUpgradeChecks: false
  # Whether or not continue if PGs are not clean during an upgrade
  continueUpgradeAfterChecksEvenIfNotHealthy: false
  # WaitTimeoutForHealthyOSDInMinutes defines the time (in minutes) the operator would wait before an OSD can be stopped for upgrade or restart.
  # If the timeout exceeds and OSD is not ok to stop, then the operator would skip upgrade for the current OSD and proceed with the next one
  # if `continueUpgradeAfterChecksEvenIfNotHealthy` is `false`. If `continueUpgradeAfterChecksEvenIfNotHealthy` is `true`, then operator would
  # continue with the upgrade of an OSD even if its not ok to stop after the timeout. This timeout won't be applied if `skipUpgradeChecks` is `true`.
  # The default wait timeout is 10 minutes.
  waitTimeoutForHealthyOSDInMinutes: 10
  mon:
    # Set the number of mons to be started. Generally recommended to be 3.
    # For highest availability, an odd number of mons should be specified.
    count: 3
    # The mons should be on unique nodes. For production, at least 3 nodes are recommended for this reason.
    # Mons should only be allowed on the same node for test environments where data loss is acceptable.
    allowMultiplePerNode: false
  mgr:
    # When higher availability of the mgr is needed, increase the count to 2.
    # In that case, one mgr will be active and one in standby. When Ceph updates which
    # mgr is active, Rook will update the mgr services to match the active mgr.
    count: 2
    allowMultiplePerNode: false
    modules:
      # Several modules should not need to be included in this list. The "dashboard" and "monitoring" modules
      # are already enabled by other settings in the cluster CR.
      - name: pg_autoscaler
        enabled: true
  # enable the ceph dashboard for viewing cluster status
  dashboard:
    enabled: true
    # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
    # urlPrefix: /ceph-dashboard
    # serve the dashboard at the given port.
    # port: 8443
    # serve the dashboard using SSL
    ssl: true
  # enable prometheus alerting for cluster
  monitoring:
    # requires Prometheus to be pre-installed
    enabled: false
  network:
    connections:
      # Whether to encrypt the data in transit across the wire to prevent eavesdropping the data on the network.
      # The default is false. When encryption is enabled, all communication between clients and Ceph daemons, or between Ceph daemons will be encrypted.
      # When encryption is not enabled, clients still establish a strong initial authentication and data integrity is still validated with a crc check.
      # IMPORTANT: Encryption requires the 5.11 kernel for the latest nbd and cephfs drivers. Alternatively for testing only,
      # you can set the "mounter: rbd-nbd" in the rbd storage class, or "mounter: fuse" in the cephfs storage class.
      # The nbd and fuse drivers are *not* recommended in production since restarting the csi driver pod will disconnect the volumes.
      encryption:
        enabled: false
      # Whether to compress the data in transit across the wire. The default is false.
      # Requires Ceph Quincy (v17) or newer. Also see the kernel requirements above for encryption.
      compression:
        enabled: false
    # enable host networking
    #provider: host
    # enable the Multus network provider
    #provider: multus
    #selectors:
      # The selector keys are required to be `public` and `cluster`.
      # Based on the configuration, the operator will do the following:
      #   1. if only the `public` selector key is specified both public_network and cluster_network Ceph settings will listen on that interface
      #   2. if both `public` and `cluster` selector keys are specified the first one will point to 'public_network' flag and the second one to 'cluster_network'
      #
      # In order to work, each selector value must match a NetworkAttachmentDefinition object in Multus
      #
      #public: public-conf --> NetworkAttachmentDefinition object name in Multus
      #cluster: cluster-conf --> NetworkAttachmentDefinition object name in Multus
    # Provide internet protocol version. IPv6, IPv4 or empty string are valid options. Empty string would mean IPv4
    #ipFamily: "IPv6"
    # Ceph daemons to listen on both IPv4 and Ipv6 networks
    #dualStack: false
  # enable the crash collector for ceph daemon crash collection
  crashCollector:
    disable: false
    # Uncomment daysToRetain to prune ceph crash entries older than the
    # specified number of days.
    #daysToRetain: 30
  # enable log collector, daemons will log on files and rotate
  logCollector:
    enabled: true
    periodicity: daily # one of: hourly, daily, weekly, monthly
    maxLogSize: 500M # SUFFIX may be 'M' or 'G'. Must be at least 1M.
  # automate [data cleanup process](https://github.com/rook/rook/blob/master/Documentation/Storage-Configuration/ceph-teardown.md#delete-the-data-on-hosts) in cluster destruction.
  cleanupPolicy:
    # Since cluster cleanup is destructive to data, confirmation is required.
    # To destroy all Rook data on hosts during uninstall, confirmation must be set to "yes-really-destroy-data".
    # This value should only be set when the cluster is about to be deleted. After the confirmation is set,
    # Rook will immediately stop configuring the cluster and only wait for the delete command.
    # If the empty string is set, Rook will not destroy any data on hosts during uninstall.
    confirmation: ""
    # sanitizeDisks represents settings for sanitizing OSD disks on cluster deletion
    sanitizeDisks:
      # method indicates if the entire disk should be sanitized or simply ceph's metadata
      # in both case, re-install is possible
      # possible choices are 'complete' or 'quick' (default)
      method: quick
      # dataSource indicate where to get random bytes from to write on the disk
      # possible choices are 'zero' (default) or 'random'
      # using random sources will consume entropy from the system and will take much more time then the zero source
      dataSource: zero
      # iteration overwrite N times instead of the default (1)
      # takes an integer value
      iteration: 1
    # allowUninstallWithVolumes defines how the uninstall should be performed
    # If set to true, cephCluster deletion does not wait for the PVs to be deleted.
    allowUninstallWithVolumes: false
  # To control where various services will be scheduled by kubernetes, use the placement configuration sections below.
  # The example under 'all' would have all services scheduled on kubernetes nodes labeled with 'role=storage-node' and
  # tolerate taints with a key of 'storage-node'.
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: node-role.kubernetes.io/storage-node
              operator: In
              values:
              - storage-node
      podAffinity:
      podAntiAffinity:
      topologySpreadConstraints:
      tolerations:
      - key: node-role.kubernetes.io/storage-node
        operator: Exists
# The above placement information can also be specified for mon, osd, and mgr components
#    mon:
# Monitor deployments may contain an anti-affinity rule for avoiding monitor
# collocation on the same node. This is a required rule when host network is used
# or when AllowMultiplePerNode is false. Otherwise this anti-affinity rule is a
# preferred rule with weight: 50.
#    osd:
#    prepareosd:
#    mgr:
#    cleanup:
  annotations:
#    all:
#    mon:
#    osd:
#    cleanup:
#    prepareosd:
# clusterMetadata annotations will be applied to only `rook-ceph-mon-endpoints` configmap and the `rook-ceph-mon` and `rook-ceph-admin-keyring` secrets.
# And clusterMetadata annotations will not be merged with `all` annotations.
#    clusterMetadata:
#       kubed.appscode.com/sync: "true"
# If no mgr annotations are set, prometheus scrape annotations will be set by default.
#    mgr:
  labels:
#    all:
#    mon:
#    osd:
#    cleanup:
#    mgr:
#    prepareosd:
# monitoring is a list of key-value pairs. It is injected into all the monitoring resources created by operator.
# These labels can be passed as LabelSelector to Prometheus
#    monitoring:
#    crashcollector:
  resources:
# The requests and limits set here, allow the mgr pod to use half of one CPU core and 1 gigabyte of memory
#    mgr:
#      limits:
#        cpu: "500m"
#        memory: "1024Mi"
#      requests:
#        cpu: "500m"
#        memory: "1024Mi"
# The above example requests/limits can also be added to the other components
#    mon:
#    osd:
# For OSD it also is a possible to specify requests/limits based on device class
#    osd-hdd:
#    osd-ssd:
#    osd-nvme:
#    prepareosd:
#    mgr-sidecar:
#    crashcollector:
#    logcollector:
#    cleanup:
  # The option to automatically remove OSDs that are out and are safe to destroy.
  removeOSDsIfOutAndSafeToRemove: false
  priorityClassNames:
    #all: rook-ceph-default-priority-class
    mon: system-node-critical
    osd: system-node-critical
    mgr: system-cluster-critical
    #crashcollector: rook-ceph-crashcollector-priority-class
  storage: # cluster level storage configuration and selection
    useAllNodes: true
    useAllDevices: true
    #deviceFilter:
    config:
      # crushRoot: "custom-root" # specify a non-default root label for the CRUSH map
      # metadataDevice: "md0" # specify a non-rotational storage so ceph-volume will use it as block db device of bluestore.
      # databaseSizeMB: "1024" # uncomment if the disks are smaller than 100 GB
      # journalSizeMB: "1024"  # uncomment if the disks are 20 GB or smaller
      # osdsPerDevice: "1" # this value can be overridden at the node or device level
      # encryptedDevice: "true" # the default value for this option is "false"
# Individual nodes and their config can be specified as well, but 'useAllNodes' above must be set to false. Then, only the named
# nodes below will be used as storage resources.  Each node's 'name' field should match their 'kubernetes.io/hostname' label.
    nodes:
      - name: "10.102.28.61"
        devices: # specific devices to use for storage can be specified for each node
          - name: "sdb"
      - name: "10.102.28.62"
        devices: # specific devices to use for storage can be specified for each node
          - name: "sdb"
      - name: "10.102.28.63"
        devices: # specific devices to use for storage can be specified for each node
          - name: "sdb"
    #       - name: "nvme01" # multiple osds can be created on high performance devices
    #         config:
    #           osdsPerDevice: "5"
    #       - name: "/dev/disk/by-id/ata-ST4000DM004-XXXX" # devices can be specified using full udev paths
    #     config: # configuration can be specified at the node level which overrides the cluster level config
    #   - name: "172.17.4.301"
    #     deviceFilter: "^sd."
    # when onlyApplyOSDPlacement is false, will merge both placement.All() and placement.osd
    onlyApplyOSDPlacement: false
  # The section for configuring management of daemon disruptions during upgrade or fencing.
  disruptionManagement:
    # If true, the operator will create and manage PodDisruptionBudgets for OSD, Mon, RGW, and MDS daemons. OSD PDBs are managed dynamically
    # via the strategy outlined in the [design](https://github.com/rook/rook/blob/master/design/ceph/ceph-managed-disruptionbudgets.md). The operator will
    # block eviction of OSDs by default and unblock them safely when drains are detected.
    managePodBudgets: true
    # A duration in minutes that determines how long an entire failureDomain like `region/zone/host` will be held in `noout` (in addition to the
    # default DOWN/OUT interval) when it is draining. This is only relevant when  `managePodBudgets` is `true`. The default value is `30` minutes.
    osdMaintenanceTimeout: 30
    # A duration in minutes that the operator will wait for the placement groups to become healthy (active+clean) after a drain was completed and OSDs came back up.
    # Operator will continue with the next drain if the timeout exceeds. It only works if `managePodBudgets` is `true`.
    # No values or 0 means that the operator will wait until the placement groups are healthy before unblocking the next drain.
    pgHealthCheckTimeout: 0
    # If true, the operator will create and manage MachineDisruptionBudgets to ensure OSDs are only fenced when the cluster is healthy.
    # Only available on OpenShift.
    manageMachineDisruptionBudgets: false
    # Namespace in which to watch for the MachineDisruptionBudgets.
    machineDisruptionBudgetNamespace: openshift-machine-api

  # healthChecks
  # Valid values for daemons are 'mon', 'osd', 'status'
  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: false
        interval: 60s
    # Change pod liveness probe timing or threshold values. Works for all mon,mgr,osd daemons.
    livenessProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false
    # Change pod startup probe timing or threshold values. Works for all mon,mgr,osd daemons.
    startupProbe:
      mon:
        disabled: false
      mgr:
        disabled: false
      osd:
        disabled: false

I am not sure how to manage this in Rook, since I have three data disks. Is the current setup correct? Also, production Ceph deployments require log disks; how should I plan log disks for the three nodes and declare them in cluster.yaml?

Deviation from expected behavior:
I would like the following explained:

  1. Is my current Rook cluster using bluestore correctly, and why can't lsblk see the disk partitions used by Ceph?
  2. How should I declare a bluestore log disk in cluster.yaml? I don't understand the official documentation; can you show me directly how to adjust my cluster.yaml?

Expected behavior:

How to reproduce it (minimal and precise):

File(s) to submit:

  • Cluster CR (custom resource), typically called cluster.yaml, if necessary

Logs to submit:

  • Operator's logs, if necessary

  • Crashing pod(s) logs, if necessary

    To get logs, use kubectl -n <namespace> logs <pod name>
    When pasting logs, always surround them with backticks or use the insert code button from the Github UI.
    Read GitHub documentation if you need help.

Cluster Status to submit:

  • Output of kubectl commands, if necessary

    To get the health of the cluster, use kubectl rook-ceph health
    To get the status of the cluster, use kubectl rook-ceph ceph status
    For more details, see the Rook kubectl Plugin

Environment:

  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Cloud provider or hardware configuration:
  • Rook version (use rook version inside of a Rook Pod):rook-1.10.12
  • Storage backend version (e.g. for ceph do ceph -v): ceph version 17.2.5
  • Kubernetes version (use kubectl version):1.28.6
  • Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
  • Storage backend status (e.g. for Ceph use ceph health in the Rook Ceph toolbox):
kubecto added the bug label May 17, 2024

travisn commented May 17, 2024

Rook only supports bluestore, so you can be sure the OSDs are all running bluestore. Rook creates OSDs with ceph-volume in "raw" mode, which means the device or partition is consumed directly, so no evidence of it appears in lsblk.
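If you want to double-check the store type yourself, the `ceph-volume raw list --format json` output in your prepare log already carries it (and note the log's line about old lsblk: newer util-linux/blkid versions can report an FSTYPE of `ceph_bluestore` on such disks). A minimal sketch that parses the sample JSON copied from your log above; on a live cluster you would capture this output from the OSD or prepare pod:

```python
import json

# Sample output of `ceph-volume raw list --format json`,
# copied verbatim from the osd-prepare log above.
raw_list = json.loads("""
{
    "b724f78b-69ad-4566-8df1-0c13f80a7e49": {
        "ceph_fsid": "a096e96e-a1db-46e4-92dd-4093b5cce441",
        "device": "/dev/sdb",
        "osd_id": 0,
        "osd_uuid": "b724f78b-69ad-4566-8df1-0c13f80a7e49",
        "type": "bluestore"
    }
}
""")

# Each entry reports the objectstore type directly in its "type" field.
for uuid, osd in raw_list.items():
    print(f"osd.{osd['osd_id']} on {osd['device']}: {osd['type']}")
    assert osd["type"] == "bluestore"
```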


kubecto commented May 20, 2024

A Ceph deployment requires a log disk and a data disk; how do I distinguish between these?


kubecto commented May 20, 2024

I tried creating a new partition /dev/sdb1 on /dev/sdb, and it did not affect Ceph's use of the disk. But in that case, what if someone else uses this disk? They may not even know it is already in use by rook-ceph, since it is consumed at the block level. Nothing indicates the data disk is in use, which does not seem very friendly.


travisn commented May 20, 2024

A Ceph deployment requires a log disk and a data disk; how do I distinguish between these?

Which disks do you mean? Each OSD only requires one disk.


kubecto commented May 21, 2024

https://www.ibm.com/docs/en/storage-ceph/5?topic=bluestore-ceph-devices

This page says you can use multiple devices: a log (WAL) device, a DB device, and the data device.


travisn commented May 21, 2024

https://www.ibm.com/docs/en/storage-ceph/5?topic=bluestore-ceph-devices

This page says you can use multiple devices: a log (WAL) device, a DB device, and the data device.

Yes, that is an option; it is just not the default. Try searching the Rook docs for "metadataDevice".
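For example, building on the commented `# metadataDevice: "md0"` line already present in your cluster.yaml, a sketch of what this could look like. This assumes each node has a spare fast device at `sdc` (a hypothetical device name; adjust to your hardware) to hold the bluestore DB/WAL:

```yaml
storage:
  useAllNodes: false
  useAllDevices: false
  config:
    # Hypothetical SSD/NVMe device; ceph-volume will place the bluestore
    # DB (and WAL) here instead of on the data disk.
    metadataDevice: "sdc"
  nodes:
    - name: "10.102.28.61"
      devices:
        - name: "sdb" # bluestore data device
    - name: "10.102.28.62"
      devices:
        - name: "sdb"
    - name: "10.102.28.63"
      devices:
        - name: "sdb"
```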


kubecto commented May 22, 2024

I found the documentation here:

https://rook.io/docs/rook/latest-release/CRDs/Cluster/host-cluster/?h=metadatadevice#specific-nodes-and-devices

According to the comments there, there is no explanation of how to distinguish between data disks, log disks, and DB disks; the example only shows an additional partition and a udev device path.
