
How to deploy /dev/sdb and /dev/sdc using only /dev/nvme0n1 as the db device? #1763

Open
akumacxd opened this issue Sep 20, 2019 · 5 comments

akumacxd commented Sep 20, 2019

Description of Issue/Question

How can I deploy /dev/sdb and /dev/sdc while using only /dev/nvme0n1 as the db device? In other words, I want to use a single NVMe device as the db device.

In addition, drive groups always pick up the system disk /dev/sda. Is there a way to ignore a particular disk?
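A rough sketch of the intended layout (device sizes as listed below; the size-range filter and limit are only examples of filters that appear later in this thread, and the group name is arbitrary):

drive_group_single_nvme_db:
  target: 'I@roles:storage'
  data_devices:
    rotational: 1
    size: '9GB:12GB'        # match only the 10 GB HDDs, so the 20 GB OS disk is skipped
  db_devices:
    rotational: 0
    limit: 1                # use only one of the NVMe devices for block.db
  block_db_size: '2G'

Note that limit only caps how many matching NVMe devices are used; it does not pin a specific device such as /dev/nvme0n1.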

2 hdds
Vendor: VMware
Model: VMware Virtual S
Size: 10GB

3 NVMES:
Vendor: VMware
Model: VMware Virtual NVMe Disk
Size: 20GB

lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─vgoo-lvswap 254:0    0    2G  0 lvm  [SWAP]
  └─vgoo-lvroot 254:1    0   17G  0 lvm  /
sdb               8:16   0   10G  0 disk 
sdc               8:32   0   10G  0 disk 
sr0              11:0    1 1024M  0 rom  
nvme0n1         259:0    0   20G  0 disk   <===== only this as db_device
nvme0n2         259:1    0   20G  0 disk   
nvme0n3         259:2    0   20G  0 disk 

Setup


cat /srv/salt/ceph/configuration/files/drive_groups.yml
drive_group_hdd_nvme:
  target: 'I@roles:storage'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  block_db_size: '2G'

The disks report shows that two NVMe disks would be used:

salt-run disks.report
  node003.example.com:
      |_
        - 2
        - usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
                                       [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
                                       [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
                                       [--no-auto] [--bluestore] [--filestore]
                                       [--report] [--yes] [--format {json,pretty}]
                                       [--dmcrypt]
                                       [--crush-device-class CRUSH_DEVICE_CLASS]
                                       [--no-systemd]
                                       [--osds-per-device OSDS_PER_DEVICE]
                                       [--block-db-size BLOCK_DB_SIZE]
                                       [--block-wal-size BLOCK_WAL_SIZE]
                                       [--journal-size JOURNAL_SIZE] [--prepare]
                                       [--osd-ids [OSD_IDS [OSD_IDS ...]]]
                                       [DEVICES [DEVICES ...]]
          ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
      |_
        - 0
        - 
          Total OSDs: 1
          
          Solid State VG:
            Targets:   block.db                  Total size: 19.00 GB                 
            Total LVs: 1                         Size per LV: 1.86 GB                  
            Devices:   /dev/nvme0n2
          
            Type            Path                                                    LV Size         % of device
          ----------------------------------------------------------------------------------------------------
            [data]          /dev/sdb                                                9.00 GB         100.0%
            [block.db]      vg: vg/lv                                               1.86 GB         10%
      |_
        - 0
        - 
          Total OSDs: 1
          
          Solid State VG:
            Targets:   block.db                  Total size: 19.00 GB                 
            Total LVs: 1                         Size per LV: 1.86 GB                  
            Devices:   /dev/nvme0n3
          
            Type            Path                                                    LV Size         % of device
          ----------------------------------------------------------------------------------------------------
            [data]          /dev/sdc                                                9.00 GB         100.0%
            [block.db]      vg: vg/lv                                               1.86 GB         10%

-----------------------------------------------------------------------------------------
cat /srv/salt/ceph/configuration/files/drive_groups.yml    
drive_group_hdd_nvme:
  target: 'I@roles:storage'
  data_devices:
    rotational: 1            
  db_devices:
    rotational: 0     
    limit: 1      <==== use limit
  block_db_size: '2G'
  osds_per_device: 1
salt-run disks.report
  node002.example.com:
      |_
        - 2
        - usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
                                       [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
                                       [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
                                       [--no-auto] [--bluestore] [--filestore]
                                       [--report] [--yes] [--format {json,pretty}]
                                       [--dmcrypt]
                                       [--crush-device-class CRUSH_DEVICE_CLASS]
                                       [--no-systemd]
                                       [--osds-per-device OSDS_PER_DEVICE]
                                       [--block-db-size BLOCK_DB_SIZE]
                                       [--block-wal-size BLOCK_WAL_SIZE]
                                       [--journal-size JOURNAL_SIZE] [--prepare]
                                       [--osd-ids [OSD_IDS [OSD_IDS ...]]]
                                       [DEVICES [DEVICES ...]]
          ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
  node003.example.com:
      |_
        - 2
        - usage: ceph-volume lvm batch [-h] [--db-devices [DB_DEVICES [DB_DEVICES ...]]]
                                       [--wal-devices [WAL_DEVICES [WAL_DEVICES ...]]]
                                       [--journal-devices [JOURNAL_DEVICES [JOURNAL_DEVICES ...]]]
                                       [--no-auto] [--bluestore] [--filestore]
                                       [--report] [--yes] [--format {json,pretty}]
                                       [--dmcrypt]
                                       [--crush-device-class CRUSH_DEVICE_CLASS]
                                       [--no-systemd]
                                       [--osds-per-device OSDS_PER_DEVICE]
                                       [--block-db-size BLOCK_DB_SIZE]
                                       [--block-wal-size BLOCK_WAL_SIZE]
                                       [--journal-size JOURNAL_SIZE] [--prepare]
                                       [--osd-ids [OSD_IDS [OSD_IDS ...]]]
                                       [DEVICES [DEVICES ...]]
          ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda
admin:~ # 

Versions Report

`salt-run deepsea.version`  0.9.23+git.0.6a24f24a0

rpm -qi salt-minion
Name        : salt-minion
Version     : 2019.2.0
Release     : 6.3.5
Architecture: x86_64
Install Date: Fri Sep 20 09:14:33 2019
Group       : System/Management
Size        : 41019
License     : Apache-2.0
Signature   : RSA/SHA256, Tue May 28 23:28:21 2019, Key ID 70af9e8139db7c82
Source RPM  : salt-2019.2.0-6.3.5.src.rpm
Build Date  : Tue May 28 23:24:20 2019
Build Host  : sheep28
Relocations : (not relocatable)
Packager    : https://www.suse.com/
Vendor      : SUSE LLC 
URL         : http://saltstack.org/
Summary     : The client component for Saltstack
Description :
Salt minion is queried and controlled from the master.
Listens to the salt master and execute the commands.
Distribution: SUSE Linux Enterprise 15

rpm -qi salt-master
Name        : salt-master
Version     : 2019.2.0
Release     : 6.3.5
Architecture: x86_64
Install Date: Fri Sep 20 09:14:34 2019
Group       : System/Management
Size        : 2936818
License     : Apache-2.0
Signature   : RSA/SHA256, Tue May 28 23:28:21 2019, Key ID 70af9e8139db7c82
Source RPM  : salt-2019.2.0-6.3.5.src.rpm
Build Date  : Tue May 28 23:24:20 2019
Build Host  : sheep28
Relocations : (not relocatable)
Packager    : https://www.suse.com/
Vendor      : SUSE LLC 
URL         : http://saltstack.org/
Summary     : The management component of Saltstack with zmq protocol supported
Description :
The Salt master is the central server to which all minions connect.
Enabled commands to remote systems to be called in parallel rather
than serially.
Distribution: SUSE Linux Enterprise 15
@jschmid1
Contributor

@akumacxd You might try the limit key like you did in the last example.

drive_group_hdd_nvme:
  target: 'I@roles:storage'
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
    limit: 1
  block_db_size: '2G'

But as ceph-volume complains, please remove any GPT headers from the disks.

ceph-volume lvm batch: error: GPT headers found, they must be removed on: /dev/sda

Those probably need to be removed from sda/sdb and all the NVMe devices.
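For example (a sketch only; /dev/sdX is a placeholder, and both commands destroy data on that disk, so double-check the device name first), the GPT headers can usually be cleared with sgdisk, or with ceph-volume's own zap subcommand, which can also tear down leftover LVM metadata on the device:

# sgdisk --zap-all /dev/sdX
# ceph-volume lvm zap --destroy /dev/sdX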

HTH


akumacxd commented Sep 21, 2019

How do I remove the GPT headers from /dev/sda? /dev/sda is the OS disk:

node004:~ # lsblk 
NAME                            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                               8:0    0   20G  0 disk 
├─sda1                            8:1    0    1G  0 part /boot    
└─sda2                            8:2    0   19G  0 part 
  ├─vgoo-lvroot                 254:0    0   17G  0 lvm  /
  └─vgoo-lvswap                 254:1    0    2G  0 lvm  [SWAP]


akumacxd commented Sep 21, 2019

If I add a new OSD disk to a node, the drive group no longer finds the NVMe disk (nvme0n2); nvme0n2 is the DB device.

node004:~ # lsblk 
NAME                                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                                     8:0    0   20G  0 disk 
├─sda1                                                                                                                  8:1    0    1G  0 part /boot
└─sda2                                                                                                                  8:2    0   19G  0 part 
  ├─vgoo-lvroot                                                                                                       254:0    0   17G  0 lvm  /
  └─vgoo-lvswap                                                                                                       254:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                                     8:16   0   10G  0 disk 
└─ceph--block--0515f9d7--3407--46a5--be68--db80fc789dcc-osd--block--9a914f7d--ae9c--451a--ac7e--bcb6cb1fc926          254:4    0    9G  0 lvm  
sdc                                                                                                                     8:32   0   10G  0 disk 
└─ceph--block--9f7394b2--3ad3--4cd8--8267--7e5993af1271-osd--block--79f5920f--b41c--4dd0--94e9--dc85dbb2e7e4          254:5    0    9G  0 lvm  
sdd        <== new disk                                                                                          8:48   0   10G  0 disk 
sr0                                                                                                                    11:0    1 1024M  0 rom  
nvme0n1                                                                                                               259:0    0   20G  0 disk 
nvme0n2                                                                                                               259:1    0   20G  0 disk 
├─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2b295cc9--caff--45ad--a179--d7e3ba46a39d 254:2    0    1G  0 lvm  
└─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2244293e--ca96--4847--a5cb--9112f59836fa 254:3    0    1G  0 lvm  
nvme0n3                                                                                                               259:2    0   20G  0 disk 
cat /srv/salt/ceph/configuration/files/drive_groups.yml
# This is the default configuration and
# will create an OSD on all available drives
drive_group_hdd_nvme:       
  target: 'I@roles:storage'
  data_devices:
    size: '9GB:12GB'              
  db_devices:
    rotational: 0      
    limit: 1    
  block_db_size: '2G'
admin:~ # salt-run disks.report 
  node004.example.com:
      |_
        - 0
        - 
          Total OSDs: 1
          
            Type            Path                                                    LV Size         % of device
          ----------------------------------------------------------------------------------------------------
            [data]          /dev/sdd                                                9.00 GB         100.0%
admin:~ # salt-run state.orch ceph.stage.3

node004: # lsblk 
NAME                                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                                     8:0    0   20G  0 disk 
├─sda1                                                                                                                  8:1    0    1G  0 part /boot
└─sda2                                                                                                                  8:2    0   19G  0 part 
  ├─vgoo-lvroot                                                                                                       254:0    0   17G  0 lvm  /
  └─vgoo-lvswap                                                                                                       254:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                                     8:16   0   10G  0 disk 
└─ceph--block--0515f9d7--3407--46a5--be68--db80fc789dcc-osd--block--9a914f7d--ae9c--451a--ac7e--bcb6cb1fc926          254:4    0    9G  0 lvm  
sdc                                                                                                                     8:32   0   10G  0 disk 
└─ceph--block--9f7394b2--3ad3--4cd8--8267--7e5993af1271-osd--block--79f5920f--b41c--4dd0--94e9--dc85dbb2e7e4          254:5    0    9G  0 lvm  
sdd                                                                                                                     8:48   0   10G  0 disk 
└─ceph--dc28a338--71c6--4d73--8838--ee098719571b-osd--data--ce8df5f1--7b2e--4641--80e0--7f0e44dee652                  254:6    0    9G  0 lvm  
sr0                                                                                                                    11:0    1 1024M  0 rom  
nvme0n1                                                                                                               259:0    0   20G  0 disk 
nvme0n2                                                                                                               259:1    0   20G  0 disk 
├─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2b295cc9--caff--45ad--a179--d7e3ba46a39d 254:2    0    1G  0 lvm  
└─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2244293e--ca96--4847--a5cb--9112f59836fa 254:3    0    1G  0 lvm  
nvme0n3                                                                                                               259:2    0   20G  0 disk 

@akumacxd
Author

At present, all my nodes have three NVMe devices of the same model and size. I want to use nvme0n1 as the db device, nvme0n2 for the RGW index, and the last one, nvme0n3, as an LVM cache. How should the drive group be configured?


akumacxd commented Sep 21, 2019

The following steps show how to create an OSD manually, step by step. Can a drive group create OSDs this way?

(1) node004 disk layout

# lsblk 
NAME                                                                                                                  MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                                                                     8:0    0   20G  0 disk 
├─sda1                                                                                                                  8:1    0    1G  0 part /boot
└─sda2                                                                                                                  8:2    0   19G  0 part 
  ├─vgoo-lvroot                                                                                                       254:0    0   17G  0 lvm  /
  └─vgoo-lvswap                                                                                                       254:1    0    2G  0 lvm  [SWAP]
sdb                                                                                                                     8:16   0   10G  0 disk 
└─ceph--block--0515f9d7--3407--46a5--be68--db80fc789dcc-osd--block--9a914f7d--ae9c--451a--ac7e--bcb6cb1fc926          254:4    0    9G  0 lvm  
sdc                                                                                                                     8:32   0   10G  0 disk 
└─ceph--block--9f7394b2--3ad3--4cd8--8267--7e5993af1271-osd--block--79f5920f--b41c--4dd0--94e9--dc85dbb2e7e4          254:5    0    9G  0 lvm  
sdd                                                                                                                     8:48   0   10G  0 disk 
sr0                                                                                                                    11:0    1 1024M  0 rom  
nvme0n1                                                                                                               259:0    0   20G  0 disk 
nvme0n2                                                                                                               259:1    0   20G  0 disk 
├─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2b295cc9--caff--45ad--a179--d7e3ba46a39d 254:2    0    1G  0 lvm  
└─ceph--block--dbs--57d07a01--4440--4892--b44c--eae536613586-osd--block--db--2244293e--ca96--4847--a5cb--9112f59836fa 254:3    0    1G  0 lvm  
nvme0n3                                                                                                               259:2    0   20G  0 disk 

(2) LVS/VGS information

# lvs
  LV                                                VG                                                  Attr       LSize  
  osd-block-9a914f7d-ae9c-451a-ac7e-bcb6cb1fc926    ceph-block-0515f9d7-3407-46a5-be68-db80fc789dcc     -wi-ao----  9.00g                                                    
  osd-block-79f5920f-b41c-4dd0-94e9-dc85dbb2e7e4    ceph-block-9f7394b2-3ad3-4cd8-8267-7e5993af1271     -wi-ao----  9.00g                                                    
  osd-block-db-2244293e-ca96-4847-a5cb-9112f59836fa ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586 -wi-ao----  1.00g                                                    
  osd-block-db-2b295cc9-caff-45ad-a179-d7e3ba46a39d ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586 -wi-ao----  1.00g                                                    
  osd-block-db-test                                 ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586 -wi-a-----  2.00g                                                    
  lvroot                                            vgoo                                                -wi-ao---- 17.00g                                                    
  lvswap                                            vgoo                                                -wi-ao----  2.00g 
  

(3) create the logical volume for the data block:

# vgcreate ceph-block-0 /dev/sdd
# lvcreate -l 100%FREE -n block-0 ceph-block-0

(4) create the logical volume for the db/wal block:

# lvcreate -L 2GB -n db-0 ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586

(5) LVS/VGS information

# lvs
  LV                                                VG                                                  Attr       LSize  
  block-0                                           ceph-block-0                                        -wi-a----- 10.00g                                                    
  osd-block-9a914f7d-ae9c-451a-ac7e-bcb6cb1fc926    ceph-block-0515f9d7-3407-46a5-be68-db80fc789dcc     -wi-ao----  9.00g                                                    
  osd-block-79f5920f-b41c-4dd0-94e9-dc85dbb2e7e4    ceph-block-9f7394b2-3ad3-4cd8-8267-7e5993af1271     -wi-ao----  9.00g   
  db-0                                              ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586 -wi-a-----  2.00g  
  osd-block-db-2244293e-ca96-4847-a5cb-9112f59836fa ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586 -wi-ao----  1.00g                                                    
  osd-block-db-2b295cc9-caff-45ad-a179-d7e3ba46a39d ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586 -wi-ao----  1.00g                                                    
  lvroot                                            vgoo                                                -wi-ao---- 17.00g                                                    
  lvswap                                            vgoo                                                -wi-ao----  2.00g  

(6) create the OSD with ceph-volume:

# ceph-volume lvm create --bluestore --data ceph-block-0/block-0 --block.db ceph-block-dbs-57d07a01-4440-4892-b44c-eae536613586/db-0
# ceph osd tree
ID CLASS WEIGHT  TYPE NAME        STATUS REWEIGHT PRI-AFF 
-1       0.07837 root default                             
-7       0.01959     host node001                         
 2   hdd 0.00980         osd.2        up  1.00000 1.00000 
 5   hdd 0.00980         osd.5        up  1.00000 1.00000 
-3       0.01959     host node002                         
 0   hdd 0.00980         osd.0        up  1.00000 1.00000 
 3   hdd 0.00980         osd.3        up  1.00000 1.00000 
-5       0.01959     host node003                         
 1   hdd 0.00980         osd.1        up  1.00000 1.00000 
 4   hdd 0.00980         osd.4        up  1.00000 1.00000 
-9       0.01959     host node004                         
 6   hdd 0.00980         osd.6        up  1.00000 1.00000 
 7   hdd 0.00980         osd.7        up  1.00000 1.00000 
 8   hdd       0         osd.8        up  1.00000 1.00000 
