Connecting and Configuring Ceph RBD Using a Linux Client
Ceph RBD (RADOS Block Device) provides a network block device that appears as a local disk on the system to which it is attached. The block device is entirely managed by the user, who can create a file system on it and use it according to their needs.
Advantages of RBD
- Ability to resize the block device image.
- Import / export of block device images.
- Striping and replication within the cluster.
- Capability to create read-only snapshots and restore them (for RBD level snapshots, please contact us).
- Ability to connect using a Linux or QEMU/KVM client.
Setting Up the RBD Client (Linux)
To connect to RBD, it is recommended to use a newer kernel version on your system; older kernels ship outdated RBD modules, so not all advanced features are supported. The developers suggest using at least kernel version 5.0. However, some functionality has been backported to the CentOS 7 kernel.
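You can check which kernel version your system is running as follows:
uname -r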
Ceph Client Version
For optimal functionality, it is highly recommended to use the same version of the Ceph tools as the one currently running on our clusters. You can then set up the appropriate repositories, as outlined below.
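Once the tools are installed, you can verify the locally installed client version with:
ceph --version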
CentOS Setup
First, import the release.asc key for the Ceph repository.
sudo rpm --import 'https://download.ceph.com/keys/release.asc'
In the directory /etc/yum.repos.d/, create a text file ceph.repo and fill in the record for the Ceph tools.
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/<contact-admin-for-current-version>/el7/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
Some packages from the Ceph repository also require third-party libraries for proper functioning, so add the EPEL repository.
CentOS 7
sudo yum install -y epel-release
CentOS 8
sudo dnf install -y epel-release
RedHat 7
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
Finally, install the basic Ceph tools, which also include RBD support.
CentOS 7
sudo yum install ceph-common
CentOS 8
sudo dnf install ceph-common
Ubuntu/Debian Setup
Ubuntu/Debian includes all the necessary packages natively, so you can just run the following command:
sudo apt install ceph
RBD Configuration and Mapping
To configure and connect to RBD, use the credentials provided by the system administrator. The necessary details are as follows:
- pool name: rbd_vo_poolname
- image name: vo_name_username
- keyring: [client.rbd_user] key = key_hash ==
In the directory /etc/ceph/ create a text file ceph.conf with the following content.
CL1 Data Storage
[global]
fsid = 19f6785a-70e1-45e8-a23a-5cff0c39aa54
mon_host = [v2:78.128.244.33:3300,v1:78.128.244.33:6789],[v2:78.128.244.37:3300,v1:78.128.244.37:6789],[v2:78.128.244.41:3300,v1:78.128.244.41:6789]
auth_client_required = cephx
CL2 Data Storage
[global]
fsid = 3ea58563-c8b9-4e63-84b0-a504a5c71f76
mon_host = [v2:78.128.244.65:3300/0,v1:78.128.244.65:6789/0],[v2:78.128.244.69:3300/0,v1:78.128.244.69:6789/0],[v2:78.128.244.71:3300/0,v1:78.128.244.71:6789/0]
auth_client_required = cephx
CL3 Data Storage
[global]
fsid = b16aa2d2-fbe7-4f35-bc2f-3de29100e958
mon_host = [v2:78.128.244.240:3300/0,v1:78.128.244.240:6789/0],[v2:78.128.244.241:3300/0,v1:78.128.244.241:6789/0],[v2:78.128.244.242:3300/0,v1:78.128.244.242:6789/0]
auth_client_required = cephx
CL4 Data Storage
[global]
fsid = c4ad8c6f-7ef3-4b0e-873c-b16b00b5aac4
mon_host = [v2:78.128.245.29:3300/0,v1:78.128.245.29:6789/0],[v2:78.128.245.30:3300/0,v1:78.128.245.30:6789/0],[v2:78.128.245.31:3300/0,v1:78.128.245.31:6789/0]
auth_client_required = cephx
CL5 Data Storage
[global]
fsid = c581dace-40ff-4519-878b-c0ffeec0ffee
mon_host = [v2:78.128.245.157:3300/0,v1:78.128.245.157:6789/0],[v2:78.128.245.158:3300/0,v1:78.128.245.158:6789/0],[v2:78.128.245.159:3300/0,v1:78.128.245.159:6789/0]
auth_client_required = cephx
Next, in the /etc/ceph/ directory, create a text file ceph.keyring. Then, save the keyring in that file, as shown in the example below.
[client.rbd_user]
key = sdsaetdfrterp+sfsdM3iKY5teisfsdXoZ5==
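The keyring contains a secret, so it is advisable to restrict its permissions, for example:
sudo chmod 600 /etc/ceph/ceph.keyring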
Now the RBD mapping can be performed (rbd_user is the string from the keyring after stripping the client. prefix).
sudo rbd --id rbd_user --exclusive device map name_pool/name_image
We strongly recommend using the --exclusive option when mapping the RBD image. It prevents the image from being mapped on multiple machines, or mapped multiple times locally, which could lead to data corruption.
However, do not use the --exclusive option if you need to mount the RBD image on multiple machines, for example in a clustered file system setup.
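To check whether an image already has active watchers (i.e., is mapped somewhere), you can query its status; the pool and image names below follow the placeholders used above:
sudo rbd --id rbd_user status name_pool/name_image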
If the files ceph.conf and username.keyring are located in a directory other than the default /etc/ceph/, you must specify the corresponding paths during the mapping process. See the example below.
sudo rbd -c /home/username/ceph/ceph.conf -k /home/username/ceph/username.keyring --id rbd_user --exclusive device map name_pool/name_image
Next, check the connection in the kernel messages.
dmesg
Now, check the RBD status.
sudo rbd device list | grep "name_image"
Encrypting and Creating a Filesystem
The next step is to encrypt the mapped image using cryptsetup-luks. First, install the package.
sudo yum install cryptsetup-luks
Then encrypt the device.
sudo cryptsetup -s 512 luksFormat --type luks2 /dev/rbdX
Finally, check the settings.
sudo cryptsetup luksDump /dev/rbdX
To perform further actions on the encrypted device, it must be opened first.
sudo cryptsetup luksOpen /dev/rbdX luks_rbdX
We recommend using XFS instead of EXT4 for larger images, or for those that may need to grow beyond 200 TB over time, as EXT4 has a limit on the number of inodes.
Now, create a file system on the device; here is an example with XFS.
sudo mkfs.xfs -K /dev/mapper/luks_rbdX
If you use XFS, do not use the nobarrier option while mounting, as it could cause data loss!
Once the file system is ready, we can mount the device to a pre-created folder in /mnt/.
sudo mount /dev/mapper/luks_rbdX /mnt/rbd
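You can verify the mount, for example:
df -h /mnt/rbd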
Ending the Work with RBD
Unmount the volume.
sudo umount /mnt/rbd/
Close the encrypted volume.
sudo cryptsetup luksClose /dev/mapper/luks_rbdX
Unmap the volume.
sudo rbd --id rbd_user device unmap /dev/rbdX
Tuning the read_ahead Cache
To optimize performance, choose an appropriate size for the read_ahead cache based on your system's memory size.
Example for 8GB:
echo 8388608 > /sys/block/rbd0/queue/read_ahead_kb
Example for 512MB:
echo 524288 > /sys/block/rbd0/queue/read_ahead_kb
To apply the changes, you need to unmap the image and then map it again.
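You can check the currently applied value directly in sysfs:
cat /sys/block/rbd0/queue/read_ahead_kb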
The method described above is not persistent (it will not survive a reboot). To make it persistent, add the following rule to the /etc/udev/rules.d/50-read-ahead-kb.rules file. This lets you configure specific kernel parameters for a subset of block devices (here, Ceph RBD devices).
KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", ACTION=="add|change", ATTR{bdi/read_ahead_kb}="524288"Permanently Mapping RBD
Permanently Mapping RBD
This section describes the configuration for connecting RBD automatically at boot, including LUKS decryption and filesystem mounting, along with proper disconnection (in reverse order) when the machine is shut down in a controlled manner.
RBD Image
Edit the configuration file located at /etc/ceph/rbdmap by adding the following lines.
# RbdDevice Parameters
#poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring
pool_name/image_name id=rbd_user,keyring=/etc/ceph/ceph.keyring
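The rbdmap service provided with the Ceph packages reads this file; make sure it is enabled so the mapping happens at boot:
sudo systemctl enable rbdmap.service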
LUKS
Edit the configuration file located at /etc/crypttab by adding the following lines.
rbd_luks_pool /dev/rbd/pool_name/image_name /etc/ceph/luks.keyfile luks,_netdev
where /etc/ceph/luks.keyfile is the LUKS key file.
The path to the source block device is generally /dev/rbd/$POOL/$IMAGE.
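A key file can be created and enrolled into the LUKS header, for instance as follows (the path and key size are illustrative):
sudo dd if=/dev/urandom of=/etc/ceph/luks.keyfile bs=512 count=8
sudo chmod 0400 /etc/ceph/luks.keyfile
sudo cryptsetup luksAddKey /dev/rbd/pool_name/image_name /etc/ceph/luks.keyfile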
Fstab File
Edit the configuration file located at /etc/fstab by adding the following lines.
/dev/mapper/rbd_luks_pool /mnt/rbd_luks_pool btrfs defaults,noatime,auto,_netdev 0 0
The path to the LUKS container (file system) is generally /dev/mapper/$LUKS_NAME, where $LUKS_NAME is the target name defined in /etc/crypttab.
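You can sanity-check the new fstab entry before rebooting:
sudo findmnt --verify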
Systemd Unit
Edit the configuration file located at /etc/systemd/system/systemd-cryptsetup@rbd_luks_pool.service.d/10-deps.conf by adding the following lines.
[Unit]
After=rbdmap.service
Requires=rbdmap.service
Before=mnt-rbd_luks_pool.mount
In one deployment on Debian 10, the unit was named ceph-rbdmap.service instead of rbdmap.service; in that case, adjust the After= and Requires= lines accordingly.
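After creating or editing the drop-in file, reload systemd so the new dependencies take effect:
sudo systemctl daemon-reload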
Manual Connection
If the systemd unit dependencies are configured correctly, starting the .mount unit performs the RBD mapping, unlocks LUKS, and mounts all file systems that depend on rbdmap, as required by the .mount unit (i.e., it will mount the images described in the configuration).
systemctl start mnt-rbd_luks_pool.mount
Manual Disconnection
If the dependencies are set up properly, this command will unmount the file system, close the LUKS container, and unmap the RBD image.
systemctl stop rbdmap.service
(alternatively `systemctl stop ceph-rbdmap.service`)
Image Resize
When resizing an encrypted image, you need to follow the correct order, with the key step being the command cryptsetup --verbose resize image_name.
rbd resize rbd_pool_name/image_name --size 200T
cryptsetup --verbose resize image_name
mount /storage/rbd/image_name
xfs_growfs /dev/mapper/image_name
