Connecting and configuring Ceph RBD using a Linux client
Ceph RBD (RADOS Block Device) provides users with a network block device that looks like a local disk on the system where it is connected. The block device is fully managed by the user: a user can create a file system on it and use it according to their needs.
Advantages of RBD
- The block-device image can be enlarged.
- Block-device images can be imported and exported.
- Striping and replication within the cluster.
- Read-only snapshots can be created and restored (if you need snapshots at the RBD level, you must contact us).
- The device can be connected using a Linux or QEMU/KVM client.
Setup of RBD client (Linux)
To connect RBD, it is recommended to have a newer kernel version on your system. In older kernels the relevant RBD modules are outdated, so not all advanced features are supported. The developers recommend a kernel version of at least 5.0; however, some functionality has been backported to the CentOS 7 kernel.
Ceph client version: for proper functioning, it is highly desirable to use the same version of the Ceph tools as the version currently operated on our clusters. Currently this is version 16, code name Pacific. So we will set up the appropriate repositories, see below.
CentOS setup
First, install the release.asc key for the Ceph repository.
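For example (the key URL below is the upstream Ceph location; adjust it if your distribution mirrors the key elsewhere):

```shell
rpm --import 'https://download.ceph.com/keys/release.asc'
```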
In the directory /etc/yum.repos.d/ create a text file ceph.repo and fill in the record for Ceph instruments.
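A minimal ceph.repo might look like the following sketch; the baseurl assumes the upstream Pacific repository for el8, so adjust the path to your CentOS/RHEL release:

```ini
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-pacific/el8/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```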
Some packages from the Ceph repository also require third-party libraries for proper functioning, so add the EPEL repository.
CentOS 7
CentOS 8
RedHat 7
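The EPEL repository can be added as follows (the RHEL 7 package URL is the usual Fedora download location and is an assumption here):

```shell
# CentOS 7
yum install -y epel-release
# CentOS 8
dnf install -y epel-release
# RedHat 7
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```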
Finally, install the basic tools for Ceph which also include RBD support.
CentOS 7
CentOS 8
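The ceph-common package provides the basic tools, including the rbd client:

```shell
# CentOS 7
yum install -y ceph-common
# CentOS 8
dnf install -y ceph-common
```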
Ubuntu/Debian setup
Ubuntu/Debian includes all necessary packages natively, so you can just run the following command.
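```shell
apt update && apt install -y ceph-common
```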
RBD configuration and its mapping
Use the credentials which you received from the system administrator to configure and connect the RBD. These are the following:
- pool name: rbd_vo_poolname
- image name: vo_name_username
- keyring: [client.rbd_user] key = key_hash==
In the directory /etc/ceph/ create the text file ceph.conf with the following content.
CL1 Data Storage
CL2 Data Storage
CL3 Data Storage
CL4 Data Storage
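A minimal client-side ceph.conf might look like the sketch below; the monitor hostnames are placeholders — use the addresses for your cluster, which differ between CL1–CL4:

```ini
[global]
mon_host = mon1.example.org, mon2.example.org, mon3.example.org
auth_client_required = cephx
```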
Also in the directory /etc/ceph/, create the text file ceph.keyring and save the keyring into it, see the example below.
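Using the credentials above, the keyring file has the following shape (key_hash== stands for the actual key you received):

```ini
[client.rbd_user]
    key = key_hash==
```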
If the files ceph.conf and username.keyring are located outside the default directory /etc/ceph/, the corresponding paths must be specified during mapping. See below.
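Mapping can then be sketched as follows (pool, image, and client names are the placeholders from the credentials above):

```shell
rbd device map rbd_vo_poolname/vo_name_username --id rbd_user

# With non-default config/keyring locations:
rbd device map rbd_vo_poolname/vo_name_username --id rbd_user \
    -c /path/to/ceph.conf --keyring /path/to/username.keyring
```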
Then check the connection in kernel messages.
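```shell
dmesg | tail
```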
Now check the status of RBD.
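The following lists the mapped images together with their device names:

```shell
rbd device list
```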
Encrypting and creating a file system
The next step is to encrypt the mapped image. Use cryptsetup-luks for encryption.
Then encrypt the device.
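Assuming the image was mapped as /dev/rbd0 (check with rbd device list), formatting it as a LUKS container might look like:

```shell
cryptsetup luksFormat /dev/rbd0
```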
Finally, check the settings.
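For example (again assuming the device /dev/rbd0):

```shell
cryptsetup luksDump /dev/rbd0
```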
In order to perform further actions on an encrypted device, it must be decrypted first.
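For example (rbd_luks is an arbitrary mapping name used as a placeholder throughout this guide):

```shell
cryptsetup luksOpen /dev/rbd0 rbd_luks
# The decrypted device is now available as /dev/mapper/rbd_luks
```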
We recommend using XFS instead of EXT4 for larger images, or for images that will need to be enlarged beyond 200 TB over time, because EXT4 has a limit on the number of inodes.
Now create a file system on the device; here is an example with XFS.
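```shell
mkfs.xfs /dev/mapper/rbd_luks
```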
If you use XFS, do not use the nobarrier option while mounting, it could cause data loss!
Once the file system is ready, we can mount the device into a pre-created folder in /mnt/.
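For example (the mount point /mnt/rbd is an assumption):

```shell
mkdir -p /mnt/rbd
mount /dev/mapper/rbd_luks /mnt/rbd
```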
Ending work with RBD
Unmount the volume.
Close the encrypted volume.
Unmap the volume.
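The three steps above, in sequence, with the placeholder names used in this guide:

```shell
umount /mnt/rbd
cryptsetup luksClose rbd_luks
rbd device unmap /dev/rbd0
```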
To get better performance, choose an appropriate read_ahead cache size depending on the amount of memory in your system.
Example for 8GB:
Example for 512MB:
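A sketch for both cases, assuming the image is mapped as /dev/rbd0 (the values are in KiB and are illustrative):

```shell
# Example for 8 GB of RAM:
echo 8388608 > /sys/block/rbd0/queue/read_ahead_kb
# Example for 512 MB of RAM:
echo 524288 > /sys/block/rbd0/queue/read_ahead_kb
```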
To apply the changes, you have to unmap the image and map it again.
The approach described above is not persistent (it won't survive a reboot). To make it persistent, add the following line to the file /etc/udev/rules.d/50-read-ahead-kb.rules. This way you can set specific kernel parameters for just a subset of block devices (here Ceph RBD).
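A sketch of such a rule, matching all RBD devices (the read_ahead_kb value is illustrative — use the one appropriate for your memory size):

```
KERNEL=="rbd*", ACTION=="add|change", ATTR{queue/read_ahead_kb}="8388608"
```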
Persistent mapping of RBD
Settings for automatic RBD connection, including LUKS decryption and mounting of file systems, plus proper disconnection (in reverse order) when the machine is shut down in a controlled manner.
RBD image
Edit the configuration file /etc/ceph/rbdmap by inserting the following lines.
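One line per image, in the rbdmap format poolname/imagename id=client,keyring=path; with the placeholder credentials from above:

```
rbd_vo_poolname/vo_name_username id=rbd_user,keyring=/etc/ceph/ceph.keyring
```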
LUKS
Edit the configuration file /etc/crypttab by inserting the following lines, where /etc/ceph/luks.keyfile is the LUKS key. The path to the source block device is generally /dev/rbd/$POOL/$IMAGE.
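A crypttab line following the standard name/device/keyfile/options format might look like this (rbd_luks and the pool/image names are the placeholders used above):

```
rbd_luks /dev/rbd/rbd_vo_poolname/vo_name_username /etc/ceph/luks.keyfile luks
```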
fstab file
Edit the configuration file /etc/fstab by inserting the following lines.
The path to the LUKS container (the file system) is generally /dev/mapper/$LUKS_NAME, where $LUKS_NAME is the target name defined in /etc/crypttab.
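With the names used above, such a line might look like this (_netdev tells systemd the device needs the network; the mount point is an assumption):

```
/dev/mapper/rbd_luks /mnt/rbd xfs defaults,_netdev 0 0
```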
systemd unit
Edit the configuration file /etc/systemd/system/systemd-cryptsetup@rbd_luks_pool.service.d/10-deps.conf by inserting the following lines.
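A drop-in that orders the cryptsetup unit after the RBD mapping might look like:

```ini
[Unit]
After=rbdmap.service
Requires=rbdmap.service
```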
In one case on Debian 10, for some reason the systemd unit was called ceph-rbdmap.service instead of rbdmap.service; the After= and Requires= lines must be adjusted accordingly.
Manual connection
If the dependencies of the systemd units are correct, starting the .mount unit performs the RBD map, unlocks LUKS, and mounts all the automatic file systems dependent on rbdmap that the specified .mount unit needs (in the described configuration it mounts both images).
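For example (the unit name is derived from the mount point /mnt/rbd per systemd naming rules and is an assumption):

```shell
systemctl start mnt-rbd.mount
```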
Manual disconnection
If the dependencies are set correctly, this command performs the umount, the LUKS close, and the RBD unmap.
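For example, stopping the rbdmap service should tear everything down in reverse order (unit name as in the configuration above):

```shell
systemctl stop rbdmap.service
```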
Image resize
When resizing an encrypted image, you need to follow the correct order; the key step is the line with cryptsetup --verbose resize image_name.
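A sketch of the full sequence, with the placeholder names used in this guide (the target size is illustrative):

```shell
# 1. Grow the RBD image:
rbd resize --size 400T rbd_vo_poolname/vo_name_username
# 2. Grow the LUKS mapping to fill the enlarged device:
cryptsetup --verbose resize rbd_luks
# 3. Grow the (mounted) XFS file system:
xfs_growfs /mnt/rbd
```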