Commit 1c2261a3 authored by Milan's avatar Milan

RBD setup + corrections

parent e54e8d33
Ceph RBD (RADOS Block Device) provides users with a network block device that looks like a local disk.
???+ note "Ceph client version"
    For proper functioning, it is highly desirable to use the same version of the Ceph tools as the version currently operated on our clusters. Currently it is version 16, code name Pacific. So we will set up the appropriate repositories, see below.
### CentOS setup
First, install the release.asc key for the Ceph repository.
sudo rpm --import 'https://download.ceph.com/keys/release.asc'
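The Pacific repository itself can then be added, for example, as follows. This is a sketch for CentOS/RHEL 8: the repository id, the `el8` distribution tag, and the package set are assumptions drawn from the download.ceph.com layout, so adjust them to your distribution.

```shell
# Sketch: add the Ceph Pacific package repository
# (paths assumed from the download.ceph.com layout; verify for your distro).
cat <<'EOF' | sudo tee /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph x86_64 packages
baseurl=https://download.ceph.com/rpm-pacific/el8/x86_64
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF

# Install the client tools (provides the rbd and ceph CLIs).
sudo dnf install -y ceph-common
```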
Use the credentials which you received from the system administrator to configure the client.
In the directory **/etc/ceph/** create the text file **ceph.conf** with the following content.
???+ note "CL1 Data Storage"
    [global]<br/>
    fsid = 19f6785a-70e1-45e8-a23a-5cff0c39aa54<br/>
    mon_host = [v2:78.128.244.33:3300,v1:78.128.244.33:6789],[v2:78.128.244.37:3300,v1:78.128.244.37:6789],[v2:78.128.244.41:3300,v1:78.128.244.41:6789]<br/>
    auth_client_required = cephx
???+ note "CL2 Data Storage"
    [global]<br/>
    fsid = 3ea58563-c8b9-4e63-84b0-a504a5c71f76<br/>
    mon_host = [v2:78.128.244.65:3300/0,v1:78.128.244.65:6789/0],[v2:78.128.244.69:3300/0,v1:78.128.244.69:6789/0],[v2:78.128.244.71:3300/0,v1:78.128.244.71:6789/0]<br/>
    auth_client_required = cephx
???+ note "CL3 Data Storage"
    [global]<br/>
    fsid = b16aa2d2-fbe7-4f35-bc2f-3de29100e958<br/>
    mon_host = [v2:78.128.244.240:3300/0,v1:78.128.244.240:6789/0],[v2:78.128.244.241:3300/0,v1:78.128.244.241:6789/0],[v2:78.128.244.242:3300/0,v1:78.128.244.242:6789/0]<br/>
    auth_client_required = cephx
???+ note "CL4 Data Storage"
    [global]<br/>
    fsid = c4ad8c6f-7ef3-4b0e-873c-b16b00b5aac4<br/>
    mon_host = [v2:78.128.245.29:3300/0,v1:78.128.245.29:6789/0],[v2:78.128.245.30:3300/0,v1:78.128.245.30:6789/0],[v2:78.128.245.31:3300/0,v1:78.128.245.31:6789/0]<br/>
    auth_client_required = cephx
Further, in the directory **/etc/ceph/** create the text file **ceph.keyring** and save the keyring in it, see the example below.
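The keyring file pairs the client name with its secret key. A minimal sketch follows; the user name `rbd_user` and the key value are placeholders, so use the credentials you actually received.

```ini
[client.rbd_user]
    key = AQDxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==
```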
Volume unmapping.
sudo rbd --id rbd_user device unmap /dev/rbdX
???+ note ""
    To get better performance, choose an appropriate size of the `read_ahead` cache depending on your amount of memory.
    Example for 8GB:<br/>
    echo 8388608 > /sys/block/rbd0/queue/read_ahead_kb
    Example for 512MB:<br/>
    echo 524288 > /sys/block/rbd0/queue/read_ahead_kb
    To apply the changes, you have to unmap the image and map it again.
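Note that `read_ahead_kb` is expressed in KiB, so the values above can be derived as a quick sanity check:

```shell
# read_ahead_kb is in KiB: 8 GiB = 8 * 1024 * 1024 KiB,
# 512 MiB = 512 * 1024 KiB.
echo $((8 * 1024 * 1024))   # 8 GiB  -> 8388608
echo $((512 * 1024))        # 512 MiB -> 524288
```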
When resizing an encrypted image, you need to follow the correct order of operations.
cryptsetup --verbose resize image_name
mount /storage/rbd/image_name
xfs_growfs /storage/rbd/image_name
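Putting the sequence together, the order matters: grow the RBD image first, then let LUKS pick up the new device size, then grow the filesystem. A sketch, where the pool name, image name, and target size are hypothetical:

```shell
# 1. Grow the RBD image itself (pool/image/size are hypothetical).
sudo rbd --id rbd_user resize --size 200T pool_name/image_name

# 2. Resize the open LUKS mapping to the new device size.
sudo cryptsetup --verbose resize image_name

# 3. Grow the mounted XFS filesystem to fill the device
#    (xfs_growfs takes the mount point of the mounted filesystem).
sudo xfs_growfs /storage/rbd/image_name
```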
The Rados Block Device (**RBD**) is a block device that you can connect to your infrastructure. The connection must be made from a **Linux machine** (RBD connection to Windows is not yet implemented in a reliable manner). Subsequently, you can re-export the connected block device anywhere within your systems (e.g. a samba remount to your network). RBD is particularly suitable for use in centralized backup systems. RBD is a very specialized service that requires the user to have extensive experience in managing Linux devices. The service is intended for larger volumes of data - hundreds of TB. The block device can also be encrypted on your side (client side) using LUKS. Client-side encryption also means that the transmission of data over the network is encrypted, and in case of eavesdropping during transmission the data cannot be decrypted. Access to the service is controlled by virtual organizations and corresponding groups.
!!! warning
    RBD connection is only possible from dedicated IPv4 addresses that are enabled on the firewall in our data centers. An RBD image can only be mounted on **ONE** machine at a time; it is not possible for each of your users to mount the same RBD on their workstation - in other words, RBD is not a clustered file system. Usage of clustered file systems on top of RBD must first be consulted with Data Care support.
???+ note "How to get RBD service?"
To connect to RBD service you have to contact support at:
`support@cesnet.cz`
----
## RBD elementary use cases
In the following section you can find the description of elementary use cases related to RBD service.
### Large dataset backups requiring local filesystem
If you have a centralized backup system (script suite, Bacula, BackupPC…) requiring a local file system, we recommend using the [RBD service](rbd-setup.md), see the figure below. The RBD image can be connected as a block device directly to the machine where the central backup system is running. RBD can then be equipped with snapshots (see the service description) as protection against unwanted overwriting or ransomware attacks.
![](rbd-service-screenshots/central_backup.png){ style="display: block; margin: 0 auto" }
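As an illustration of the snapshot protection mentioned above, a date-stamped snapshot can be created from the backup machine; the pool, image, and user names here are hypothetical:

```shell
# Build a date-stamped snapshot name,
# e.g. backup_pool/backup_image@daily-2024-01-31.
SNAP="backup_pool/backup_image@daily-$(date +%Y-%m-%d)"
echo "$SNAP"

# Create the snapshot (requires the cluster credentials from the setup):
# sudo rbd --id rbd_user snap create "$SNAP"
```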
### Centralized shared storage for internal redistribution
If you need to store live data and provide the storage to individual users, you can use the [RBD](rbd-setup.md) service, which you connect to your infrastructure using a Linux machine. You can create a file system on the connected block device, or equip it with encryption, and then re-export it inside your infrastructure using, for example, samba, NFS, FTP, SSH, etc. (also in the form of containers ensuring the distribution of protocols to your internal network). Client-side encryption also means that the data transmission over the network is encrypted, and the data cannot be decrypted if intercepted in transit. The advantage is that you can create groups and manage rights according to your preferences, or use your local database of users and groups. The RBD block device can also be equipped with snapshots at the RBD level, so if data is accidentally deleted, it is possible to return to a snapshot from, for example, the previous day.
![](rbd-service-screenshots/shared_distribution.png){ style="display: block; margin: 0 auto" }
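For the samba re-export mentioned above, a minimal `smb.conf` share section could look like the sketch below; the share name, path, and group are hypothetical, and the RBD image is assumed to be mapped and mounted at `/storage/rbd/image_name`:

```ini
[shared-storage]
    path = /storage/rbd/image_name
    read only = no
    valid users = @storage-users
```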
## RBD Data Reliability (Data Redundancy) - replicated vs erasure coding
The section below describes the approaches to data redundancy applied to the object storage pool. The RBD service can be equipped with **replicated** or **erasure code (EC)** redundancy, or with **synchronous/asynchronous geographical replication**.
### Replicated
Your data is stored in three copies in the data center. In case one copy is corrupted, the original data is still readable in an undamaged form, and the damaged data is restored in the background. Using a service with the replicated flag also allows for faster reads, as it is possible to read from all replicas at the same time; on the other hand, it reduces write speed, because the write operation waits for write confirmation from all three replicas.
???+ note "Suitable for?"
Suitable for smaller volumes of live data with a preference for reading speed (not very suitable for large data volumes).
### Erasure Coding (EC)
Erasure coding (EC) is a data protection method similar to the dynamic RAID known from disk arrays. Data is divided into individual fragments, which are then stored with some redundancy across the data storage. Therefore, if some disks (or an entire storage server) fail, the data is still accessible and will be restored in the background. It is thus not possible for your data to reside on a single disk whose failure would cause you to lose your data.
???+ note "Suitable for?"
Suitable, for example, for storing large data volumes.
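To illustrate the space trade-off: with k data fragments and m coding fragments, EC stores (k + m) / k times the raw data, compared with 3x for triple replication. The profile k=8, m=3 below is only a hypothetical example, not necessarily the profile used on our clusters.

```shell
# Hypothetical EC profile: k=8 data fragments, m=3 coding fragments.
# Raw-space overhead factor = (k + m) / k.
awk 'BEGIN { k = 8; m = 3; printf "%.3f\n", (k + m) / k }'   # 1.375, vs 3.000 for 3x replication
```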
### RBD snapshots
Snapshots can be used at the RBD (replicated/erasure coding) level. Snapshots are controlled from the client side. [RBD snapshotting](rbd-setup.md) is one of the replacement options for the `tape_tape` policy - snapshots mirrored to another geographic location, see below.
### Synchronous geographical replication
Synchronous geographical replication protects against data center failure. Synchronous geographical replication degrades write speed because the system waits for a successful write confirmation at both geographic locations. If you feel that you need this service, please contact us.
### Asynchronous geographical replication
Asynchronous geographical replication partially protects against data center failure (certain data may be lost between individual asynchronous synchronizations due to the time lag). However, with asynchronous geographical replication, in case of data corruption (e.g. ransomware), you can break the replication and save your data. If you feel that you need this service, please contact us.