"warden_server/warden_server.py" did not exist on "6e54e21351d790fe9f2f761198b2e40d89fcafad"
Newer
Older
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
59
60
61
62
63
64
65
66
67
---
languages:
- en
- cs
---
# Connecting and configuring Ceph RBD using a Linux client
Ceph RBD (RADOS Block Device) provides users with a network block device that looks like a local disk on the system where it is connected. The block device is fully managed by the user. A user can create a file system on it and use it according to their needs.
???+ note "Advantages of RBD"
* Possibility to enlarge the image of the block device.
* Import / export block device image.
* Stripping and replication within the cluster.
* Possibility to create read-only snapshots; restore snapshots (if you need snapshots on the RBD level you must contact us).
* Possibility to connect using Linux or QEMU KVM client
## Setup of RBD client (Linux)
!!! warning
    To connect RBD, it is recommended to have a recent kernel version on your system. In older kernels the corresponding RBD modules are outdated, so not all advanced features are supported. The developers recommend kernel version 5.0 or higher; however, some functionality has been backported to the CentOS 7 kernel.
???+ note "Ceph client version"
For proper functioning it is highly desired to use the same version of Ceph tools as is the current version being operated on our clusters. Currently it is version 16 with the code name Pacific . So we will set up the appropriate repositories, see below.
### CentOS setup
First, install the release.asc key for the Ceph repository.

```
sudo rpm --import 'https://download.ceph.com/keys/release.asc'
```
In the directory **/etc/yum.repos.d/** create a text file **ceph.repo** and fill in the record for the Ceph tools. Adjust the release name (`rpm-pacific`) and the distribution (`el7`, `el8`) in `baseurl` to match your system.

```
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-pacific/el8/$basearch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```
Some packages from the Ceph repository also require third-party libraries for proper functioning, so add the EPEL repository.

CentOS 7

```
sudo yum install -y epel-release
```

CentOS 8

```
sudo dnf install -y epel-release
```

RedHat 7

```
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
```
Finally, install the basic tools for Ceph, which also include RBD support.

CentOS 7

```
sudo yum install ceph-common
```

CentOS 8

```
sudo dnf install ceph-common
```
### Ubuntu/Debian setup

Ubuntu/Debian includes all the necessary packages natively, so you can just run the following command.

```
sudo apt install ceph
```
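After the installation, you can verify that the installed client version matches the version operated on our clusters (see the note above), for example:

```
rbd --version
```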
Use the credentials which you received from the system administrator to configure and connect the RBD. These are the following:
* pool name: **rbd_vo_poolname**
* image name: **vo_name_username**
* keyring: **[client.rbd_user] key = key_hash ==**
In the directory **/etc/ceph/** create the text file **ceph.conf** with the following content.
???+ note "CL1 Data Storage"
[global]
fsid = 19f6785a-70e1-45e8-a23a-5cff0c39aa54
mon_host = [v2:78.128.244.33:3300,v1:78.128.244.33:6789],[v2:78.128.244.37:3300,v1:78.128.244.37:6789],[v2:78.128.244.41:3300,v1:78.128.244.41:6789]
auth_client_required = cephx
???+ note "CL2 Data Storage"
[global]
fsid = 3ea58563-c8b9-4e63-84b0-a504a5c71f76
mon_host = [v2:78.128.244.65:3300/0,v1:78.128.244.65:6789/0],[v2:78.128.244.69:3300/0,v1:78.128.244.69:6789/0],[v2:78.128.244.71:3300/0,v1:78.128.244.71:6789/0]
auth_client_required = cephx
???+ note "CL3 Data Storage"
[global]
fsid = b16aa2d2-fbe7-4f35-bc2f-3de29100e958
mon_host = [v2:78.128.244.240:3300/0,v1:78.128.244.240:6789/0],[v2:78.128.244.241:3300/0,v1:78.128.244.241:6789/0],[v2:78.128.244.242:3300/0,v1:78.128.244.242:6789/0]
auth_client_required = cephx
???+ note "CL4 Data Storage"
[global]
fsid = c4ad8c6f-7ef3-4b0e-873c-b16b00b5aac4
mon_host = [v2:78.128.245.29:3300/0,v1:78.128.245.29:6789/0] [v2:78.128.245.30:3300/0,v1:78.128.245.30:6789/0] [v2:78.128.245.31:3300/0,v1:78.128.245.31:6789/0]
auth_client_required = cephx
Further, in the directory **/etc/ceph/**, create the text file **ceph.keyring** and save the keyring in it, see the example below.

```
[client.rbd_user]
key = sdsaetdfrterp+sfsdM3iKY5teisfsdXoZ5==
```
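With `ceph.conf` and the keyring in the default **/etc/ceph/** location, you can map the image. A minimal sketch; `pool_name`, `image_name`, and `rbd_user` stand for the credentials you received.

```
sudo rbd --id rbd_user device map pool_name/image_name
```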
!!! warning
    If the location of the files `ceph.conf` and `username.keyring` differs from the default directory **/etc/ceph/**, the corresponding paths must be specified during mapping, see below.

    ```
    sudo rbd -c /home/username/ceph/ceph.conf -k /home/username/ceph/username.keyring --id rbd_user device map name_pool/name_image
    ```
Then check the connection in the kernel messages.

```
dmesg
```

Now check the status of the RBD.

```
sudo rbd device list | grep "name_image"
```
## Encrypting and creating a file system

The next step is to encrypt the mapped image. Use **cryptsetup-luks** for encryption.

```
sudo yum install cryptsetup-luks
```

Then encrypt the device.

```
sudo cryptsetup -s 512 luksFormat --type luks2 /dev/rbdX
```

Finally, check the settings.

```
sudo cryptsetup luksDump /dev/rbdX
```

In order to perform further actions on the encrypted device, it must be unlocked (opened) first.

```
sudo cryptsetup luksOpen /dev/rbdX luks_rbdX
```
???+ note ""
We recommend using XFS instead of EXT4 for larger images or those they will need to be enlarged to more than 200TB over time, because EXT4 has a limit on the number of inodes.
Now create a file system on the device; here is an example with XFS.

```
sudo mkfs.xfs -K /dev/mapper/luks_rbdX
```
!!! warning
    If you use XFS, do not use the `nobarrier` option while mounting, it could cause data loss!
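If the target mount point does not exist yet, create it first; the path below is just an example.

```
sudo mkdir -p /mnt/rbd
```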
Once the file system is ready, we can mount the device to the prepared directory in /mnt/.

```
sudo mount /dev/mapper/luks_rbdX /mnt/rbd
```
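You can verify that the mount succeeded, for example:

```
df -h /mnt/rbd
```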
## Ending work with RBD

Unmount the volume.

```
sudo umount /mnt/rbd/
```

Close the encrypted volume.

```
sudo cryptsetup luksClose /dev/mapper/luks_rbdX
```

Unmap the volume.

```
sudo rbd --id rbd_user device unmap /dev/rbdX
```
???+ note ""
To get better performance choose appropriate size of read_ahead cache depends on your size of memory.
Example for 8GB:
echo 8388608 > /sys/block/rbd0/queue/read_ahead_kb
Example for 512MB:
echo 524288 > /sys/block/rbd0/queue/read_ahead_kb
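    You can check the currently applied value, for example:

    ```
    cat /sys/block/rbd0/queue/read_ahead_kb
    ```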
    To apply the changes, you have to unmap the image and map it again.

    The approach described above is not persistent (it won't survive a reboot). To make it persistent, add the following line to the file `/etc/udev/rules.d/50-read-ahead-kb.rules`.

    ```
    # Setting specific kernel parameters for a subset of block devices (Ceph RBD)
    KERNEL=="rbd[0-9]*", ENV{DEVTYPE}=="disk", ACTION=="add|change", ATTR{bdi/read_ahead_kb}="524288"
    ```
## Permanent mapping of RBD

This section describes the settings for an automatic RBD connection, including LUKS decryption and mounting of the file system, plus a proper disconnection (in reverse order) when the machine is shut down in a controlled manner.
### RBD image
Edit the configuration file `/etc/ceph/rbdmap` by inserting the following lines.

```
# RbdDevice Parameters
#poolname/imagename id=client,keyring=/etc/ceph/ceph.client.keyring
pool_name/image_name id=rbd_user,keyring=/etc/ceph/ceph.keyring
```
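To have the image mapped automatically at boot, make sure the mapping service is enabled; this assumes the unit is named `rbdmap.service` (on some systems it is `ceph-rbdmap.service`, see the note in the systemd unit section below).

```
sudo systemctl enable rbdmap.service
```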
### LUKS

Edit the configuration file `/etc/crypttab` by inserting the following lines.

```
# <target name> <source device> <key file> <options>
rbd_luks_pool /dev/rbd/pool_name/image_name /etc/ceph/luks.keyfile luks,_netdev
```

where **/etc/ceph/luks.keyfile** is the LUKS key file.
???+ note ""
path to block device (“<source device>”) is generally `/dev/rbd/$POOL/$IMAGE`
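If you do not have a key file yet, a minimal sketch for creating one and registering it with the LUKS container follows; the path and key size are only examples.

```
# generate a random key and restrict access to it
sudo dd if=/dev/urandom of=/etc/ceph/luks.keyfile bs=64 count=1
sudo chmod 600 /etc/ceph/luks.keyfile
# add the key to the LUKS container (you will be asked for an existing passphrase)
sudo cryptsetup luksAddKey /dev/rbd/pool_name/image_name /etc/ceph/luks.keyfile
```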
### fstab file

Edit the configuration file `/etc/fstab` by inserting the following lines.

```
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/rbd_luks_pool /mnt/rbd_luks_pool btrfs defaults,noatime,auto,_netdev 0 0
```
???+ note ""
path to LUKS container (“<file system>”) is generally `/dev/mapper/$LUKS_NAME`,
where `$LUKS_NAME` is defined in `/etc/crypttab` (like “<taget name>”)
### systemd unit

Edit the configuration file `/etc/systemd/system/systemd-cryptsetup@rbd_luks_pool.service.d/10-deps.conf` by inserting the following lines.

```
[Unit]
After=rbdmap.service
Requires=rbdmap.service
Before=mnt-rbd_luks_pool.mount
```
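After creating or changing the drop-in file, reload systemd so that it picks up the new dependencies.

```
sudo systemctl daemon-reload
```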
???+ note ""
In one case, systemd units were used on Debian 10 for some reason `ceph-rbdmap.service` instead of `rbdmap.service` (must be adjusted to lines `After=` and `Requires=`)
----
### Manual connection

If the dependencies of the systemd units are set correctly, the following command performs the RBD map, unlocks LUKS, and mounts everything the specified `.mount` unit needs (⇒ mounts both images in the described configuration).

```
systemctl start mnt-rbd_luks_pool.mount
```
### Manual disconnection

If the dependencies are set correctly, this command performs the `umount`, the LUKS `close`, and the RBD unmap.

```
systemctl stop rbdmap.service
```

(alternatively `systemctl stop ceph-rbdmap.service`)
### Resize

When resizing an encrypted image, you need to follow the order below; the crucial step is resizing the LUKS container with `cryptsetup --verbose resize`.

```
rbd resize rbd_pool_name/image_name --size 200T
cryptsetup --verbose resize image_name
mount /storage/rbd/image_name
xfs_growfs /dev/mapper/image_name
```
----
The Rados Block Device (**RBD**) is a block device that you can connect into your infrastructure. The connection must be done using a **Linux machine** (RBD connection to Windows is not yet implemented in a reliable manner). Subsequently, you can re-export the connected block device anywhere within your systems (e.g. a samba remount to your network). RBD is particularly suitable for use in centralized backup systems. RBD is a very specialized service that requires the user to have extensive experience in managing Linux devices. The service is intended for larger volumes of data - hundreds of TB. The block device can also be encrypted on your side (client side) using LUKS. Client-side encryption also means that the transmission of data over the network is encrypted, and in case of eavesdropping during transmission, the data cannot be decrypted. Access to the service is controlled by virtual organizations and corresponding groups.
!!! warning
    RBD connection is only possible from dedicated IPv4 addresses that are enabled on the firewall in our data centers. An RBD image can only be mounted on **ONE** machine at a time; it is not possible for each of your users to mount the same RBD on their workstation - in other words, RBD is not a clustered file system. Usage of clustered file systems over RBD must first be consulted with Data Care support.
???+ note "How to get RBD service?"
To connect to RBD service you have to contact support at:
`support@cesnet.cz`
----
## RBD elementary use cases
In the following section you can find the description of elementary use cases related to RBD service.
### Large dataset backups requiring local filesystem
If you have a centralized backup system (a script suite, Bacula, BackupPC…) requiring a local file system, then we recommend using the [RBD service](rbd-setup.md), see the figure below. The RBD image can be connected directly, as a block device, to the machine where the central backup system is running. RBD can then be equipped with snapshots, see the service description, as protection against unwanted overwriting or ransomware attacks.

*(Figure: large dataset backups via an RBD image attached to the backup server.)*
### Centralized shared storage for internal redistribution

If you need to store live data and provide the storage to individual users, then you can use the [RBD](rbd-setup.md) service, which you can connect to your infrastructure using a Linux machine. You can create a file system on the connected block device, or equip it with encryption, and then re-export it inside your infrastructure using, for example, samba, NFS, ftp, ssh, etc. (also in the form of containers ensuring the distribution of the protocols to your internal network). Client-side encryption also means that the data transmission over the network is encrypted and the data cannot be decrypted if the transmission is intercepted. The advantage is that you can create groups and manage rights according to your preferences, or use your local database of users and groups. The RBD block device can also be equipped with snapshots at the RBD level, so if data is accidentally deleted, it is possible to return to a snapshot from, for example, the previous day.

*(Figure: centralized shared storage re-exported to the internal network over an RBD image.)*
## RBD Data Reliability (Data Redundancy) - replicated vs erasure coding

The section below describes the available approaches to data redundancy applied to the object storage pool. The RBD service can be equipped with **replicated** or **erasure code (EC)** redundancy, or with **synchronous/asynchronous geographical replication**.
### Replicated
Your data is stored in three copies in the data center. In case one copy is corrupted, the original data is still readable in an undamaged form, and the damaged data is restored in the background. Using a service with the replicated flag also allows for faster reads, as it is possible to read from all replicas at the same time. Using a service with the replicated flag reduces write speed because the write operation waits for write confirmation from all three replicas.
???+ note "Suitable for?"
Suitable for smaller volumes of live data with a preference for reading speed (not very suitable for large data volumes).
### Erasure Coding (EC)
Erasure coding (EC) is a data protection method, similar to the dynamic RAID known from disk arrays. With erasure coding, data is divided into individual fragments, which are then stored with some redundancy across the data storage. Therefore, if some disks (or an entire storage server) fail, the data is still accessible and will be restored in the background. It is thus not possible for your data to end up on a single disk whose failure would mean losing your data.
???+ note "Suitable for?"
Suitable, for example, for storing large data volumes.
### RBD snapshots
Snapshots can be used at the RBD (replicated/erasure coding) level. Snapshots are controlled from the client side. [RBD snapshotting](rbd-setup.md) is one of the replacement options for the `tape_tape` policy - snapshots mirrored to another geographic location, see below.
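For illustration, client-side snapshot handling uses the standard `rbd snap` commands; a minimal sketch, with `pool_name/image_name` and the snapshot name as placeholders (snapshot availability at the RBD level must be arranged with us first, see above).

```
# create a read-only snapshot of the image
rbd snap create pool_name/image_name@backup-2024-01-01
# list snapshots of the image
rbd snap ls pool_name/image_name
# roll the image back to the snapshot (the image must not be in use)
rbd snap rollback pool_name/image_name@backup-2024-01-01
```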
### Synchronous geographical replication
Synchronous geographical replication protects against data center failure. Synchronous geographical replication degrades write speed because the system waits for a successful write confirmation at both geographic locations. If you feel that you need this service, please contact us.
### Asynchronous geographical replication
Asynchronous geographical replication partially protects against data center failure (a certain amount of data may be lost between the individual asynchronous synchronizations due to the time lag). However, with asynchronous geographical replication, in case of data corruption (e.g. ransomware), you can break the replication and save your data. If you feel that you need this service, please contact us.