Introduction
AKLWEB HOST Bare Metal Servers offer access to high-performance physical servers with no virtualization layer between the user and the server. Bare metal servers have no resource limits and let you use the hardware's full potential for intensive workloads. The wide range of bare metal servers offered by AKLWEB HOST features configurations with different processors, memory, storage, and so on.
Redundant Array of Independent Disks (RAID) is a storage virtualization method that allows you to combine multiple disks to provide data redundancy, higher performance, or both. Using RAID, you can configure multiple disks attached to a server according to your workload and your priority between performance and data safety.
This guide explains the different disk configuration options offered with various AKLWEB HOST Bare Metal Servers. It also gives an overview of the different RAID levels (0, 1, 5, 10) and walks you through the steps to create, mount, and delete a RAID array with the mdadm utility using the extra disks available on the bare metal server.
Prerequisites
- Deploy an AKLWEB HOST Bare Metal instance with Ubuntu 22.04.
Disk Configuration Options
AKLWEB HOST Bare Metal Servers with an Intel processor have two SSDs, while servers with an AMD processor have the same two SSDs plus additional high-speed NVMe SSDs. This section explains the different options you can choose for configuring the disks attached to the server while provisioning a new bare metal server.
The following are the available disk configuration options.
- RAID1 – Software RAID
- No RAID – Disks formatted/mounted
- No RAID – Extra disks unformatted
The RAID1 – Software RAID option combines the two main SSDs connected to the server into a RAID level 1 array. The RAID array mounts as a single storage device with the capacity of a single disk. It leaves the additional NVMe disks (if any) unmounted and unformatted.
The No RAID – Disks formatted/mounted option formats the two main SSDs connected to the server, uses one of them as the boot volume, and mounts the other as an empty disk. It leaves the additional NVMe disks (if any) unmounted and unformatted.
The No RAID – Extra disks unformatted option formats only one of the two main SSDs connected to the server to use as the boot volume and leaves all other disks, including the additional NVMe disks (if any), unmounted and unformatted.
Overview of RAID
Redundant Array of Independent Disks (RAID) is a storage virtualization method that combines multiple disks in an array to achieve higher performance, higher redundancy, or both. A RAID array appears as a single disk to the Operating System (OS), and you can interact with it just like a normal storage disk. It works by placing data on multiple disks so that input and output operations can overlap. Different RAID levels use different methods to split the data among the disks in the array.
The methods used to split data in a RAID array are disk mirroring and disk striping. The disk mirroring method places the same copy of the data on more than one disk, so that if one disk fails the data remains intact, achieving data redundancy. The disk striping method spreads data blocks across more than one disk, achieving higher capacity and performance. However, striping alone provides no redundancy, so a striped array cannot tolerate a disk failure unless parity is added.
The following are the levels of a RAID array.
- RAID0
- RAID1
- RAID5
- RAID10
RAID0 arrays use the data striping method to merge all the attached disks, providing their combined capacity and high-speed read operations. This level focuses on performance and storage capacity. It requires a minimum of two disks.
RAID1 arrays use the data mirroring method to merge all the attached disks while providing the capacity of a single disk. The other disks attached to the array contain a copy of the same data as the first disk. This level focuses on data redundancy and requires a minimum of two disks.
RAID5 arrays use the data striping method with distributed parity. The parity information allows the array to withstand a single disk failure. A RAID5 array provides the combined capacity of all disks minus one disk's worth of parity, along with high-speed read operations. This level focuses on performance, storage, and redundancy. It requires a minimum of three disks.
RAID10 arrays use a combination of the data striping and mirroring methods. The attached disks are grouped into mirrored (RAID level 1) pairs that are then striped together as a RAID level 0 array, providing high-speed read operations but only half the total capacity of the attached disks. This level focuses on performance and redundancy. It requires a minimum of four disks.
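As a quick worked example, assume four 1.5 TB disks: RAID0 yields roughly 6 TB of usable capacity, RAID1 yields 1.5 TB, RAID5 yields roughly 4.5 TB, and RAID10 yields roughly 3 TB.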
A single bare metal server can have multiple RAID arrays, provided it has enough available disks. You can select the RAID level according to your priority between performance and data safety. If your use case requires high redundancy, you may go with RAID1 or RAID10. If your use case requires high performance, you may go with RAID0 or RAID5.
Create a RAID Array
As explained in the previous section, a RAID array is a logical storage device that lets an array of individual disks act as a single disk. Running RAID requires a RAID controller, which does all the underlying work. The Linux operating system provides built-in software RAID capability, and you can use the mdadm utility to create, manage, and monitor software RAID arrays.
Note: You cannot follow the steps to set up a RAID array on an AKLWEB HOST Bare Metal Server with an Intel processor, as those servers do not have enough spare disks available to configure an additional RAID array.
You can create a RAID array using the mdadm --create command. The following is an example command for creating a RAID1 array with two disks.
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
The above command uses the --create flag to define the operation type, followed by the path of the logical storage device to create. You define the desired level of the RAID array using the --level flag and use the --raid-devices flag to define the number of disks, followed by the paths of the storage disks.
You can use the same command to configure any RAID array level, including RAID0, RAID1, RAID5, RAID10, and so on.
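For example, the following commands illustrate the same syntax for other RAID levels. The array names and disk paths below are placeholders, and each array requires its own set of unused disks.
# mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/nvme2n1 /dev/nvme3n1
# mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
# mdadm --create /dev/md3 --level=10 --raid-devices=4 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1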
Fetch the disks attached to the server.
# lsblk
The above command shows all the block devices connected to the server. You can identify and select the disks for forming a RAID array from the output.
Output.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 447.1G 0 disk
└─sda1 8:1 0 447.1G 0 part /
sdb 8:16 0 447.1G 0 disk
nvme1n1 259:0 0 1.5T 0 disk
nvme3n1 259:1 0 1.5T 0 disk
nvme0n1 259:2 0 1.5T 0 disk
nvme4n1 259:3 0 1.5T 0 disk
nvme5n1 259:4 0 1.5T 0 disk
nvme2n1 259:5 0 1.5T 0 disk
Create a RAID array.
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
The above command creates a RAID1 array using two of the additional NVMe disks.
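After creation, the array starts an initial synchronization in the background. You can optionally watch its progress through the kernel's RAID status file.
# cat /proc/mdstat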
Fetch RAID array details.
# mdadm --detail /dev/md0
The above command shows the details about the RAID array associated with the specified logical storage device.
Output.
/dev/md0:
Version : 1.2
Creation Time : Thu Oct 6 23:00:21 2022
Raid Level : raid1
Array Size : 1562681664 (1490.29 GiB 1600.19 GB)
Used Dev Size : 1562681664 (1490.29 GiB 1600.19 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Thu Oct 6 23:06:07 2022
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : bitmap
Name : guest:1 (local to host guest)
UUID : b1285d12:87b7edf9:4528868a:d754a8db
Events : 1
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n1
1 259 4 1 active sync /dev/nvme1n1
Fetch the logical block device.
# lsblk /dev/md0
Output.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
md0 9:0 0 1.5T 0 raid1
The output confirms the size and the level of the RAID array and shows that it is not yet mounted to the server. The next section demonstrates the steps to mount the RAID array to the server.
Mount a RAID Array
Creating a RAID array forms a logical storage device. By default, this device is not persistent across reboots and does not contain a filesystem. This section explains the steps to make the RAID array persistent, format it with a filesystem, and mount it to the server.
Update the mdadm configuration file.
# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
The above command executes the mkconf script and overwrites the mdadm configuration file with its output. The script examines the system state and generates a configuration containing all the RAID arrays. If you do not wish to regenerate the whole configuration, you can manually edit mdadm.conf using a text editor to add or remove RAID array entries.
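As an alternative to the mkconf script, you can append the current array definitions to the configuration file directly. This is a minimal sketch; review the file afterwards and remove any duplicate ARRAY entries.
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf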
Update the initial RAM filesystem.
# update-initramfs -u
The above command updates the initial RAM filesystem. It ensures the availability of the RAID arrays mentioned in mdadm.conf during the early boot process.
Format the RAID array.
# mkfs.xfs /dev/md0
The above command formats the logical storage device with the xfs filesystem.
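You can optionally confirm the new filesystem and note the UUID it reports.
# blkid /dev/md0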
Create a new directory.
# mkdir /mnt/md0
The above command creates a new directory named md0 in the /mnt directory, which serves as the mount point for the logical storage device.
Edit the filesystem table file.
# nano /etc/fstab
Add the following content to the file and save the file using CTRL + X, then Y, then ENTER.
/dev/md0 /mnt/md0 xfs defaults 0 0
The above configuration maps the /dev/md0 logical storage device to the /mnt/md0 mount point. It ensures that the array gets mounted during the boot process.
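Optionally, if you prefer an entry that does not depend on the /dev/md0 device name, you can reference the filesystem by UUID instead. The value below is a placeholder; use the UUID reported by blkid /dev/md0.
UUID=<your-array-uuid> /mnt/md0 xfs defaults 0 0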
Mount the RAID array.
# mount -a
The above command reads the filesystem table configuration file and mounts all the filesystems listed in it.
Verify the mounted RAID array.
# lsblk /dev/md0
Output.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
md0 9:0 0 1.5T 0 raid1 /mnt/md0
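You can also confirm the mount and view the usable capacity of the array.
# df -h /mnt/md0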
Replace a Failed Disk in a RAID Array
The individual disks in a RAID array might fail and cause a loss of redundancy. Replace failed disks as soon as the failure occurs to keep the RAID array healthy. This section explains the steps to mark a disk as failed, remove it, and add a replacement disk to the RAID array. You cannot replace a failed disk in a RAID0 array because the array is not redundant and fails as soon as a disk failure occurs.
Information: You can manually trigger a disk failure to test your RAID configuration. Refer to the RAID documentation overview for more information.
Mark the disk as failed.
# mdadm /dev/md0 --fail /dev/nvme1n1
The above command marks the specified disk as failed. Marking a disk as failed allows the array to rebuild itself using an existing spare disk or a new disk added to the array and return to a healthy state.
Remove the disk from the array.
# mdadm /dev/md0 --remove /dev/nvme1n1
The above command removes the specified disk from the array. If you do not have extra disks attached to the server, you can shut down the server and replace the failed disk with a new disk to continue with the next steps.
Add the new disk to the array.
# mdadm /dev/md0 --add /dev/nvme2n1
The above command adds the specified disk to the array. The RAID array uses the new disk to rebuild itself and restore it to a healthy state.
Verify the update.
# mdadm --detail /dev/md0
You can also monitor the rebuilding process using the watch mdadm --detail /dev/md0 command. The total time to complete the rebuilding process depends on the size and the speed of the disks.
Add a Spare Disk in a RAID Array
You can add spare disks to redundant arrays such as RAID1, RAID5, and RAID10. A spare disk takes the place of a failed disk as soon as a disk failure occurs to keep the RAID array in a healthy state. This section explains the steps to add a spare disk to a RAID array. However, you cannot add a spare disk to a RAID0 array because the array is not redundant and fails as soon as a disk failure occurs.
Information: You can also increase the size of your RAID array with the --grow flag. Refer to the RAID documentation overview for more information.
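As a minimal sketch of how the --grow flag is used, the following command converts a spare into an active member by growing a two-disk RAID1 array to three active devices; adjust the device count to your array.
# mdadm --grow /dev/md0 --raid-devices=3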
Add a spare disk to the RAID array.
# mdadm /dev/md0 --add /dev/nvme3n1
Verify the update.
# mdadm --detail /dev/md0
Delete a RAID Array
When a RAID array is no longer required, you can delete it and remove the superblock from all the associated disks to repurpose the individual disks.
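If the array is still mounted from the earlier steps, unmount it before stopping the array. This assumes the /mnt/md0 mount point used above.
# umount /mnt/md0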
Stop and remove the RAID array.
# mdadm --stop /dev/md0
# mdadm --remove /dev/md0
The above commands stop and remove the specified RAID array from the server.
Update the mdadm configuration file.
# /usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf
The above command regenerates the mdadm configuration file, which no longer includes the deleted RAID array.
Update the initial RAM filesystem.
# update-initramfs -u
Using a text editor, remove the RAID array's entry from the filesystem table file.
# nano /etc/fstab
Remove superblock from associated disks.
# mdadm --zero-superblock /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
The mdadm utility uses the superblock header to assemble and manage a disk as part of an array. The above command removes the superblock header from the specified storage devices.
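You can optionally confirm that a disk no longer carries RAID metadata; after zeroing the superblock, the following command reports that no md superblock is detected on the disk.
# mdadm --examine /dev/nvme0n1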
Conclusion
You learned about the different disk configuration options offered with various AKLWEB HOST Bare Metal Servers and got an overview of RAID arrays. You also performed the steps to create, mount, and delete a RAID array. This information helps you implement RAID technology on your bare metal server. Refer to the mdadm documentation for more information about configuring software RAID arrays. If you plan to use a RAID array in a production environment, you should set up email notifications to get alerted in case of a disk failure. Refer to the RAID documentation overview for more information.
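As a minimal sketch of such alerting, you can add a MAILADDR line to /etc/mdadm/mdadm.conf (the address below is a placeholder) and send a test alert for each array with the mdadm monitor mode; a working mail transfer agent on the server is assumed.
MAILADDR admin@example.com
# mdadm --monitor --scan --oneshot --test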