RAID level 5 uses striping, which means the data is spread across the disks in the array, and it provides redundancy through distributed parity. RAID 5 is a cost-effective choice when both performance and redundancy are needed. A minimum of three disks is required. One important characteristic of RAID 5 is that reads are much faster than writes, due to the parity overhead. RAID 5 is simple to configure, and this article covers how to configure it on CentOS 7.
What is Parity?
RAID 5 is a type of RAID that offers redundancy using a technique known as “parity”. Parity is extra data that is calculated and stored alongside the data the user writes to the disks. It can be used to verify the integrity of stored data, and also to recalculate any “missing” data if some of your data cannot be read (such as when a drive fails).
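As a toy illustration of the principle (this is not how mdadm lays data out on disk, but the XOR idea is the same), parity can be computed and used to rebuild a lost block using plain shell arithmetic:

```shell
# Two data blocks, represented here as small integers
d1=210
d2=77

# Parity is the bitwise XOR of the data blocks
parity=$((d1 ^ d2))

# If the disk holding d1 fails, XOR-ing the survivors recovers it
recovered=$((parity ^ d2))
echo "$recovered"   # prints 210, the lost value of d1
```

The same property holds for any number of disks: XOR-ing all surviving blocks with the parity block reconstructs the missing one, which is why RAID 5 can tolerate exactly one failed disk.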
Hot Spare :-
A hot spare is a standby disk used as a fail-over mechanism to provide reliability in system configurations. When a hard disk fails, the hot spare disk is switched into operation.
Hot Swap :-
Hot swapping is a term used to describe the ability to replace a failed disk drive without rebooting the machine.
Before you begin, check that the disks you plan to use are available.
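The original command is not shown here; a common way to do this is with lsblk, which lists all block devices (the disk names sdb through sdf used later in this article are examples and will differ between systems):

```shell
# List all block devices and their partitions; confirm that the disks
# intended for the array (e.g. /dev/sdb through /dev/sdf) appear
lsblk

# Alternatively, print the partition tables of the candidate disks:
# fdisk -l /dev/sd[b-f]
```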
Once you have confirmed the disks are available, install the mdadm package, which is required for RAID configuration.
[root@server2 ~]# yum install mdadm -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: centos.myfahim.com
 * extras: centos.myfahim.com
 * updates: centos.myfahim.com
. . .
  Verifying  : mdadm-4.0-5.el7.x86_64     1/2
  Verifying  : mdadm-3.2.6-31.el7.x86_64  2/2
Updated:
  mdadm.x86_64 0:4.0-5.el7
Complete!
Next, check whether any RAID superblock already exists on the disks, using the following command.
[root@server2 Desktop]# mdadm -E /dev/sd[b-f]1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.
mdadm: No md superblock detected on /dev/sdd1.
mdadm: No md superblock detected on /dev/sde1.
mdadm: No md superblock detected on /dev/sdf1.
Next, create the md device. Run the following command, which specifies the RAID level, the number of devices, and the device names; the listed disks are added to the array.
[root@server2 Desktop]# mdadm --create /dev/md5 --level=5 --raid-devices=3 /dev/sd[b-d]1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
After that, verify that the RAID level of the device matches your configuration.
[root@server2 Desktop]# mdadm -D /dev/md5
Next, create a file system on the RAID device; this is required before the device can be mounted. Run the following command:
[root@server2 Desktop]# mkfs.ext4 /dev/md5
Then, create a mount point directory named raid5 using the mkdir command.
[root@server2 Desktop]# mkdir /raid5
Next, mount the RAID device permanently using its UUID. Run the following command to display the UUID:
[root@server2 raid1]# blkid /dev/md5
/dev/md5: UUID="3a27f241-d7c2-4e56-893e-93042ae62398" TYPE="ext4"
Then add an entry to /etc/fstab using that UUID and the mount point, so the RAID device is mounted persistently.
[root@server2 raid1]# vim /etc/fstab
UUID=3a27f241-d7c2-4e56-893e-93042ae62398 /raid5 ext4 defaults 0 0
Execute the mount command and check the status of the mounted RAID device.
[root@server2 ~]# mount -a
[root@server2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        15G  4.6G   11G  31% /
devtmpfs        741M     0  741M   0% /dev
tmpfs           749M  140K  749M   1% /dev/shm
tmpfs           749M  8.9M  741M   2% /run
tmpfs           749M     0  749M   0% /sys/fs/cgroup
/dev/sda1       497M  116M  382M  24% /boot
/dev/md5        9.8G   37M  9.2G   1% /raid5
Now, go to the mount point and create a file and a directory in it. When you list the contents of the mount point, you will see both.
[root@server2 ~]# cd /raid5/
[root@server2 ~]# mkdir dir1
[root@server2 ~]# touch fail.txt
[root@server2 raid5]# ls -l
total 24
drwxr-xr-x. 2 root root 4096 Nov 20 13:20 dir1
-rw-r--r--. 1 root root   25 Nov 20 13:20 fail.txt
Now check the fault tolerance. First, add a spare disk to the RAID 5 device with the following command.
[root@server2 ~]# mdadm --manage --add /dev/md5 /dev/sde1
Later, check the availability of the spare disk as follows; the details of the array and its devices are listed as output.
[root@server2 ~]# mdadm -D /dev/md5
Now test the fault tolerance by manually failing a device. Run the following command, naming the device to be failed.
[root@server2 ~]# mdadm --manage --fail /dev/md5 /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md5
You can also check the status of that failed device.
[root@server2 ~]# mdadm -D /dev/md5
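While the array rebuilds onto the spare, progress can also be read from /proc/mdstat (on a live system, simply run cat /proc/mdstat). The sample output below is a made-up illustration with hypothetical device names and figures; the snippet shows how to extract just the completion percentage:

```shell
# Hypothetical sample of /proc/mdstat output during a rebuild
sample='md5 : active raid5 sde1[3] sdc1[1](F) sdd1[2] sdb1[0]
      10475520 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
      [==>..................]  recovery = 12.5% (1309440/10475520) finish=3.2min speed=47620K/sec'

# Extract the completion percentage from the recovery line
echo "$sample" | grep -o 'recovery = [0-9.]*%'   # prints: recovery = 12.5%
```

The (F) flag marks the failed device, and [3/2] [U_U] indicates that only two of the three array members are currently up.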
After the rebuild completes, go to the mount point and check that the data is still available.
[root@server2 ~]# cd /raid5/
[root@server2 raid5]# ls -l
total 24
drwxr-xr-x. 2 root root 4096 Nov 20 13:20 dir1
-rw-r--r--. 1 root root   25 Nov 20 13:20 fail.txt
Finally, don't forget to save the RAID configuration.
[root@server2 raid5]# mdadm --detail --scan --verbose >> /etc/mdadm.conf
With this, the method to configure RAID 5 on CentOS 7 comes to an end.
Thank you for using Linux Help.
For more help topics, browse our website www.linuxhelp.com.