How to configure and test Raid 10 on Debian 11.3

To Configure And Test Raid 10 on Debian 11.3

Introduction:

RAID 10, also known as RAID 1+0, is a RAID configuration that combines disk mirroring and disk striping to protect data that requires the least number of four disks and stripes data across mirrored pairs. This tutorial will explain Raid 10 Configuration and testing on Debian 11.3.

Configuration Process:

Step 1: Check the version of the OS by using the below command

root@LinuxHelp: ~# lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 11 (bullseye)
Release:        11
Codename:       bullseye
Now list the disk by executing the following command

Step 2: Install Prerequisites by using the below command

root@linuxhelp:~# apt-get install mdadm
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following additional packages will be installed:
  exim4-base exim4-config exim4-daemon-light gsasl-common libgnutls-dane0
  libgnutls30 libgsasl7 libmailutils7 libntlm0 mailutils mailutils-common
Suggested packages:
  exim4-doc-html | exim4-doc-info eximon4 spf-tools-perl swaks gnutls-bin
  mailutils-mh mailutils-doc dracut-core
The following NEW packages will be installed:
  exim4-base exim4-config exim4-daemon-light gsasl-common libgnutls-dane0
  libgsasl7 libmailutils7 libntlm0 mailutils mailutils-common mdadm
The following packages will be upgraded:
  libgnutls30
1 upgraded, 11 newly installed, 0 to remove and 66 not upgraded.
Need to get 5,671 kB/7,012 kB of archives.
After this operation, 12.6 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://deb.debian.org/debian bullseye/main amd64 mdadm amd64 4.1-11 [457 k                                                                                                                      B]
Get:2 http://security.debian.org/debian-security bullseye-security/main amd64 li                                                                                                                    bgnutls-dane0 amd64 3.7.1-5+deb11u2 [395 kB]

Step 3: Now list the disk by executing the below command

root@linuxhelp:~# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   60G  0 disk
├─sda1   8:1    0   59G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0  975M  0 part [SWAP]
sdb      8:16   0    2G  0 disk
sdc      8:32   0    2G  0 disk
sdd      8:48   0    2G  0 disk
sde      8:64   0    2G  0 disk
sdf      8:80   0    2G  0 disk
sr0     11:0    1 1024M  0 rom

Step 4: Now I will create partition of SDB, SDC, SDD, SDE, SDF disk by executing the below command

root@linuxhelp:~# sudo fdisk /dev/sda
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x4348c9fd.
Press n to create partition
Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Press p to choose the partition type
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-41943039, default 2048): 
Enter the size of the partition
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-41943039, default 41943039): +1G
Created a new partition 1 of type 'Linux' and of size 2 GiB.

Press t to change the partition code
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'.
Press w to write the partition
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Step 5: Then follow the same process to create partition to other drives by using the below command

root@linuxhelp:~# sudo fdisk /dev/sdb 
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
.
.
.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@linuxhelp:~# sudo fdisk /dev/sdc
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
.
.
.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@linuxhelp:~# sudo fdisk /dev/sdd
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
.
.
.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

root@linuxhelp:~#sudo  fdisk /dev/sde
Welcome to fdisk (util-linux 2.34).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
.
.
.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Step 6: After the partition is created then list the disk. If the partition is created or not by using the below command

root@linuxhelp:~# lsblk
 NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   60G  0 disk
├─sda1   8:1    0   59G  0 part /
├─sda2   8:2    0    1K  0 part
└─sda5   8:5    0  975M  0 part [SWAP]
sdb      8:16   0    2G  0 disk
└─sdb1   8:17   0    2G  0 part
sdc      8:32   0    2G  0 disk
└─sdc1   8:33   0    2G  0 part
sdd      8:48   0    2G  0 disk
└─sdd1   8:49   0    2G  0 part
sde      8:64   0    2G  0 disk
└─sde1   8:65   0    2G  0 part
sdf      8:80   0    2G  0 disk
└─sdf1   8:81   0    2G  0 part
sr0     11:0    1 1024M  0 rom

Step 7: After the partition is created then Create RAID using those partitions by using the below command

root@linuxhelp:~# sudo mdadm -C /dev/md0 -l 10 -n 4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Step 8: After RAID created now format the raid to ext4 file system

root@linuxhelp:~# sudo mkfs.ext4 /dev/md0
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 1047040 4k blocks and 262144 inodes
Filesystem UUID: cef6e3ef-5c15-4516-9c3a-fcdbcba147d9
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done

Step 9: Now check the details about RAID10 by using the below command

root@linuxhelp:~# sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Sep 28 08:35:40 2022
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4

     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Sep 28 08:36:06 2022
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : linuxhelp:0  (local to host linuxhelp)
              UUID : 669fb7db:74033e62:023c7f1d:280f40d3
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync set-A   /dev/sdb1
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1
       3       8       65        3      active sync set-B   /dev/sde1

Step 10: Now create a directory for mount the raid by executing the below command

root@linuxhelp:~# mkdir data

Step 11: After the directory is created now mount the raid to the directory by using the below command

root@linuxhelp:~# sudo mount /dev/md0 data/

Step 12: Now list the disk if the raid is mounted or not by using the below command

root@linuxhelp:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.5G     0  1.5G   0% /dev
tmpfs           293M  1.5M  291M   1% /run
/dev/sda1        58G  7.3G   48G  14% /
tmpfs           1.5G     0  1.5G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           293M   96K  292M   1% /run/user/1000
/dev/md0        3.9G   24K  3.7G   1% /root/data

Step 13: After the raid is mounted. Add files to the data directory for test the raid 10 by using the below command

root@linuxhelp:~# cd data/
root@linuxhelp:~/data#mkdir a b c
root@linuxhelp:~/data# touch ab ac ad

Step 14: Now list the directory by using the below command

root@linuxhelp:~/data# ls -la
total 40
drwxr-xr-x 6 root root  4096 Sep 28 08:43 .
drwx------ 6 root root  4096 Sep 28 08:38 ..
drwxr-xr-x 2 root root  4096 Sep 28 08:42 a
-rw-r--r-- 1 root root     0 Sep 28 08:43 ab
-rw-r--r-- 1 root root     0 Sep 28 08:43 ac
-rw-r--r-- 1 root root     0 Sep 28 08:43 ad
drwxr-xr-x 2 root root  4096 Sep 28 08:42 b
drwxr-xr-x 2 root root  4096 Sep 28 08:42 c
drwx------ 2 root root 16384 Sep 28 08:36 lost+found

Step 15: Now fail any one drives from the Raid 10 by using the below command

root@linuxhelp:~/data# sudo mdadm /dev/md0 -f /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

Step 16: After that check, the details of raid 10 whether the drives failed are not by using the below command

root@linuxhelp:~/data# sudo  mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Wed Sep 28 08:35:40 2022
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Sep 28 08:39:01 2022
             State : clean, degraded
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : linuxhelp:0  (local to host linuxhelp)
              UUID : 669fb7db:74033e62:023c7f1d:280f40d3
            Events : 21

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync set-B   /dev/sdc1
       2       8       49        2      active sync set-A   /dev/sdd1

       3       8       65        3      active sync set-B   /dev/sde1

       0       8       17        -      faulty   /dev/sdb1

Step 17: Now list the disk by using the below command

root@linuxhelp:~/data# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda       8:0    0   20G  0 disk  
├─sda1    8:1    0  3.7G  0 part  /swap
├─sda2    8:2    0    1K  0 part  
├─sda5    8:5    0  976M  0 part  /boot
└─sda6    8:6    0 15.3G  0 part  /
sdb       8:16   0   20G  0 disk  
└─sdb1    8:17   0    1G  0 part  
sdc       8:32   0   20G  0 disk  
└─sdc1    8:33   0    1G  0 part  
  └─md0   9:0    0    2G  0 raid6 /root/data
sdd       8:48   0    2G  0 disk  
└─sdd1    8:49   0    1G  0 part  
sde       8:64   0    2G  0 disk  
└─sde1    8:65   0    1G  0 part  
  └─md0   9:0    0    2G  0 raid6 /root/data
sdf       8:80   0    2G  0 disk  
└─sdf1    8:81   0    1G  0 part  
sr0      11:0    1  1.9G  0 rom 

Step 18: Now add one new drive to the raid 10 by executing the below command

root@linuxhelp:~/data# sudo mdadm /dev/md0 --add /dev/sde1
mdadm: added /dev/sde1

Step 19: Now check the details about raid if the drive is added or not by using the below command

root@linuxhelp:~/data# sudo mdadm --detail /dev/md0 
/dev/md0:
           Version : 1.2
     Creation Time : Thu Sep 29 08:14:17 2022
        Raid Level : raid10
        Array Size : 4188160 (3.99 GiB 4.29 GB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 4
     Total Devices : 5
       Persistence : Superblock is persistent

       Update Time : Mon Oct  3 03:26:20 2022
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 1
     Spare Devices : 0

            Layout : near=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : linuxhelp:0  (local to host linuxhelp)
              UUID : b080a0e2:49c2cbe5:f58fe2c6:f7b4e5b3
            Events : 38

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync set-A   /dev/sda1
       4       8       65        1      active sync set-B   /dev/sde1
       2       8       33        2      active sync set-A   /dev/sdc1
       3       8       49        3      active sync set-B   /dev/sdd1

       1       8       17        -      faulty   /dev/sdb1

Step 20: Now check the disk if the drives are mounted or not by using the below command

root@linuxhelp:~/data# lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE   MOUNTPOINT
sda       8:0    0    2G  0 disk   
└─sda1    8:1    0    2G  0 part   
  └─md0   9:0    0    4G  0 raid10 /root/data
sdb       8:16   0    2G  0 disk   
└─sdb1    8:17   0    2G  0 part   
  └─md0   9:0    0    4G  0 raid10 /root/data
sdc       8:32   0    2G  0 disk   
└─sdc1    8:33   0    2G  0 part   
  └─md0   9:0    0    4G  0 raid10 /root/data
sdd       8:48   0    2G  0 disk   
└─sdd1    8:49   0    2G  0 part   
  └─md0   9:0    0    4G  0 raid10 /root/data
sde       8:64   0    2G  0 disk   
└─sde1    8:65   0    2G  0 part   
  └─md0   9:0    0    4G  0 raid10 /root/data
sdf       8:80   0   60G  0 disk   
├─sdf1    8:81   0   59G  0 part   /
├─sdf2    8:82   0    1K  0 part   
└─sdf5    8:85   0  975M  0 part   [SWAP]
sr0      11:0    1 1024M  0 rom    

Conclusion:

We have reached the end of this article. In this guide, we have walked you through the steps required to configure and test Raid 10 on Debian 11.3. Your feedback is much welcome.

FAQ
Q
Is RAID faster than a single drive?
A
A common RAID setup for volumes that are larger, faster, and safer than any single drive.
Q
Which RAID is the fastest?
A
RAID 0 is the only RAID type without fault tolerance. It is also by far the fastest RAID type. RAID 0 works by using striping, which disperses system data blocks across several different disks.
Q
Why is RAID 10 the best?
A
RAID 10 is secure because mirroring duplicates all your data. It's fast because the data is striped across multiple disks; chunks of data can be read and written to different disks simultaneously. To implement RAID 10, you need at least four physical hard drives. You also need a disk controller that supports RAID.
Q
How many minimum Disk required to configure Raid 10 setup?
A
you must have 4 hard disks to configure the Raid 10 setup.
Q
What is Raid 10?
A
RAID 10, also known as RAID 1+0, is a RAID configuration that combines disk mirroring and disk striping to protect data. It requires a minimum of four disks and stripes of data across mirrored pairs. As long as one disk in each mirrored pair is functional, data can be retrieved. If two disks in the same mirrored pair fail, all data will be lost because there is no parity in the striped sets.