Linux RAID disk arrays and array cards (including the steps to create a RAID 5 array)


Preface

  • RAID stands for Redundant Array of Independent Disks
  • Simply put, RAID combines multiple independent physical hard disks in different ways into a hard disk group (a logical hard disk), providing higher storage performance than a single hard disk as well as data redundancy
  • From the user's perspective, the resulting disk group behaves like a single hard disk that can be partitioned, formatted and so on
  • The different ways of combining the disks are called RAID levels; the various RAID levels offer different balances of speed, safety and cost
  • Selecting an appropriate RAID level for the actual situation lets the storage system meet the user's requirements for availability, performance and capacity

1. Detailed explanation of RAID hard disk array

  • Multiple independent physical hard disks are combined in different ways into a hard disk group (a logical hard disk) to provide higher storage performance than a single hard disk and to provide data redundancy
  • RAID is divided into levels, and each level makes a different trade-off between data reliability and read/write performance
  • The commonly used RAID levels are as follows:
    • RAID 0
    • RAID 1
    • RAID 5
    • RAID 6
    • RAID 1+0 etc.

1. Introduction to RAID 0 Disk Array


  • RAID 0 (striped storage)
  • RAID 0 splits data into consecutive stripes (at the bit or byte level) and reads/writes them in parallel across multiple disks, so it offers a high data transfer rate, but it has no data redundancy
  • RAID 0 simply improves performance and provides no guarantee of data reliability; the failure of any one disk affects all of the data. It is effectively N hard disks combined in parallel
  • RAID 0 cannot be used where data security requirements are high (a minimal mdadm sketch follows below)
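As a point of reference, a striped array like this can be created with mdadm. The following is only a minimal sketch under assumed partition names (/dev/sdb1 and /dev/sdc1), not part of the walkthrough later in this article.

# minimal sketch: build a 2-disk RAID 0 from two assumed partitions
mdadm -Cv /dev/md0 -l0 -n2 /dev/sdb1 /dev/sdc1
cat /proc/mdstat          # verify the striped array is active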

2. Introduction to RAID 1 Disk Array


  • RAID 1 (mirrored storage)
  • Data redundancy is achieved through disk mirroring: mutually backed-up copies of the data are kept on a pair of independent disks
  • When the original disk is busy, data can be read directly from the mirror copy, so RAID 1 can improve read performance
  • RAID 1 has the highest cost per unit of capacity of any RAID level, but it provides high data security and availability; when a disk fails, the system automatically switches to the mirror disk for reads and writes without having to reconstruct the failed data (a minimal mdadm sketch follows below)
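For comparison, a mirrored pair could be created the same way. Again, this is a minimal sketch with assumed partition names, not the array built later in this article.

# minimal sketch: build a 2-disk RAID 1 mirror from two assumed partitions
mdadm -Cv /dev/md1 -l1 -n2 /dev/sdb1 /dev/sdc1
mdadm -D /dev/md1         # both members should show as "active sync"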

3. Introduction to RAID 5 Disk Array


  • N disks (N >= 3) form the array; each piece of data is split into N-1 data stripes plus one parity block, and these N blocks are stored cyclically and evenly across the N disks
  • All N disks read and write at the same time, so read performance is very high; because parity has to be calculated, write performance is relatively low
  • Disk utilization is (N-1)/N
  • Reliability is high: one disk may fail without any data being lost (the parity mechanism is illustrated with a small example below)
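The parity mechanism itself is essentially an XOR across the data stripes, which is why any single missing stripe can be rebuilt from the rest. The tiny shell sketch below uses made-up byte values to illustrate the idea; it is conceptual only and has nothing to do with mdadm.

# conceptual sketch of RAID 5 parity with two data bytes and one parity byte
D1=$((0xA5)); D2=$((0x3C))
P=$(( D1 ^ D2 ))                              # parity stored on the third disk
printf 'parity      = 0x%02X\n' "$P"
printf 'rebuilt D1  = 0x%02X\n' $(( P ^ D2 )) # lost stripe recovered from the rest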

4. Introduction to RAID 6 Disk Array


  • N disks (N >= 4) form the array; disk utilization is (N-2)/N
  • Compared with RAID 5, RAID 6 adds a second, independent block of parity information
  • The two independent parity sets use different algorithms, so even if two disks fail at the same time the data remains usable
  • Compared with RAID 5 there is a larger "write penalty", so write performance is poorer (a minimal creation sketch follows below)
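Creation is analogous to RAID 5, only with level 6 and at least four members. A minimal sketch with assumed partition names:

# minimal sketch: build a 4-disk RAID 6 (two disks' worth of capacity go to parity)
mdadm -Cv /dev/md6 -l6 -n4 /dev/sd[b-e]1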

5. RAID comparison table

RAID level   Number of disks   Disk utilization   Parity   Fault tolerance                       Write performance
RAID 0       N                 N                  No       None                                  N times that of a single disk
RAID 1       N (even number)   N/2                No       One disk may fail                     Data must be written to both mirror disks
RAID 5       N >= 3            (N-1)/N            Yes      One disk may fail                     Parity must be calculated and written
RAID 6       N >= 4            (N-2)/N            Yes      Two disks may fail                    Two parity sets must be calculated and written
RAID 1+0     N >= 4 (even)     N/2                No       One disk per mirrored pair may fail   N/2 disks are written simultaneously

6. Introduction to RAID 1+0 Disk Array


  • RAID 1+0 (mirror first, then stripe)
  • N disks (an even number, N >= 4) are mirrored in pairs, and the mirrored pairs are then combined into a RAID 0
  • Disk utilization is N/2
  • N/2 disks write at the same time; N disks read at the same time
  • High performance and high reliability (a nested-creation sketch follows below)
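A nested layout like this can be built by creating the mirrors first and then striping across them. The sketch below assumes four empty partitions named /dev/sdb1 through /dev/sde1; note that mdadm also offers a built-in level 10 that achieves a similar result in a single command.

# minimal sketch of "mirror first, then stripe" with assumed partition names
mdadm -Cv /dev/md1 -l1 -n2 /dev/sdb1 /dev/sdc1    # first mirrored pair
mdadm -Cv /dev/md2 -l1 -n2 /dev/sdd1 /dev/sde1    # second mirrored pair
mdadm -Cv /dev/md10 -l0 -n2 /dev/md1 /dev/md2     # stripe across the two mirrors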

2. Introduction to Array Cards

1. Introduction to Array Card

  • An array card (RAID card) is an expansion board used to implement RAID functionality
  • It is usually made up of a series of components such as an I/O processor, a hard disk controller, hard disk connectors and cache
  • Different RAID cards support different RAID levels:
    for example RAID 0, RAID 1, RAID 5, RAID 10, etc.
  • RAID card interface types:
    IDE, SCSI, SATA and SAS interfaces (a quick way to check for a hardware RAID controller is shown below)
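On a Linux host, one quick (though not exhaustive) way to see whether a hardware RAID controller is present is to look for it on the PCI bus:

# list PCI devices and filter for RAID controllers (output depends on the hardware)
lspci | grep -i raid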

2. Array card cache

  • The cache is where the RAID card exchanges data with the external bus: the card first transfers data into the cache, and the cache then exchanges data with the external data bus
  • The size and speed of the cache are important factors that directly affect the actual transfer speed of the RAID card
  • Different RAID cards ship with different amounts of cache memory at the factory, generally ranging from several megabytes to hundreds of megabytes

3. Steps to create a software RAID 5 array

1. Add hard disks to the virtual machine

Before adding the disks, don't forget to shut the virtual machine down first.
Then check the disk partitions to verify that the hard disks were added successfully.

[root@localhost ~]# fdisk -l        

Disk /dev/sda: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x0009ac95

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    12584959     6291456   83  Linux
/dev/sda2        12584960    54527999    20971520   83  Linux
/dev/sda3        54528000    62916607     4194304   82  Linux swap / Solaris
/dev/sda4        62916608    83886079    10484736    5  Extended
/dev/sda5        62918656    83886079    10483712   83  Linux

Disk /dev/sdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdd: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sde: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

2. Check if the mdadm package is installed

[root@localhost ~]# rpm -q mdadm 
mdadm-4.0-5.el7.x86_64
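If rpm -q reports that the package is not installed, it can normally be installed from the standard CentOS 7 repositories, assuming the system has a reachable yum source:

# install mdadm if it is missing (requires a configured yum repository)
yum install -y mdadm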

3. Partition the first new disk (partition the other three disks in the same way; see the loop sketch after the fdisk session below)

Change the partition type ID to "fd" (Linux raid autodetect)

[root@localhost ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x7f2f5d10.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-83886079, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-83886079, default 83886079): 
Using default value 83886079
Partition 1 of type Linux and of size 40 GiB is set

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): fd
Changed type of partition 'Linux' to 'Linux raid autodetect'

Command (m for help): p

Disk /dev/sdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7f2f5d10

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    83886079    41942016   fd  Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
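To partition the remaining three disks the same way without stepping through fdisk by hand each time, the same keystrokes can be fed to fdisk from a loop. This is only a convenience sketch and assumes /dev/sdc, /dev/sdd and /dev/sde are empty; the blank lines accept the defaults shown in the interactive session above.

# sketch: repeat the partitioning above on the other three (assumed empty) disks
for disk in /dev/sdc /dev/sdd /dev/sde; do
    printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk "$disk"
done
fdisk -l | grep "Linux raid autodetect"    # confirm all four partitions are type fd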

4. Create a RAID device

[root@localhost ~]# mdadm -Cv /dev/md5 -l5 -n3 /dev/sd[b-d]1 -x1 /dev/sde1
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 41909248K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md5 started.
  • -C: create a new array
  • -v: display detailed information during the creation process
  • /dev/md5: the device name of the RAID 5 array being created
  • -a yes: short for --auto=yes, meaning the device file is created automatically if it does not exist (can be omitted)
  • -l: specify the RAID level; -l5 means create a RAID 5
  • -n: specify how many disks (partitions) are used to build the array; -n3 means use 3 of them
  • /dev/sd[b-d]1: the three partitions (sdb1, sdc1, sdd1) used as active members of the array
  • -x: specify the number of hot spare disks; -x1 means reserve one spare
  • /dev/sde1: the partition designated as the hot spare (a sketch for making the array persistent across reboots follows below)
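Optionally (this step is not part of the original output), the array definition can be recorded so that it is assembled automatically after a reboot; a common approach on CentOS 7 is:

# save the array definition so it is re-assembled at boot
mdadm -D --scan >> /etc/mdadm.conf
cat /etc/mdadm.conf                   # should contain an ARRAY line for /dev/md5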

5. View the progress of creating RAID

  • Here you can see the current completion percentage (37.5%), the estimated time remaining and the rebuild speed

  • When the status reads [UUU], synchronization is complete; you can run this command repeatedly to watch the progress

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      83818496 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=======>.............]  recovery = 37.5% (15727256/41909248) finish=2.1min speed=203812K/sec
      
unused devices: <none>
[root@localhost ~]# 
[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md5 : active raid5 sdd1[4] sde1[3](S) sdc1[1] sdb1[0]
      83818496 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [=============>.......]  recovery = 68.2% (28595200/41909248) finish=1.0min speed=207154K/sec
      
unused devices: <none>
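Instead of re-running the command by hand, the status can also be refreshed automatically (an optional convenience, not in the original steps):

# refresh the rebuild status every second; press Ctrl+C to exit
watch -n 1 cat /proc/mdstat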

  • You can also view the state of the array with the following command
  • Here you can see that three devices are active and the fourth one is a spare
[root@localhost ~]# mdadm -D /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Wed Nov 25 16:24:23 2020
        Raid Level : raid5
        Array Size : 83818496 (79.94 GiB 85.83 GB)
     Used Dev Size : 41909248 (39.97 GiB 42.92 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Nov 25 16:27:54 2020
             State : clean 
    Active Devices : 3
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : e46bf95b:84550d7a:6fd09dc9:66ba9f9f
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1

       3       8       65        -      spare   /dev/sde1
 

6. Realize failure recovery

  • Simulate /dev/sdc1 failure
[root@localhost ~]# mdadm /dev/md5 -f /dev/sdc1 
mdadm: set /dev/sdc1 faulty in /dev/md5
  • Check again: sdc1 is now marked faulty (F) and the array is rebuilding onto the spare
[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid6] [raid5] [raid4] 
md5 : active raid5 sdd1[4] sde1[3] sdc1[1](F) sdb1[0]
      83818496 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [U_U]
      [==>..................]  recovery = 14.3% (6008320/41909248) finish=2.9min speed=200277K/sec
      
unused devices: <none>
[root@localhost ~]# mdadm -D /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Wed Nov 25 16:24:23 2020
        Raid Level : raid5
        Array Size : 83818496 (79.94 GiB 85.83 GB)
     Used Dev Size : 41909248 (39.97 GiB 42.92 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Nov 25 16:34:28 2020
             State : clean, degraded, recovering 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 1

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

    Rebuild Status : 32% complete

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : e46bf95b:84550d7a:6fd09dc9:66ba9f9f
            Events : 25

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       3       8       65        1      spare rebuilding   /dev/sde1
       4       8       49        2      active sync   /dev/sdd1

       1       8       33        -      faulty   /dev/sdc1
  • Checking again after the rebuild finishes shows that sde1 has replaced sdc1
[root@localhost ~]# mdadm -D /dev/md5 
/dev/md5:
           Version : 1.2
     Creation Time : Wed Nov 25 16:24:23 2020
        Raid Level : raid5
        Array Size : 83818496 (79.94 GiB 85.83 GB)
     Used Dev Size : 41909248 (39.97 GiB 42.92 GB)
      Raid Devices : 3
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Wed Nov 25 16:36:55 2020
             State : clean 
    Active Devices : 3
   Working Devices : 3
    Failed Devices : 1
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : resync

              Name : localhost.localdomain:5  (local to host localhost.localdomain)
              UUID : e46bf95b:84550d7a:6fd09dc9:66ba9f9f
            Events : 37

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       3       8       65        1      active sync   /dev/sde1
       4       8       49        2      active sync   /dev/sdd1

       1       8       33        -      faulty   /dev/sdc1
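After the rebuild, the faulty partition is still attached to the array. A common follow-up (not shown in the original output) is to remove it and, once the physical disk has been replaced and re-partitioned, add it back as the new hot spare:

# remove the failed member from the array
mdadm /dev/md5 -r /dev/sdc1
# after replacing/re-partitioning the disk, add it back as a spare
mdadm /dev/md5 -a /dev/sdc1
mdadm -D /dev/md5            # sdc1 should now be listed as "spare"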

7. Create a file system and mount it

Finally, create an /md5 directory under the root directory, format the RAID device, and mount it there.

[root@localhost ~]# mkdir /md5
[root@localhost ~]# mkfs.xfs /dev/md5
[root@localhost ~]# mount /dev/md5 /md5
[root@localhost ~]# cd /md5
[root@localhost md5]# touch md5.txt
[root@localhost md5]# ls
md5.txt

Finally, check with df: the array shows up as 80G rather than 120G. As explained earlier, the utilization of a three-disk RAID 5 is (N-1)/N = 2/3, so three 40 GB members give roughly 80 GB of usable space.

[root@localhost md5]# df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
/dev/sda2      xfs        20G  4.3G   16G   22% /
devtmpfs       devtmpfs  898M     0  898M    0% /dev
tmpfs          tmpfs     912M     0  912M    0% /dev/shm
tmpfs          tmpfs     912M  9.1M  903M    1% /run
tmpfs          tmpfs     912M     0  912M    0% /sys/fs/cgroup
/dev/sda5      xfs        10G   37M   10G    1% /home
/dev/sda1      xfs       6.0G  174M  5.9G    3% /boot
tmpfs          tmpfs     183M  4.0K  183M    1% /run/user/42
tmpfs          tmpfs     183M   28K  183M    1% /run/user/0
/dev/sr0       iso9660   4.3G  4.3G     0  100% /run/media/root/CentOS 7 x86_64
/dev/md5       xfs        80G   33M   80G    1% /md5
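To make the mount survive a reboot, an entry can be added to /etc/fstab. The sketch below uses a placeholder UUID; take the real value from the blkid output on your own system.

# find the file system UUID of the array
blkid /dev/md5
# append an fstab entry (replace <uuid-from-blkid> with the value printed above)
echo 'UUID=<uuid-from-blkid>  /md5  xfs  defaults  0 0' >> /etc/fstab
mount -a                      # verify the new entry mounts without errors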

Origin: blog.csdn.net/weixin_51486343/article/details/110131356