Disk Arrays (RAID)

RAID (Redundant Array of Independent Disks) was originally developed to combine many small, inexpensive disks into the equivalent of one large, expensive disk. At the same time, it is a data-protection technology: depending on the RAID level, data access can continue even after a disk fails.

RAID0

  • At least two disks are required
  • Data is striped across the member disks: high read/write performance and 100% storage utilization
  • There is no redundancy: if any disk fails, the data on the array cannot be recovered
  • Use cases:
    • Scenarios that demand high performance but can tolerate low data safety, such as audio/video processing


RAID1

  • At least two disks are required
  • Each write goes to both a working disk and a mirror disk; high reliability, but disk utilization is only 50%
  • Read performance is better than write performance
  • A single disk failure does not interrupt reads or writes
  • Use cases:
    • Scenarios with high data-safety and reliability requirements: mail systems, trading systems, etc.

RAID5

  • At least three disks are required
  • Data and parity are striped across the disks; good read/write performance; disk utilization is (n-1)/n
  • If one disk fails, the lost data can be rebuilt from the remaining disks and the corresponding parity (at a performance cost)
  • Balances storage capacity, data safety, and cost


RAID6

  • At least four disks are required
  • Data is striped across the disks; good read performance and strong fault tolerance
  • Uses double parity to protect the data
  • Even if two disks fail at the same time, the data can be rebuilt from the two sets of parity
  • Higher cost and more complex to implement
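The soft-RAID walkthrough below covers RAID0, RAID1, and RAID5 but not RAID6. As a hedged sketch (the device names /dev/sdd{1..4} and mount point are assumptions, not from the original), a four-member RAID6 array could be created like this:

```shell
# Sketch only: assumes four spare partitions /dev/sdd1-/dev/sdd4 exist.
# RAID6 keeps two independent parity blocks per stripe, so any two
# members may fail without data loss.
mdadm --create /dev/md6 --level=6 --raid-devices=4 /dev/sdd{1,2,3,4}
mkfs.ext4 /dev/md6                        # format the array
mkdir -p /mnt06 && mount /dev/md6 /mnt06  # mount it
cat /proc/mdstat                          # [UUUU] once the initial sync is done
```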

RAID10

  • A combination of RAID1 and RAID0
  • At least four disks are required
  • Disks are first paired into RAID1 mirrors, and the resulting RAID1 sets are then striped together as RAID0
  • Balances data redundancy and read/write performance

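The creation walkthrough below stops at RAID5. As a hedged sketch (the device names /dev/sde{1..4} and the md numbers are assumptions), the nesting described above could be built with mdadm in either of two ways:

```shell
# Sketch only: assumes four spare partitions /dev/sde1-/dev/sde4.
# Option 1: mdadm's built-in raid10 level does the pairing and striping in one step.
mdadm --create /dev/md10 --level=10 --raid-devices=4 /dev/sde{1,2,3,4}

# Option 2: nest it by hand, mirroring first and striping second,
# exactly as the bullets above describe.
mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sde{1,2}
mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sde{3,4}
mdadm --create /dev/md13 --level=0 --raid-devices=2 /dev/md11 /dev/md12
```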

Software RAID creation

Create RAID0

Prepare a disk and divide it into multiple partitions

yum -y install mdadm
# Create the RAID0 array

[root@workstation ~]# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc[12]
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# View RAID status
[root@workstation ~]# cat /proc/mdstat
Personalities : [raid0]
md0 : active raid0 sdc2[1] sdc1[0]
      4188160 blocks super 1.2 512k chunks

unused devices: <none>

# View details of a specific RAID device

[root@workstation ~]# mdadm -D /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Fri Aug  4 03:32:04 2023
        Raid Level : raid0
        Array Size : 4188160 (3.99 GiB 4.29 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Aug  4 03:32:04 2023
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

        Chunk Size : 512K

Consistency Policy : none

              Name : workstation:0  (local to host workstation)
              UUID : bae959ef:7318753c:91ebd9b8:9a4a7f3b
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       34        1      active sync   /dev/sdc2
# Format and mount the array
[root@workstation ~]# mkfs.ext4 /dev/md0

[root@workstation ~]# mkdir /mnt01
[root@workstation ~]# mount /dev/md0 /mnt01
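One step the walkthrough does not show: without a saved configuration, the array may reassemble under a different name (e.g. /dev/md127) after a reboot. A sketch of persisting it (the config path is the CentOS default; Debian-family systems use /etc/mdadm/mdadm.conf instead):

```shell
# Record the running array so it reassembles under the same name at boot
mdadm --detail --scan >> /etc/mdadm.conf
# Optionally mount it automatically at boot as well
echo '/dev/md0 /mnt01 ext4 defaults 0 0' >> /etc/fstab
```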

Create RAID1

[root@workstation ~]# mdadm -C /dev/md1 -l 1 -n 2 /dev/sdc[12]
# Check the array status
[root@workstation ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc2[1] sdc1[0]
      2094080 blocks super 1.2 [2/2] [UU]

unused devices: <none>

[root@workstation ~]# mkfs.ext4 /dev/md1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 523520 blocks
26176 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# Mount the array
[root@workstation ~]# mkdir /mnt01
[root@workstation ~]# mount /dev/md1 /mnt01
[root@workstation ~]# lsblk
NAME                    MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                       8:0    0   20G  0 disk
├─sda1                    8:1    0    1G  0 part  /boot
└─sda2                    8:2    0   19G  0 part
  ├─centos_servera-root 253:0    0   17G  0 lvm   /
  └─centos_servera-swap 253:1    0    2G  0 lvm   [SWAP]
sdb                       8:16   0   20G  0 disk
└─sdb1                    8:17   0   20G  0 part
sdc                       8:32   0   20G  0 disk
├─sdc1                    8:33   0    2G  0 part
│ └─md1                   9:1    0    2G  0 raid1 /mnt01
└─sdc2                    8:34   0    2G  0 part
  └─md1                   9:1    0    2G  0 raid1 /mnt01
sr0                      11:0    1  918M  0 rom

# Simulate a failed disk

[root@workstation ~]# mdadm /dev/md1 -f /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md1
[root@workstation ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc2[1] sdc1[0](F)
      2094080 blocks super 1.2 [2/1] [_U]

unused devices: <none>
# Remove the failed disk

[root@workstation ~]# mdadm /dev/md1 -r /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md1
[root@workstation ~]# mdadm -D /dev/md1
/dev/md1:
           Version : 1.2
     Creation Time : Fri Aug  4 04:03:58 2023
        Raid Level : raid1
        Array Size : 2094080 (2045.00 MiB 2144.34 MB)
     Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

       Update Time : Fri Aug  4 04:20:06 2023
             State : clean, degraded
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : workstation:1  (local to host workstation)
              UUID : 2d7ed32a:ed703ee1:eaa2a801:4391bfcd
            Events : 20

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       34        1      active sync   /dev/sdc2
# Add a new disk; the array rebuilds onto it automatically
[root@workstation ~]# mdadm /dev/md1 -a /dev/sdc1
mdadm: added /dev/sdc1
[root@workstation ~]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdc1[2] sdc2[1]
      2094080 blocks super 1.2 [2/2] [UU]

unused devices: <none>
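On partitions this small the rebuild finishes almost instantly, which is why [UU] already appears above. On larger disks the resync takes time; a few standard ways to watch it (mdadm/procfs usage, not shown in the original):

```shell
# Refresh /proc/mdstat every second; a progress bar appears during resync
watch -n 1 cat /proc/mdstat
# Or block until the resync has finished
mdadm --wait /dev/md1
# Confirm both members are back in "active sync"
mdadm -D /dev/md1 | grep -E 'State :|active sync'
```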

Create RAID5

Prepare four disk partitions

[root@workstation ~]# lsblk | grep sdc
sdc                       8:32   0   20G  0 disk
├─sdc1                    8:33   0    1G  0 part
├─sdc2                    8:34   0    1G  0 part
├─sdc3                    8:35   0    1G  0 part
└─sdc4                    8:36   0    1G  0 part



# Create RAID5 with three active disks and one hot spare (-x 1)
[root@workstation ~]# mdadm -C /dev/md5 -l 5 -n 3 -x 1 /dev/sdc{1,2,3,4}
# Format and mount
[root@workstation ~]# mkfs.ext4 /dev/md5
[root@workstation ~]# mkdir /usr-mnt
[root@workstation ~]# mount /dev/md5 /usr-mnt
# Mark a disk as failed
[root@workstation ~]# mdadm /dev/md5 -f /dev/sdc4
mdadm: set /dev/sdc4 faulty in /dev/md5
[root@workstation ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md5 : active raid5 sdc3[4] sdc4[3](F) sdc2[1] sdc1[0]
      2093056 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
# RAID5 is still usable

[root@workstation ~]# echo "hello" > /usr-mnt/file
[root@workstation ~]# ls /usr-mnt/file
/usr-mnt/file
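For completeness, a hedged sketch of tearing an array down again (destructive; the device names match the RAID5 example above):

```shell
# Unmount, stop the array, and wipe the RAID metadata from each member.
# --zero-superblock is irreversible: the partitions become plain disks again.
umount /usr-mnt
mdadm --stop /dev/md5
mdadm --zero-superblock /dev/sdc{1,2,3,4}
```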

Origin blog.csdn.net/weixin_51882166/article/details/132108097