Virtual machine disk arrays (RAID 0, RAID 1, RAID 5, RAID 10): a build demonstration of deploying disk array technology with RAID and LVM!

**Redundant Array of Independent Disks (RAID)**

RAID technology combines multiple hard disk devices into a disk array with larger capacity and better security. Data is cut into segments and stored on different physical hard disks, and striped (scattered) reads and writes are used to improve the overall performance of the array. At the same time, multiple copies of important data can be kept synchronized on different physical hard disks, which provides very effective data redundancy and backup.
Everything has two sides. RAID technology does provide very good data redundancy, but it also increases cost accordingly. It is like owning only one phone book but, to avoid losing it, copying all the contact information into a second one: you naturally have to buy an extra phone book, which raises the cost. The original intention of RAID technology was to reduce the cost of purchasing hard disk devices, but compared with the value of the data itself, modern enterprises care more about RAID's redundant backup mechanism and the increase in hard disk throughput it brings. In other words, RAID not only reduces the probability of data loss after a hard disk fails, but also improves the read and write speed of the hard disks, so it is widely deployed in most operators and in large and medium-sized enterprises.
For cost and technical reasons, different needs call for trade-offs between data reliability and read/write performance, and different solutions have been developed to meet them. There are currently at least a dozen RAID disk array schemes; Mr. Liu Dun explains in detail the four most common ones: RAID 0, RAID 1, RAID 5 and RAID 10.


1. RAID 0

RAID 0 technology combines multiple physical hard disk devices (at least two), through hardware or software, into one large volume group and writes data to each physical hard disk in turn (striping). In the most ideal case this multiplies the read and write performance of the disks, but if any one hard disk fails, the data of the entire system is destroyed. In short, RAID 0 effectively improves disk throughput, but it has no data backup or error-repair capability.
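As a minimal sketch (the device names /dev/sdb and /dev/sdc here are hypothetical and not part of the walkthrough below), a two-disk RAID 0 array could be created with mdadm like this:

mdadm -Cv /dev/md0 -a yes -n 2 -l 0 /dev/sdb /dev/sdc    # -n 2: two member disks, -l 0: RAID 0 (striping)
mkfs.ext4 /dev/md0                                       # format the new array before mounting it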

2. RAID 1

RAID 1 technology binds two or more hard disk devices together. When data is written, it is written to all of the hard disks at the same time (they can be regarded as mirrors or backups of each other). When one of the hard disks fails, the data remains available, and normal operation is generally restored immediately by hot-swapping in a replacement disk.
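Following the same pattern (again with hypothetical device names), a two-disk mirror could be created as:

mdadm -Cv /dev/md1 -a yes -n 2 -l 1 /dev/sdb /dev/sdc    # -l 1: RAID 1 (mirroring); both disks hold the same data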


3. RAID 5

RAID 5 technology stores parity information for the data across the hard disks of the array. The parity information of a RAID 5 disk array group is not kept on one dedicated hard disk; instead, each disk stores parity for data held on the other disks. The advantage is that the failure of any single device is not a fatal flaw: the parity blocks hold the parity information of the data, so RAID 5 does not actually keep a full backup of the real data, but uses the parity information to try to reconstruct the damaged data when a hard disk device fails. RAID 5 is a "compromise" that balances the read/write speed of the hard disks, data security and storage cost.
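One practical consequence of storing parity is that roughly one disk's worth of capacity is consumed by it. A rough check, using the member disk size from the walkthrough below (Used Dev Size is about 19.98 GiB per disk):

# usable RAID 5 capacity ≈ (number of disks - 1) × size of one disk
# with three ~19.98 GiB disks: (3 - 1) × 19.98 GiB ≈ 39.97 GiB, which matches the Array Size reported later by mdadm -D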
4. RAID 10

RAID 10 technology is a "combination" of RAID 1 and RAID 0. As shown in Figure 7-4, RAID 10 requires at least 4 hard disks: first, two separate RAID 1 disk arrays are created to ensure data security, and then RAID 0 is applied on top of the two RAID 1 arrays to further improve the read and write speed of the hard disk devices. In theory, as long as the disks in the same mirror group are not all damaged, up to 50% of the hard disks can fail without losing data. Because RAID 10 inherits the high read/write speed of RAID 0 and the data security of RAID 1, its performance exceeds that of RAID 5 whenever cost is not the deciding factor, so it is currently a widely used storage technology.
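The walkthrough below uses mdadm's built-in RAID 10 level directly. Purely as an illustration of the "two mirrors striped together" structure described above (device names hypothetical), the nested equivalent could be built like this:

mdadm -Cv /dev/md1 -a yes -n 2 -l 1 /dev/sdc /dev/sdd    # first mirror pair (RAID 1)
mdadm -Cv /dev/md2 -a yes -n 2 -l 1 /dev/sde /dev/sdf    # second mirror pair (RAID 1)
mdadm -Cv /dev/md10 -a yes -n 2 -l 0 /dev/md1 /dev/md2   # stripe (RAID 0) across the two RAID 1 arrays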

| Parameter | Function |
| --- | --- |
| -a | Check the device name |
| -n | Specify the number of devices |
| -l | Specify the RAID level |
| -C | Create an array |
| -v | Show the process |
| -f | Simulate device damage |
| -r | Remove a device |
| -Q | View summary information |
| -D | View detailed information |
| -S | Stop the RAID disk array |

mdadm   -C  create an array
-v  show the creation process
-l  specify the RAID level: 0, 1, 5 or 10
-D  view detailed information (can be queried after the array is created)
-f  simulate device damage
-a  check the device name (-a yes automatically creates the device file)
-n  number of member disks

Remove a device that is in use:  mdadm /dev/md1 --fail /dev/sdc --remove /dev/sdc
mdadm --stop /dev/md1 (or the device file it is bound to)      # stop the disk array
mdadm --remove /dev/md1 (or the device file it is bound to)    # remove the disk array
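If the member disks will be reused in a new array (as in the RAID 10 to RAID 5 switch below), their old md metadata can optionally be wiped first. This is an extra step not used in the walkthrough, where the leftover superblocks simply trigger an "appears to be part of a raid array" warning:

mdadm --zero-superblock /dev/sdc /dev/sdd /dev/sde /dev/sdf    # clear old RAID metadata from the former member disks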

Virtual machine experiment: simulate a disk array, build RAID 10, then switch it to RAID 5

1. First add 4 disks in the virtual machine settings.
2. Query the disks and use the mdadm command to create the disk array.

[root@lizhiqiang Desktop]# mdadm -Cv /dev/md/zhuxing -a yes -n 4 -l 10 /dev/sd[c-f]
mdadm: layout defaults to n2
mdadm: layout defaults to n2
mdadm: chunk size defaults to 512K
mdadm: size set to 20954624K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/zhuxing started.
Here /dev/md/zhuxing can be replaced with /dev/md0, /dev/md1, and so on, but the name must follow the /dev/md format; /dev/sd[c-f] can also be written out as /dev/sdc /dev/sdd /dev/sde /dev/sdf.

3. Format the array and create a folder in the home directory to mount it on

[root@lizhiqiang Desktop]# mkfs.ext4 /dev/md/zhuxing
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477312 blocks
523865 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@lizhiqiang Desktop]# cd ~
[root@lizhiqiang ~]# mkdir zhu
[root@lizhiqiang ~]# mount /dev/md/zhuxing /zhu
mount: mount point /zhu does not exist
[root@lizhiqiang ~]# mount /dev/md/zhuxing /root/zhu
Note: writing just /zhu for the folder in the home directory does not work for mounting; the absolute path must be used. The mount now succeeds!

Add the array to /etc/fstab so it is mounted at startup, and use the -D option to view the array's detailed information

[root@lizhiqiang ~]# echo "/dev/md/zhuxing /zhu ext4 defaults 0 0" >> /etc/fstab
[root@lizhiqiang ~]# mdadm -D /dev/md/zhuxing
/dev/md/zhuxing:
        Version : 1.2
  Creation Time : Tue Oct 20 06:43:30 2020
     Raid Level : raid10
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Oct 20 06:52:37 2020
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

         Layout : near=2
     Chunk Size : 512K

           Name : lizhiqiang:zhuxing  (local to host lizhiqiang)
           UUID : 0a64eebf:9c26768e:88803e37:5ca70cdf
         Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       2       8       64        2      active sync   /dev/sde
       3       8       80        3      active sync   /dev/sdf

Switching from RAID 10 to RAID 5

First unmount the filesystem with the umount command, then stop the RAID 10 array so that its member disks are released and not wasted. After the array has been stopped successfully, the disks can be used to build and mount RAID 5.

[root@lizhiqiang Desktop]# umount /dev/md/zhuxing
[root@lizhiqiang Desktop]# mdadm --stop /zhu
mdadm: error opening /zhu: Is a directory
[root@lizhiqiang Desktop]# mdadm --stop /dev/md/zhuxing
mdadm: stopped /dev/md/zhuxing
[root@lizhiqiang Desktop]# mdadm -D /dev/md/zhuxing
mdadm: cannot open /dev/md/zhuxing: No such file or directory

To create the RAID 5 array, use the mdadm command again and then format the result. This time a backup (hot-spare) disk is added with the -x option.
mdadm will warn that the partitions appear to belong to a previous array; answer y to continue and the creation succeeds.

[root@lizhiqiang Desktop]# mdadm -Cv /dev/md/zhuxing -a yes -n 3 -l 5 -x 1 /dev/sd[c-f]
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: /dev/sdc appears to be part of a raid array:
    level=raid10 devices=4 ctime=Tue Oct 20 06:43:30 2020
mdadm: /dev/sdd appears to be part of a raid array:
    level=raid10 devices=4 ctime=Tue Oct 20 06:43:30 2020
mdadm: /dev/sde appears to be part of a raid array:
    level=raid10 devices=4 ctime=Tue Oct 20 06:43:30 2020
mdadm: /dev/sdf appears to be part of a raid array:
    level=raid10 devices=4 ctime=Tue Oct 20 06:43:30 2020
mdadm: size set to 20954624K
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/zhuxing started.
[root@lizhiqiang Desktop]# mdadm -D /dev/md/zhuxing
/dev/md/zhuxing:
        Version : 1.2
  Creation Time : Tue Oct 20 07:17:32 2020
     Raid Level : raid5
     Array Size : 41909248 (39.97 GiB 42.92 GB)
  Used Dev Size : 20954624 (19.98 GiB 21.46 GB)
   Raid Devices : 3
  Total Devices : 4
    Persistence : Superblock is persistent

    Update Time : Tue Oct 20 07:19:18 2020
          State : clean 
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 512K

           Name : lizhiqiang:zhuxing  (local to host lizhiqiang)
           UUID : 19cec61b:0d1c4f49:972ba0ec:fad30b55
         Events : 32

    Number   Major   Minor   RaidDevice State
       0       8       32        0      active sync   /dev/sdc
       1       8       48        1      active sync   /dev/sdd
       4       8       64        2      active sync   /dev/sde

       3       8       80        -      spare   /dev/sdf

Format the new array, mount it, and add it to the startup items (/etc/fstab); RAID 5 is now deployed successfully.

[root@lizhiqiang Desktop]# mkfs.ext4 /dev/md/zhuxing
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
2621440 inodes, 10477312 blocks
523865 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2157969408
320 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000, 7962624

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done   

[root@lizhiqiang Desktop]# mount /dev/md/zhuxing /zhu
[root@lizhiqiang Desktop]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/rhel_lizhiqiang-root   18G  3.5G   15G  20% /
devtmpfs                          985M     0  985M   0% /dev
tmpfs                             994M  140K  994M   1% /dev/shm
tmpfs                             994M  8.9M  986M   1% /run
tmpfs                             994M     0  994M   0% /sys/fs/cgroup
/dev/sdb1                         2.0G   33M  2.0G   2% /opo
/dev/sda1                         497M  125M  373M  26% /boot
/dev/md127                         40G   49M   38G   1% /zhu
[root@lizhiqiang Desktop]# echo "/dev/md/zhuxing /zhu ext4 defaults 0 0" >> /etc/fstab
[root@lizhiqiang Desktop]# df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/rhel_lizhiqiang-root   18G  3.5G   15G  20% /
devtmpfs                          985M     0  985M   0% /dev
tmpfs                             994M  140K  994M   1% /dev/shm
tmpfs                             994M  8.9M  986M   1% /run
tmpfs                             994M     0  994M   0% /sys/fs/cgroup
/dev/sdb1                         2.0G   33M  2.0G   2% /opo
/dev/sda1                         497M  125M  373M  26% /boot
/dev/md127                         40G   49M   38G   1% /zhu

Successful operation!

**Damaged disk array and repair**

The purpose of deploying a RAID 10 disk array group in a production environment is to improve the I/O read/write speed of the storage devices and the security of the data. Because this experiment uses hard disks simulated on the local computer, the improvement in read/write speed may not be very noticeable, so Mr. Liu Dun explains how to handle damage to a RAID disk array group, so that students who later take an operations and maintenance position will not panic in an emergency. First confirm that a physical hard disk device is damaged and can no longer be used normally, then use the mdadm command to remove it and check that the status of the RAID disk array group has changed:

Remove a hard drive from the array to simulate a hard drive failure.

mdadm /dev/md0 -f /dev/sdb    # mark /dev/sdb as faulty, taking it out of service in the array /dev/md0
mdadm -D /dev/md0             # view the details of /dev/md0; the state of /dev/sdb changes from active to faulty
umount /RAID                  # reboot the system first, then unmount the /RAID directory
mdadm /dev/md0 -a /dev/sdb    # add the new hard disk into the RAID array
mdadm -D /dev/md0             # view the details of /dev/md0 again; /dev/sdb shows "spare rebuilding", then returns to active
mount -a                      # remount everything in /etc/fstab
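While the array is resynchronizing, the rebuild progress can be watched with a generic check (not part of the original steps):

cat /proc/mdstat              # shows the recovery progress for /dev/md0
watch -n 1 cat /proc/mdstat   # refresh the view every second until the rebuild finishes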

Disk array + backup disk

A RAID 5 disk array requires at least 3 hard disks; here one additional hard disk is used as a backup (hot-spare) disk.
Restore the virtual machine and deploy RAID 5 plus 1 backup disk.

mdadm -Cv /dev/md0 -n 3 -l 5 -x 1 /dev/sd[b-e]           # create a RAID 5 array from 3 hard disks and use 1 more as a backup (hot-spare) disk
mdadm -D /dev/md0                                        # view the array details: 3 disks show as active, 1 as spare, and the RAID level is RAID 5
mkfs.ext4 /dev/md0
echo "/dev/md0 /RAID ext4 defaults 0 0" >> /etc/fstab    # append the mount entry to /etc/fstab to make the mount permanent
mkdir /RAID
mount -a
mdadm /dev/md0 -f /dev/sdb                               # deliberately fail one of the active disks in the RAID 5 array
mdadm -D /dev/md0                                        # view the details of /dev/md0 again: the backup disk is automatically promoted and starts synchronizing data (spare rebuilding)
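Once the spare has finished rebuilding, the failed disk can be removed from the array and, if desired, a replacement added as the new spare. This is a follow-up step not in the original list, and the replacement device name /dev/sdf is hypothetical:

mdadm /dev/md0 -r /dev/sdb    # remove the faulty /dev/sdb from the array
mdadm /dev/md0 -a /dev/sdf    # add a replacement disk, which becomes the new hot-spare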


Origin blog.csdn.net/SYH885/article/details/109186389