Linux server hardware and RAID configuration (detailed diagram)

1. RAID disk array

1.1 Commonly used RAID levels

  • RAID is short for Redundant Array of Independent Disks
  • Multiple independent physical hard disks are combined in different ways into a hard disk group (a logical disk), providing higher storage performance than a single disk as well as data redundancy for backup
  • The different ways of organizing a disk array are called RAID levels
  • Common RAID levels: RAID 0, RAID 1, RAID 5, RAID 6, RAID 1+0, etc.

1.2 RAID 0 (striped storage)


  • RAID 0 splits data into stripes at the bit or byte level and reads/writes them across multiple disks in parallel, so it offers a high data transfer rate, but it provides no data redundancy
  • RAID 0 only improves performance; it gives no guarantee of data reliability, and the failure of any one disk loses all data
  • RAID 0 is unsuitable for scenarios with high data-security requirements

1.3 RAID 1 (mirrored storage)


  • Data redundancy is achieved through disk mirroring: a pair of independent disks holds mutually backed-up copies of the data
  • When the original disk is busy, data can be read directly from the mirror copy, so RAID 1 can improve read performance
  • RAID 1 has the highest cost per unit of capacity of any RAID level, but it provides high data security and availability: when a disk fails, the system automatically switches to the mirror disk for reads and writes, with no need to rebuild the failed data

1.4 RAID 5


  • N (N≥3) disks form the array. Each stripe consists of N-1 data blocks plus one parity block, and these N blocks are distributed cyclically and evenly across the N disks
  • All N disks read and write simultaneously, so read performance is high, but the parity calculation makes write performance relatively low
  • Disk utilization is (N-1)/N
  • High reliability: one disk may fail without any data loss
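
RAID 5's parity is a bitwise XOR of the data blocks in a stripe, which is why any single lost block can be rebuilt from the survivors. A minimal sketch in shell arithmetic (the byte values here are made up purely for illustration):

```shell
# Two hypothetical data bytes from one stripe (values chosen arbitrarily)
d1=$((0x5A))
d2=$((0x3C))

# The parity block is the XOR of all data blocks in the stripe
parity=$((d1 ^ d2))

# If the disk holding d1 fails, XOR-ing the parity with the surviving
# data block recovers it: parity ^ d2 == d1
rebuilt=$((parity ^ d2))
echo "$rebuilt"    # prints 90 (= 0x5A)
```

The same identity generalizes to N-1 data blocks: XOR-ing the parity with all surviving blocks reproduces the missing one.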

1.5 RAID 6


  • N (N≥4) disks form the array; disk utilization is (N-2)/N
  • Compared with RAID 5, RAID 6 adds a second, independent block of parity information per stripe
  • The two independent parity blocks use different algorithms, so data remains usable even if two disks fail at the same time
  • The "write penalty" is larger than RAID 5's, so write performance is worse

1.6 RAID 1+0 (mirror first, then stripe)


  • N disks (N even, N≥4): the disks are first mirrored in pairs, and the mirror pairs are then combined into a RAID 0
  • Disk utilization is N/2
  • N/2 disks write simultaneously; all N disks can read simultaneously
  • High performance and high reliability

1.7 RAID 0+1 (stripe first, then mirror)


  • Read and write performance is the same as RAID 10
  • Security is lower than RAID 10: once one disk fails, its entire stripe set is lost, leaving no redundancy on that side of the mirror
| RAID level | Number of disks | Disk utilization | Parity? | Fault tolerance | Read/write performance |
|---|---|---|---|---|---|
| RAID 0 | N | N | no | none | N times that of a single disk |
| RAID 1 | N (even) | N/2 | no | one disk may fail | every write goes to both disks of a mirror pair |
| RAID 5 | N≥3 | (N-1)/N | yes | one disk may fail | writes must also compute parity |
| RAID 6 | N≥4 | (N-2)/N | yes | two disks may fail | writes must compute double parity |
| RAID 10 | N≥4 (even) | N/2 | no | one disk per mirror pair may fail | N/2 disks write simultaneously |
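
The utilization column above can be turned into a quick capacity calculator. A small sketch (the `usable` function name and argument order are my own):

```shell
# usable LEVEL NUM_DISKS DISK_SIZE_GB  ->  usable capacity in GB
usable() {
    case "$1" in
        0)    echo $(( $2 * $3 )) ;;        # RAID 0: all capacity usable
        1|10) echo $(( $2 / 2 * $3 )) ;;    # RAID 1 / RAID 10: half is mirror copies
        5)    echo $(( ($2 - 1) * $3 )) ;;  # RAID 5: one disk's worth of parity
        6)    echo $(( ($2 - 2) * $3 )) ;;  # RAID 6: two disks' worth of parity
    esac
}

usable 5 4 500    # 4 x 500 GB in RAID 5 -> prints 1500
```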

2. Create a soft RAID disk array

2.1 Check for the mdadm package

rpm -q mdadm           # check whether mdadm is installed
yum install -y mdadm   # install it if missing


2.2 fdisk tool

Use the fdisk tool to create a primary partition (sdb1, sdc1, sdd1, sde1) on each of the new disk devices /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde, and change the partition type ID to "fd" (Linux raid autodetect).

First create 4 new hard disks for testing.
Use the fdisk tool to create the primary partitions and change the partition type ID to "fd":

fdisk /dev/sdb
fdisk /dev/sdc
fdisk /dev/sdd
fdisk /dev/sde
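
Repeating the same interactive fdisk dialog four times is error-prone. If parted is available, the same partitioning can be scripted non-interactively; a sketch for this lab setup (the device names are assumptions, and this wipes the disks listed):

```shell
# WARNING: destroys the existing partition table of every disk listed
for disk in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    parted -s "$disk" mklabel msdos            # new MBR partition table
    parted -s "$disk" mkpart primary 0% 100%   # one primary partition spanning the disk
    parted -s "$disk" set 1 raid on            # mark partition 1 as type fd (RAID)
done
```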


2.3 Create RAID Device

Create a RAID 5 array:

mdadm -C -v /dev/md0 [-a yes] -l5 -n3 /dev/sd[bcd]1 -x1 /dev/sde1

-C: create a new array
-v: show detailed output during creation
/dev/md0: name of the RAID 5 device to create
-a yes: short for --auto; create the device file automatically if it does not exist (may be omitted)
-l: RAID level; -l5 means create RAID 5
-n: number of active disks; -n3 means build the array from 3 disks
/dev/sd[bcd]1: the three disk partitions used to build the array
-x: number of hot spare disks; -x1 reserves 1 idle disk as a spare
/dev/sde1: the partition used as the hot spare


2.4 Create RAID10 (mirror first, then stripe)

mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[bc]1
mdadm -Cv /dev/md1 -l1 -n2 /dev/sd[de]1
mdadm -Cv /dev/md10 -l0 -n2 /dev/md0 /dev/md1
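
mdadm can also build this nested level directly with -l10, which avoids managing the two intermediate mirror devices by hand (mdadm's default "near" layout gives the equivalent mirror-then-stripe arrangement):

```shell
# Same four partitions, created as RAID 10 in a single step
mdadm -Cv /dev/md10 -l10 -n4 /dev/sd[b-e]1
```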

2.5 View RAID Disk Details

cat /proc/mdstat		           # view RAID status and the progress of array creation
or
mdadm -D /dev/md0                  # show detailed information about the array

watch -n 10 'cat /proc/mdstat'     # use watch to refresh the /proc/mdstat output every 10 seconds

mdadm -E /dev/sd[b-e]1             # check whether the partitions are already part of a RAID


2.6 Create and mount a file system

mkfs -t xfs /dev/md0
mkdir /myraid
mount /dev/md0 /myraid/
df -Th
cp /etc/fstab /etc/fstab.bak
vim /etc/fstab
/dev/md0      /myraid        xfs   	 defaults   0  0


2.7 Simulate a failure and recover

mdadm /dev/md0 -f /dev/sdb1 		# simulate a failure of /dev/sdb1
mdadm -D /dev/md0					# verify that the hot spare sde1 has taken over from sdb1

Other commonly used mdadm options:
-r: remove a device
-a: add a device
-S: stop a RAID array
-A: assemble (start) a RAID array
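
With those options, the failed disk can be swapped out once the spare has finished rebuilding. A sketch of the sequence (it assumes the physical disk behind sdb1 has been replaced):

```shell
mdadm /dev/md0 -r /dev/sdb1    # remove the failed partition from the array
mdadm /dev/md0 -a /dev/sdb1    # add the replacement; it becomes the new hot spare
mdadm -D /dev/md0              # confirm the array state and the new spare
```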

2.8 Create the /etc/mdadm.conf configuration file to simplify managing the software RAID, such as starting and stopping it

echo 'DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
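
With /etc/mdadm.conf in place, the array can be stopped and reassembled by name (unmount first, since a mounted array cannot be stopped):

```shell
umount /myraid           # the array must not be in use
mdadm -S /dev/md0        # stop the RAID; its definition is saved in /etc/mdadm.conf
mdadm -A /dev/md0        # reassemble it later from the saved configuration
mount /dev/md0 /myraid   # remount the file system
```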


Origin blog.csdn.net/zhangyuebk/article/details/113667045