RAID disk array introduction
RAID is the abbreviation of Redundant Array of Independent Disks.
Multiple independent physical hard disks are combined in different ways into one hard disk group (a logical hard disk), providing higher storage performance and data redundancy than a single disk.
The different ways of organizing the disks in the array are called RAID levels.
Commonly used RAID levels
RAID0, RAID1, RAID5, RAID6, RAID1+0, etc.
Introduction to RAID 0 Disk Array
RAID 0 (striped storage)
RAID 0 splits data continuously in units of bits or bytes and reads/writes it across multiple disks in parallel, so it offers a high data transfer rate, but it has no data redundancy.
RAID 0 simply improves performance; it provides no protection for data reliability, and the failure of any one disk affects all of the data.
RAID 0 therefore cannot be used where data security is required.
Introduction to RAID 1 Disk Array
RAID 1 (mirrored storage)
RAID 1 achieves data redundancy through disk mirroring, producing mutually backed-up copies of the data on a pair of independent disks.
When the original disk is busy, data can be read directly from the mirror copy, so RAID 1 can improve read performance.
RAID 1 has the highest cost per unit of storage among the RAID levels, but it provides high data security and availability. When a disk fails, the system automatically switches to the mirror disk for reads and writes, without needing to rebuild the failed data.
Introduction to RAID 5 Disk Arrays
RAID 5
N (N>=3) disks form the array. Each piece of data is split into N-1 stripes plus one copy of parity data, and these N pieces are distributed cyclically and evenly across the N disks.
All N disks are read from and written to simultaneously, so read performance is very high, but because of the parity mechanism, write performance is relatively low.
Disk utilization is (N-1)/N.
Reliability is high: one disk may fail without affecting any of the data.
Introduction to RAID 6 Disk Arrays
RAID 6
N (N>=4) disks form the array; disk utilization is (N-2)/N.
Compared with RAID 5, RAID 6 adds a second, independent block of parity information.
The two independent parity systems use different algorithms, so the data remains usable even if two disks fail at the same time.
Compared with RAID 5, it has a greater "write penalty", so its write performance is worse.
RAID 1+0 disk array introduction
RAID 1+0 (mirroring first, striping)
N (even number, N>=4) After two disks are mirrored in pairs, they are combined into a RAID 0
N/2 disk utilization
N /2 disks are written at the same time, N disks
have high read performance at the same time, and high reliability.
RAID 0+1 (stripe first, mirroring) The
read and write performance is the same as RAID 10 and the
security is lower than RAID 10.
RAID Level | Number of Disks | Disk Utilization | Parity | Protection Capability                   | Write Performance
RAID0      | N               | N                | None   | None                                    | N times a single disk
RAID1      | N (even)        | N/2              | None   | One device may fail                     | Data must be written to both devices of each mirrored pair
RAID5      | N>=3            | (N-1)/N          | Yes    | One device may fail                     | Parity must be computed on each write
RAID6      | N>=4            | (N-2)/N          | Yes    | Two devices may fail                    | Double parity must be computed on each write
RAID10     | N>=4 (even)     | N/2              | None   | One disk may fail in each mirrored pair | N/2 disks are written simultaneously
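The utilization figures above translate directly into usable capacity. As a minimal sketch (a hypothetical helper, not part of the original notes), the formulas can be checked for N equal-sized disks:

```shell
#!/bin/sh
# usable_capacity LEVEL N SIZE_GB -> usable capacity in GB.
# Formulas follow the utilization table: RAID0 uses all N disks,
# RAID1/RAID10 use N/2, RAID5 uses N-1, RAID6 uses N-2.
usable_capacity() {
    level=$1; n=$2; size=$3
    case $level in
        raid0)        echo $(( n * size )) ;;
        raid1|raid10) echo $(( n / 2 * size )) ;;
        raid5)        echo $(( (n - 1) * size )) ;;
        raid6)        echo $(( (n - 2) * size )) ;;
        *)            echo "unknown level" >&2; return 1 ;;
    esac
}

usable_capacity raid5 4 1000   # 4 x 1000 GB disks in RAID5 -> 3000 GB usable
```

For example, the same four 1000 GB disks yield 4000 GB in RAID0 but only 2000 GB in RAID6 or RAID10.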
Array card introduction
An array card is a board used to implement the RAID function.
It usually consists of components such as an I/O processor, a hard disk controller, hard disk connectors, and a cache.
Different RAID cards support different RAID functions,
for example RAID0, RAID1, RAID5, RAID10, etc.
RAID card interface types
IDE, SCSI, SATA, and SAS interfaces
The cache of the array card
The cache is where the RAID card exchanges data with the external bus: the RAID card first transfers data into the cache, and the cache then exchanges it with the external data bus.
The size and speed of the cache are important factors that directly determine the RAID card's actual transfer speed.
Different RAID cards are fitted with different amounts of cache memory at the factory, generally ranging from several megabytes to hundreds of megabytes.
Steps to create a software RAID disk array
1. Check whether the mdadm package is installed
rpm -q mdadm
or
rpm -qa | grep "mdadm"
It is installed by default on most systems; if not, install it with yum:
yum install -y mdadm
2. Use the fdisk tool to divide the new disk device /dev/sdb, /dev/sdc, /dev/sdd, /dev/sde into primary partitions sdb1, sdc1, sdd1, sde1, and change the ID mark number of the partition type to "fd"
fdisk /dev/sdb
(repeat for /dev/sdc, /dev/sdd, and /dev/sde)
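fdisk's interactive prompts can be scripted by feeding it the answers on stdin. A hedged sketch follows: the keystroke sequence (new primary partition 1, default start/end sectors, type fd, write) is an assumption about your fdisk version's prompts, and running it destroys existing partition tables, so verify interactively first. The helper below only builds the sequence; the loop that would apply it is left commented out.

```shell
#!/bin/sh
# Emit the fdisk keystroke sequence: n (new), p (primary), 1 (partition 1),
# two empty lines (default first/last sector), t (change type),
# fd (Linux raid autodetect), w (write and quit).
fdisk_script() {
    printf 'n\np\n1\n\n\nt\nfd\nw\n'
}

# Would be applied to each member disk, e.g. (DESTRUCTIVE, do not run blindly):
# for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
#     fdisk_script | fdisk "$dev"
# done
fdisk_script
```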
3. Create a RAID device
mdadm -E /dev/sd[b-e]1    # check whether the disks are already part of a RAID
mdadm -C -v /dev/md0 -l5 -n3 /dev/sd[bcd]1 -x1 /dev/sde1    # create a RAID named md0
-l5: the RAID level is RAID5
-n3: use three disks
-x1: one spare disk
mdadm -D /dev/md0    # view detailed information about the array
cat /proc/mdstat    # view RAID creation progress and disk details
-C: create a new array
-v: show detailed information during creation
/dev/md0: the name of the RAID5 device to create
-a yes: same as --auto; automatically create any device files that do not yet exist (can be omitted)
-l: specify the RAID level; l5 means create RAID5
-n: specify how many disks are used to create the RAID; n3 means 3 disks
/dev/sd[bcd]1: the 3 disk partitions used to create the RAID
-x: specify how many disks serve as hot spares; x1 means keep one idle disk as a spare
/dev/sde1: the partition used as the spare
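After creation, `mdadm -D` should report every member in the "active sync" state. A small sketch (a hypothetical helper, run here against canned sample output so it works without root or real disks) that counts the synced members:

```shell
#!/bin/sh
# Count member devices reported as "active sync" in mdadm -D style output.
count_active_sync() {
    grep -c 'active sync'
}

# Sample lines in the shape of the device table that mdadm -D prints.
sample='    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1
       4       8       49        2      active sync   /dev/sdd1'

echo "$sample" | count_active_sync   # -> 3
```

In real use you would pipe the live output instead: `mdadm -D /dev/md0 | count_active_sync`.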
# Create RAID10 (mirror first, then stripe)
mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[bc]1
mdadm -Cv /dev/md1 -l1 -n2 /dev/sd[de]1
mdadm -Cv /dev/md10 -l0 -n2 /dev/md0 /dev/md1
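mdadm also has a native raid10 level that builds the same mirror-then-stripe layout in a single step, without the nested md0/md1 devices. A sketch (the command is only echoed here, since actually running it would overwrite real disks):

```shell
#!/bin/sh
# mdadm's built-in raid10 level stripes over mirrored pairs directly;
# one command can replace the three nested commands above.
cmd="mdadm -Cv /dev/md10 -l10 -n4 /dev/sd[b-e]1"
echo "$cmd"   # printed rather than executed
```

The nested (md0 + md1 + md10) form shown above still works and makes the two layers explicit, which is useful for learning how RAID 1+0 is composed.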
# View RAID disk details (this also shows the progress of RAID creation)
cat /proc/mdstat
or
mdadm -D /dev/md0
# Use the watch command to refresh the output of /proc/mdstat at regular intervals
watch -n 10 'cat /proc/mdstat'
# Check whether the disks are already part of a RAID
mdadm -E /dev/sd[b-e]1
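While an array is building or rebuilding, /proc/mdstat contains a progress line such as `[==>...] recovery = 12.6%`. A sketch (a hypothetical helper, demonstrated on a canned sample so it runs without a real array) that extracts just the percentage:

```shell
#!/bin/sh
# Pull the first percentage figure out of mdstat-style text.
mdstat_progress() {
    grep -o '[0-9.]*%' | head -n 1
}

# Sample in the shape of a /proc/mdstat entry during recovery.
sample='md0 : active raid5 sdd1[4] sdc1[1] sdb1[0]
      [==>..................]  recovery = 12.6% (2637828/20954112) finish=1.2min'

echo "$sample" | mdstat_progress   # -> 12.6%
```

In real use: `mdstat_progress < /proc/mdstat`, possibly in a loop as an alternative to watch.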
4. Create and mount the file system
mkfs -t xfs /dev/md0
mkdir /myraid
mount /dev/md0 /myraid/
df -Th
cp /etc/fstab /etc/fstab.bak
vim /etc/fstab
/dev/md0 /myraid xfs defaults 0 0
(for the RAID10 example, use /dev/md10 in place of /dev/md0 throughout)
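md device names can change across reboots, so mounting by filesystem UUID in /etc/fstab is more robust than a /dev/md0 path. A sketch (the helper and the UUID shown are placeholders; read the real value with `blkid`):

```shell
#!/bin/sh
# Format an fstab entry that mounts the array's filesystem by UUID.
make_fstab_entry() {
    # $1 = filesystem UUID, $2 = mount point
    printf 'UUID=%s %s xfs defaults 0 0\n' "$1" "$2"
}

# In real use the UUID comes from the array's filesystem:
#   uuid=$(blkid -s UUID -o value /dev/md0)
make_fstab_entry "0f53c44e-1d2a-4d3b-9c8e-abcdef012345" /myraid
```

The emitted line replaces the `/dev/md0 /myraid xfs defaults 0 0` entry shown above.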
5. Realize fault recovery
mdadm /dev/md0 -f /dev/sdb1    # simulate a /dev/sdb1 failure
mdadm -D /dev/md0    # confirm that the spare sde1 has replaced sdb1
6. Create the /etc/mdadm.conf configuration file to facilitate the management of software RAID configuration, such as starting and stopping
echo 'DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
Other common options of mdadm command
-r: remove device
-a: add device
-S: stop RAID
-A: start RAID
mdadm /dev/md0 -f /dev/sdb1
mdadm /dev/md0 -r /dev/sdb1
mdadm /dev/md0 -a /dev/sdb1
echo 'DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1' > /etc/mdadm.conf
mdadm --detail --scan >> /etc/mdadm.conf
mdadm -S /dev/md0
mdadm -As /dev/md0
# -s: look up the array's configuration in the /etc/mdadm.conf file