Introduction to Linux-RAID Disk Array

1. Detailed explanation of RAID disk array

1. Meaning

  • RAID (Redundant Array of Independent Disks) combines multiple independent physical hard disks in different ways into a hard disk group (a logical hard disk), providing higher storage performance than a single disk as well as data redundancy. The different ways of organizing the disks in an array are called RAID levels.

2. Commonly used RAID levels

RAID 0, RAID 1, RAID 5, RAID 6, RAID 10

3. RAID 0

  • RAID 0 (striped storage)
    RAID 0 splits data into stripes in units of bits or bytes and reads/writes them across multiple disks in parallel, so it has a high data transfer rate, but it provides no data redundancy.
    RAID 0 only improves performance; it gives no guarantee of data reliability, and the failure of any one disk destroys all of the data.
    RAID 0 should not be used where data security requirements are high.
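    As an illustration (this command is not part of the original walkthrough), a RAID 0 array can be built with the same mdadm tool used later in this article, assuming two spare partitions /dev/sdb1 and /dev/sdc1 of type "fd" already exist; the other levels differ only in the -l (level) and -n (member count) values:
    mdadm -Cv /dev/md0 -l0 -n2 /dev/sd[bc]1    # -l0 selects RAID 0 (striping)
    cat /proc/mdstat                           # confirm the striped array is active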

4. RAID 1

  • RAID 1 (mirrored storage)
    RAID 1 achieves data redundancy through disk mirroring: mutually backed-up copies of the data are kept on a pair of independent disks.
    When the original disk is busy, data can be read directly from the mirror copy, so read performance is improved.
    RAID 1 has the highest cost per unit of capacity of the RAID levels, but it provides high data security and availability. When a disk fails, the system automatically switches to the mirror disk for reading and writing, without needing to reorganize the failed data.

5. RAID 5

  • RAID 5
    RAID 5 is composed of N (N ≥ 3) disks. Each piece of data is divided into N-1 stripes plus one block of parity data, N blocks in total, which are stored in rotation and balanced across the N disks.
    With all N disks reading and writing at the same time, read performance is very high, but because of the parity mechanism write performance is relatively low.
    RAID 5 disk utilization is (N-1)/N.
    RAID 5 has high reliability: one disk may fail without affecting any data.

6. RAID 6

  • RAID 6
    RAID 6 is composed of N (N ≥ 4) disks, with a disk utilization of (N-2)/N.
    Compared with RAID 5, RAID 6 adds a second, independent parity block.
    The two independent parity schemes use different algorithms, so even if two disks fail at the same time the data remains usable.
    Compared with RAID 5, RAID 6 has a greater write penalty because every write must update two parity blocks instead of one, so its write performance is poorer.

7. RAID 1+0

  • RAID 1+0 (mirror first, then stripe)
    N (an even number, N ≥ 4) disks are mirrored in pairs, and the pairs are then combined into a RAID 0.
    Disk utilization is N/2.
    N/2 disks can be written at the same time, and all N disks can be read simultaneously.
    RAID 1+0 offers both high performance and high reliability.

8. RAID 0+1

  • RAID 0+1 (stripe first, then mirror)
    Read and write performance is the same as RAID 10.
    Fault tolerance is lower than RAID 10.
Summary of common RAID levels:

| RAID level | Number of disks | Disk utilization | Parity | Fault tolerance | Write performance |
| --- | --- | --- | --- | --- | --- |
| RAID 0 | N | N | No | None | N times that of a single disk |
| RAID 1 | N (even) | N/2 | No | One disk may fail | Data must be written to both disks of each mirror pair |
| RAID 5 | N ≥ 3 | (N-1)/N | Yes | One disk may fail | Writes must also compute parity |
| RAID 6 | N ≥ 4 | (N-2)/N | Yes | Two disks may fail | Writes must compute double parity |
| RAID 10 | N ≥ 4 (even) | N/2 | No | One disk in each mirror pair may fail | N/2 disks are written simultaneously |
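As a worked example (the disk sizes here are assumed, not taken from the article): with four 10 GB disks, RAID 0 yields 4 × 10 GB = 40 GB of usable space, RAID 5 yields (4-1)/4 × 40 GB = 30 GB, RAID 6 yields (4-2)/4 × 40 GB = 20 GB, and RAID 10 yields 4/2 × 10 GB = 20 GB; a two-disk RAID 1 yields 2/2 × 10 GB = 10 GB.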

2. Array card introduction and real machine configuration

1. Overview of the array card

  • Array card
  1. An array card is an expansion board used to implement RAID functionality.
  2. It is usually composed of a series of components such as an I/O processor, a hard disk controller, hard disk connectors and cache.
  3. Different RAID cards support different RAID levels,
    such as RAID 0, RAID 1, RAID 5, RAID 10, etc.
  4. RAID card interface types:
    IDE, SCSI, SATA and SAS

2. Cache of the array card


  • The cache is where the RAID card exchanges data with the external bus: the card first transfers data into the cache, and the cache then exchanges data with the external data bus.
    The size and speed of the cache are important factors that directly determine the card's actual transfer speed.
    Different RAID cards leave the factory with different cache capacities, generally ranging from several megabytes to hundreds of megabytes.

3. Example: Building a soft RAID 5 disk array

  • Steps to create a soft RAID 5 disk array:
  1. Add four 10 GB hard disks to the virtual machine and boot it
  2. Check whether the mdadm package has been installed
    rpm -q mdadm ——query whether the package is present
    yum install -y mdadm ——install it if it is missing
  3. View the partition situation and create partitions (set the partition ID type to "fd")
    fdisk -l ——view the current partitions
    fdisk /dev/sdb ——create a partition, then repeat for /dev/sdc, /dev/sdd and /dev/sde (the original screenshots show sdb as the example)
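    The interactive answers are the same for every disk, so as a convenience they can be scripted; this loop is a sketch that is not part of the original article and assumes the four disks are blank:
    # Keystrokes fed to fdisk: n = new partition, p = primary, 1 = partition number,
    # two empty answers accept the default start/end sectors, t + fd = type "Linux raid autodetect", w = write
    for d in b c d e; do
        printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk /dev/sd$d
    done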
  4. Create a RAID 5 device
    mdadm -Cv /dev/md5 -l5 -n3 /dev/sd[bcd]1 -x1 /dev/sde1
    -C: create a new array.
    -v: display detailed information during creation.
    /dev/md5: the name of the RAID 5 device to create.
    -a yes: short for --auto; automatically create any missing device files (can be omitted).
    -l: specify the RAID level; l5 means create RAID 5.
    -n: specify how many disks to build the RAID from; n3 means use 3 disks.
    /dev/sd[bcd]1: the 3 disk partitions used to build the RAID.
    -x: specify how many hot-spare disks the RAID should have; x1 reserves 1 idle disk as a spare.
    /dev/sde1: the partition used as the spare.
    View the detailed information of the RAID device:
    cat /proc/mdstat ——also shows the build progress
    or mdadm -D /dev/md5
    Use the watch command to refresh the output of /proc/mdstat at regular intervals:
    watch -n 5 'cat /proc/mdstat'
    Check whether a partition is already part of a RAID:
    mdadm -E /dev/sd[b-e]1
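    Optionally (this step is not in the original article), you can wait for the initial build to finish before creating the file system:
    mdadm -W /dev/md5    # --wait: block until any resync/recovery on the array has completed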
  5. Create and mount the file system
    mkfs -t xfs /dev/md5 (or mkfs.xfs /dev/md5) ——format the array
    mkdir /md5 ——create a mount point
    mount /dev/md5 /md5 ——mount it manually
    df -Th ——verify the mount
    To mount it automatically at boot (automatic mounting is covered in the article on Linux disk management and file systems, so it is not explained again here):
    cp /etc/fstab /etc/fstab.bak
    vim /etc/fstab
    /dev/md5  /md5  xfs  defaults  0 0
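    A slightly more robust variant (an addition, not part of the original steps) is to reference the file system by UUID in /etc/fstab, since md device numbering can change between boots; the UUID below is a placeholder for the value printed by blkid:
    blkid /dev/md5    # print the file system UUID
    UUID=<uuid-from-blkid>  /md5  xfs  defaults  0 0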
  6. Simulate a fault
    mdadm /dev/md5 -f /dev/sdb1 ——mark /dev/sdb1 as faulty to simulate a failure
    mdadm -D /dev/md5 ——verify that sde1 has taken over from sdb1 and that sdb1 is in the faulty state
  7. Recover from the failure (method one)
    mdadm /dev/md5 -r /dev/sdb1 ——remove the faulty partition
    mdadm /dev/md5 -a /dev/sdb1 ——add it back
    At this point sdb1 is in the spare state, which shows that the failure has been recovered.
    Recover from the failure (method two)
    Create the /etc/mdadm.conf configuration file to make the software RAID easier to manage, for example to stop and start it:
    echo 'DEVICE /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1' > /etc/mdadm.conf
    mdadm --detail --scan >> /etc/mdadm.conf
    cat /etc/mdadm.conf
    umount /dev/md5
    mdadm -S /dev/md5 ——stop the array
    mdadm -As /dev/md5 ——assemble and start it again from the configuration file
  8. Summary of other commonly used mdadm options
    -r: remove a device
    -a: add a device
    -S: stop an array
    -A: assemble (start) an array
    mdadm /dev/md5 -f /dev/sdb1 ——simulate a failure
    mdadm /dev/md5 -r /dev/sdb1 ——remove
    mdadm /dev/md5 -a /dev/sdb1 ——add
    echo 'DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1' > /etc/mdadm.conf
    mdadm --detail --scan >> /etc/mdadm.conf
    umount /dev/md5
    mdadm -S /dev/md5
    mdadm -As /dev/md5
    -s: scan the /etc/mdadm.conf file for the configuration information.

4. Example: constructing a soft RAID 10 disk array

Steps to create a soft RAID 10 disk array (mirror first, then stripe):

  • The first 3 steps are the same as for RAID 5, so they are not repeated here.
  • 4. Create the RAID 10 device
    mdadm -Cv /dev/md0 -l1 -n2 /dev/sd[bc]1
    mdadm -Cv /dev/md1 -l1 -n2 /dev/sd[de]1
    mdadm -Cv /dev/md10 -l0 -n2 /dev/md0 /dev/md1
  • Steps 5-7 are similar to RAID 5; only the outline is listed here, with a command sketch after the list
  1. Format, create a directory and mount
  2. Simulate a failure
  3. Recover from the failure
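    As a sketch of those remaining steps applied to /dev/md10 (mirroring the RAID 5 walkthrough above; the mount point /md10 is an assumed name):
    mkfs.xfs /dev/md10            # format the striped mirror
    mkdir /md10
    mount /dev/md10 /md10
    df -Th
    mdadm /dev/md0 -f /dev/sdb1   # simulate a failure in one mirror pair
    mdadm -D /dev/md0             # the mirror is degraded, but /dev/md10 remains usable
    mdadm /dev/md0 -r /dev/sdb1   # remove the failed member
    mdadm /dev/md0 -a /dev/sdb1   # re-add it; the RAID 1 pair rebuilds automatically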


Origin blog.csdn.net/s15212790607/article/details/113360530