RAID in Detail, with RAID Management Software

What RAID is:

RAID -- Redundant Array of Independent Disks (multiple disks organized into one logical unit, to improve the I/O rate and provide redundancy)

Hardware RAID: a RAID card (configured through a BIOS-like interface)

Software RAID: simulated in software -- mdadm

 

RAID levels

RAID 0 : two or more physical disks striped together by hardware or software; data is written across all physical disks at once, so the I/O rate is multiplied; no redundancy; 100% of the disk space is usable.    RAID 0 --- striping mode

RAID 1 : at least two disks (2n disks in total); the I/O rate is not improved; the data is fully redundant; usable disk space is 100%/n.    RAID 1 --- mirror mode

RAID 5 : at least three disks; tolerates at most one damaged disk; improves the I/O rate while providing redundancy through parity; the parity blocks must be distributed across different disks.

RAID 6 : at least four disks; tolerates at most two damaged disks; provides dual parity.

RAID 10 : RAID 1 + RAID 0; requires at least four disks (mirror first for redundancy, then stripe for I/O rate)

First build two RAID 1 pairs, then combine the two RAID 1 arrays into a RAID 0 (mirror first, then stripe)

RAID 01 : RAID 0 + RAID 1; requires at least four disks (stripe first for I/O rate, then mirror for redundancy)

First build two RAID 0 arrays, then combine the two RAID 0 arrays into a RAID 1 (stripe first, then mirror)
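The usable-capacity rules above can be checked with simple arithmetic. A minimal sketch for this article's setup (four 2G disks); sizes in GB, illustration only:

```shell
# Usable capacity per RAID level for four 2G disks.
disks=4
size=2
raid0=$((disks * size))          # striping: all space usable
raid1=$((size))                  # mirroring: one disk's worth of space
raid5=$(((disks - 1) * size))    # one disk's worth goes to parity
raid6=$(((disks - 2) * size))    # two disks' worth go to dual parity
raid10=$((disks / 2 * size))     # stripe across mirror pairs: half the space
echo "RAID0=${raid0}G RAID1=${raid1}G RAID5=${raid5}G RAID6=${raid6}G RAID10=${raid10}G"
```

With four 2G disks this prints 8G for RAID 0, 2G for RAID 1, 6G for RAID 5, and 4G for both RAID 6 and RAID 10.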

 

 

I. Creating a software RAID 10 array (first add four disks; partitions can also be used)

 

/dev/sdb   /dev/sdc   /dev/sdd   /dev/sde -- four disks, 2G each

 

#]yum  -y  install  mdadm

mdadm option notes:

-a or --add : add a device to an array

-C or --create : create a new array

-c or --chunk= : set the array's chunk size, in KB

-l or --level= : set the RAID level of the array

-n or --raid-devices= : specify the number of active members (partitions/disks) in the array

-x or --spare-devices= : specify the number of spare disks in the array

-G or --grow : change the shape or size of an active array

-D or --detail : print detailed information about an array device

-s or --scan : scan the config file or /proc/mdstat for missing array information

-A : assemble (activate) a disk array

-f : mark a device as failed

-v or --verbose : show details

-r : remove a device

 

 

#]mdadm  -E  /dev/sd[b-e]   

 

(check whether these disks are already part of a RAID array)

 

If no superblock is detected, the disk has not been used in a RAID array.

#]mdadm  -Cv  /dev/md0  -a  yes  -n  4  -l  10  /dev/sdb  /dev/sdc  /dev/sdd /dev/sde

#]ll  /dev/md0

brw-rw---- 1 root disk 9, 0 Dec 31 21:37 /dev/md0

 

After the RAID is built, format it with a filesystem before use:

 

#]mkfs.ext4  /dev/md0

Create a mount point, mount the array on it, and check the mount information with df -hT:

#]mkdir  /RAID

#]mount  /dev/md0  /RAID

#]df  -hT

 

Viewing the array information:

#]mdadm  -D  /dev/md0

 

 

#]echo  "/dev/md0  /RAID  ext4  defaults  0  0"  >>  /etc/fstab   (mount automatically at boot)

#]mount  -a

 

In a real production environment, when a disk in a software RAID array is damaged, the repair procedure is as follows (mark /dev/sdb as failed to simulate the damage):

#]mdadm  /dev/md0  -f  /dev/sdb

#]mdadm  -D  /dev/md0

 

 

One damaged disk does not take down the whole array; the disk can be replaced and added back:

#]mdadm  /dev/md0  -a  /dev/sdb

mdadm: Cannot open /dev/sdb: Device or resource busy 

(/dev/sdb is still in use; comment out the automatic mount entry and reboot, then add /dev/sdb back to the array)

#]sed  -i  's&^/dev/md0&#/dev/md0&'  /etc/fstab
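The sed command above comments out the fstab entry in place; '&' is used as the delimiter because the pattern itself contains '/'. A sketch of the comment/uncomment toggle, run against a scratch file so the real /etc/fstab is untouched:

```shell
# Demonstrate the fstab comment/uncomment toggle on a scratch file.
fstab=$(mktemp)
echo "/dev/md0  /RAID  ext4  defaults  0  0" > "$fstab"

sed -i 's&^/dev/md0&#/dev/md0&' "$fstab"   # disable the automatic mount
commented=$(cat "$fstab")

sed -i 's&^#/dev/md0&/dev/md0&' "$fstab"   # re-enable it
restored=$(cat "$fstab")

rm -f "$fstab"
echo "$commented"
echo "$restored"
```

The first sed prefixes the line with '#', the second strips the '#' again, restoring the original entry.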

#]reboot

#]mdadm  /dev/md0  -a  /dev/sdb

mdadm: added /dev/sdb   (/dev/sdb was successfully added back to /dev/md0)

#]mdadm  -D  /dev/md0

 

 

After the RAID is restored, the mount information no longer shows the /dev/md0 device or the mount directory, because the earlier sed command commented out the fstab entry; re-enable the automatic mount at boot and remount.

Note: after the array is rebuilt, it may be renamed automatically. The original array has not disappeared, only its name has changed; if the array was renamed, just use the new name when mounting. (The reason is that the udev device manager names devices automatically; RHEL 6 does not change device names, RHEL 7 may.)

#]df  -hT

#]sed  -i  's&^#/dev/md0&/dev/md0&'  /etc/fstab

#]mount  -a

#]df  -hT

 

 

Stopping the RAID array

#]umount  /RAID

#]mdadm  -S  /dev/md0  

(the -S option stops the array, which is equivalent to deleting it; checking afterwards, the /dev/md0 device no longer exists)

 

 

A hardware RAID card supports hot-swapping and needs no reboot; a software RAID rebuild requires a reboot to take effect.

 

 

 

II. Creating a software RAID 5 array (first add four disks; partitions can also be used) -- RAID + spare disk

Spare disk: a standby disk used to rebuild the array when an active disk fails (the rebuild uses the parity data, the RAID 5 mechanism)

/dev/sdb   /dev/sdc   /dev/sdd   /dev/sde -- four disks, 2G each

 

#]mdadm  -Cv  /dev/md1  -a  yes  -l  5  -n  3  -x  1  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde

Note: whether the spare-disk option is placed before or after the member-count option, the spare is always taken from the disks listed after the active members. The spare only exists once the array is created; the result depends on the order of the disks, not the order of the options.

#]mdadm  -Cv  /dev/md1  -a  yes  -l  5  -n  3  -x  1  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde

#]mdadm  -Cv  /dev/md1  -a  yes  -l  5  -x  1  -n  3  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde

(both commands give the same result: /dev/sdb, /dev/sdc and /dev/sdd become the three active RAID disks, and /dev/sde becomes the spare)
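The selection rule just described can be sketched in a few lines (assumed behavior matching the note: with -n 3, the first three disks on the command line become active members and the rest become spares, wherever -n and -x appear):

```shell
# Sketch of mdadm's member selection: the first n disks listed become
# active members, the remaining ones become spares (option order is irrelevant).
disks="/dev/sdb /dev/sdc /dev/sdd /dev/sde"
n=3
active=$(echo "$disks" | cut -d' ' -f1-"$n")
spare=$(echo "$disks" | cut -d' ' -f"$((n + 1))"-)
echo "active: $active"
echo "spare:  $spare"
```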

 

 

#]mdadm  -D  /dev/md1

 

 

 

#]mkfs.ext4  /dev/md1

#]echo  "/dev/md1  /RAID  ext4  defaults  0  0"  >>  /etc/fstab

#]mount  /dev/md1  /RAID

#]df  -hT

 

 

After removing /dev/sdb from the array, check the state of /dev/md1:

#]mdadm  /dev/md1  -f  /dev/sdb

#]mdadm  -D  /dev/md1

 

 

Disable the automatic mount at boot and reboot, then rebuild the array and watch how the RAID 5 array changes:

#]sed  -i  's&^/dev/md1&#/dev/md1&'  /etc/fstab

#]reboot

#]mdadm  /dev/md1  -a  /dev/sdb

#]mdadm  -D  /dev/md1

 

 

Remount and use it:

#]sed  -i  's&^#/dev/md1&/dev/md1&'  /etc/fstab

#]mount  /dev/md1  /RAID   (or: mount  -a)

#]df  -hT

 

 

III. Creating a software RAID 01 array

/dev/sdb   /dev/sdc   /dev/sdd   /dev/sde -- four disks, 2G each

#]mdadm  -Cv  /dev/md2 -a yes -n 2 -l 0 /dev/sdb /dev/sdc

#]mdadm  -D  /dev/md2

#]mdadm  -Cv  /dev/md3 -a yes -n 2 -l 0 /dev/sdd /dev/sde

#]mdadm  -D  /dev/md3

#]mdadm  -Cv  /dev/md4 -a yes -n 2 -l 1 /dev/md2 /dev/md3

#]mdadm  -D  /dev/md4

#]mkfs.ext4  /dev/md4

#]echo  "/dev/md4  /RAID  ext4  defaults  0  0"  >>  /etc/fstab

#]mount  /dev/md4  /RAID

#]df  -hT
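The RAID 01 built above and the RAID 10 from section I differ in how many two-disk failures they survive. A sketch that enumerates all six two-disk failure combinations of sdb/sdc/sdd/sde, using the groupings from this article (RAID 10: mirror pairs bc and de, striped; RAID 01: stripes bc and de, mirrored):

```shell
# Enumerate every two-disk failure among disks b, c, d, e.
# RAID10 (mirrors bc and de, striped): fails only when a whole mirror pair dies.
# RAID01 (stripes bc and de, mirrored): one failed disk kills its whole stripe,
# so the array fails whenever the two failures hit different stripes.
r10=0
r01=0
for pair in bc bd be cd ce de; do
  case $pair in
    bc|de) r01=$((r01 + 1)) ;;   # same group: RAID01 keeps the other stripe intact
    *)     r10=$((r10 + 1)) ;;   # different groups: RAID10 keeps both mirrors alive
  esac
done
echo "RAID10 survives $r10 of 6 two-disk failures"
echo "RAID01 survives $r01 of 6 two-disk failures"
```

RAID 10 survives 4 of the 6 combinations while RAID 01 survives only 2, which is why RAID 10 is generally preferred in practice.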


Origin www.cnblogs.com/RXDXB/p/12128072.html