Introduction to RAID

What RAID provides

  • Better I/O performance: disks read and write in parallel
  • Better durability: achieved through disk redundancy

Implementation approaches

  • Hardware-based RAID and software-based RAID

Common RAID levels

RAID 0

RAID 0 is also known as striping (striped): consecutive data is split into chunks and spread across multiple disks. When the system issues an I/O request, the member disks can service it in parallel, each handling its own share of the data.

Advantages: reads and writes are performed in parallel across the devices, so both read and write performance increase.

Disadvantages: RAID 0 has no redundancy; if any drive fails, none of the data can be recovered.

Requirements: 2 or more disks (some implementations accept a single-disk RAID 0, but it provides no striping benefit)

RAID 1

RAID 1, also known as mirroring (Mirroring), is a fully redundant mode. RAID 1 can be used with 2 (or 2×N) disks plus zero or more spare disks; every write is simultaneously written to the mirror disk.

Advantages: very high reliability.

Disadvantages: little improvement in read speed; the effective capacity shrinks to half of the total. The disks should also be the same size, otherwise each mirror is limited to the size of the smallest disk.

Requirements: at least 2 disks

RAID 5

RAID 5 can be understood as a compromise between RAID 0 and RAID 1: rather than full mirroring, it uses parity information as the means of recovering data.

Advantages: less protection than RAID 1 but better disk-space utilization; read performance is close to RAID 0.

Disadvantages: only a single disk failure is tolerated, so a failed disk must be dealt with promptly.

Requirements: at least 3 disks; usable capacity is (n-1)/n of the total disk capacity (n = number of disks)

RAID 6

RAID 6 is an extension of RAID 5. As in RAID 5, data and parity are divided into blocks and distributed across the disks of the array; RAID 6 adds a second, independent set of parity, likewise distributed across the disks, so the array tolerates two disks failing at the same time.

Advantages: RAID 6 was designed on top of RAID 5 to strengthen data protection; it can survive the loss of 2 disks.

Disadvantages: no notable performance improvement.

Requirements: at least 4 disks; usable capacity: C = (N-2) × D, where C = usable capacity, N = number of disks, D = capacity of a single disk.

RAID 10

RAID 10 is built from RAID 1 pairs: every 2 disks form a RAID 1 mirror, and the RAID 1 sets are then striped together as RAID 0.

Advantages: balances safety and speed. With the minimum of 4 disks, RAID 10 can survive up to 2 failed disks (at most one per mirror pair), and the fault tolerance grows as more disks are added, which RAID 5 cannot match.

Disadvantages: requires more disks, and only half of the raw capacity is usable.

Requirements: at least 4 disks
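
As a quick sanity check of the capacity rules above, a minimal shell sketch (the disk count and size are made-up values):

    N=4; D=10                              # 4 disks of 10 GB each
    echo "RAID 5 : $(( (N-1)*D )) GB"      # (N-1)*D = 30 GB usable
    echo "RAID 6 : $(( (N-2)*D )) GB"      # (N-2)*D = 20 GB usable
    echo "RAID 10: $(( N*D/2 )) GB"        # half of the raw total = 20 GB usable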

Implementation approaches

  • Hardware implementation: what production environments use
  • Software implementation

Implementing software RAID on CentOS

mdadm

mdadm (short for "multiple devices admin") is the standard software RAID management tool on Linux.

Syntax

mdadm [mode] <raiddevice> [options] <component-devices>

mode

-C: create a new array
    -n #: use # block devices as active members of the RAID;
    -l #: RAID level to create;
    -a {yes|no}: automatically create the device file for the target RAID device;
    -c CHUNK_SIZE: chunk size;
    -x #: number of spare disks;

-A: assemble an existing array
-F: follow/monitor
-a: add a disk
-r: remove a disk
-f: mark the given disk as faulty
-D: display detailed information about the RAID
-S: stop an md device
cat /proc/mdstat: check the status of md devices

<raiddevice>: /dev/md#

<component-devices>: any block device
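
Putting the pieces together, some representative invocations (a sketch; device names are placeholders):

    mdadm -C /dev/md0 -a yes -l 5 -n 3 -x 1 -c 512 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1   # create RAID 5 with one spare
    mdadm -D /dev/md0                    # show details
    mdadm /dev/md0 -f /dev/sdb1          # mark a member faulty
    mdadm /dev/md0 -r /dev/sdb1          # remove it
    mdadm /dev/md0 -a /dev/sdf1          # add a replacement
    mdadm -S /dev/md0                    # stop the array
    mdadm -A /dev/md0 /dev/sdc1 /dev/sdd1 /dev/sde1   # reassemble later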

Examples

  • Create a RAID 1 device with 10 GB of usable space, a chunk size of 128k, an ext4 filesystem, and one spare disk, automatically mounted at /backup on boot

    [root@localhost ~]#  fdisk /dev/sdb
    Welcome to fdisk (util-linux 2.23.2).
    
    Changes will remain in memory only, until you decide to write them.
    Be careful before using the write command.
    
    Command (m for help): n
    Partition type:
       p   primary (0 primary, 0 extended, 4 free)
       e   extended
    Select (default p): e                       // create an extended partition
    Partition number (1-4, default 1): 1
    First sector (2048-83886079, default 2048):
    Using default value 2048
    Last sector, +sectors or +size{K,M,G} (2048-83886079, default 83886079):
    Using default value 83886079
    Partition 1 of type Extended and of size 40 GiB is set
    
    Command (m for help): n
    Partition type:
       p   primary (0 primary, 1 extended, 3 free)
       l   logical (numbered from 5)
    Select (default p): l                       // create 4 logical partitions of 10 GB each
    Adding logical partition 5
    First sector (4096-83886079, default 4096):
    Using default value 4096
    Last sector, +sectors or +size{K,M,G} (4096-83886079, default 83886079): +10G
    Partition 5 of type Linux and of size 10 GiB is set
    
    .....
    
    Command (m for help): t                        // change the partition type to fd
    Partition number (1,5-8, default 8): 8
    Hex code (type L to list all codes): fd
    Changed type of partition 'Linux' to 'Linux raid autodetect'
    
    ....
    
    Command (m for help): p
    
    Disk /dev/sdb: 42.9 GB, 42949672960 bytes, 83886080 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x668a8f98
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            2048    83886079    41942016    5  Extended
    /dev/sdb5            4096    20975615    10485760   fd  Linux raid autodetect
    /dev/sdb6        20977664    41949183    10485760   fd  Linux raid autodetect
    /dev/sdb7        41951232    62922751    10485760   fd  Linux raid autodetect
    /dev/sdb8        62924800    83886079    10480640   fd  Linux raid autodetect
    
    [root@localhost ~]# mdadm -C /dev/md1 -l 1 -c 128 -n 2 /dev/sdb{5,6}            // create the RAID 1 array (the spare called for above would additionally need -x 1 and a third partition)
    mdadm: /dev/sdb5 appears to be part of a raid array:
           level=raid1 devices=4 ctime=Tue Mar 27 09:40:52 2018
    mdadm: Note: this array has metadata at the start and
        may not be suitable as a boot device.  If you plan to
        store '/boot' on this device please ensure that
        your boot-loader understands md/v1.x metadata, or use
        --metadata=0.90
    mdadm: /dev/sdb6 appears to be part of a raid array:
           level=raid1 devices=4 ctime=Tue Mar 27 09:40:52 2018
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md1 started.
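    [root@localhost ~]# cat /proc/mdstat                 // optional check (not in the original session): watch the initial mirror resync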
    
    [root@localhost ~]# mkfs -t ext4 /dev/md1           // create the ext4 filesystem
    mke2fs 1.42.9 (28-Dec-2013)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=0 blocks, Stripe width=0 blocks
    655360 inodes, 2618112 blocks
    130905 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2151677952
    80 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks: 
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
    
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done
    
    [root@localhost ~]# mount /dev/md1 /backup          // mount it
    [root@localhost ~]# mount | grep md1
    /dev/md1 on /backup type ext4 (rw,relatime,seclabel,data=ordered)
    [root@localhost ~]# echo -e "/dev/md1\t/backup\text4\tdefaults\t0\t0" >> /etc/fstab
    
    [root@localhost ~]#  mdadm -D /dev/md1
    /dev/md1:
               Version : 1.2
         Creation Time : Tue Mar 27 10:11:29 2018
            Raid Level : raid1
            Array Size : 10477568 (9.99 GiB 10.73 GB)
         Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
          Raid Devices : 2
         Total Devices : 2
           Persistence : Superblock is persistent
    
           Update Time : Tue Mar 27 10:12:21 2018
                 State : clean 
        Active Devices : 2
       Working Devices : 2
        Failed Devices : 0
         Spare Devices : 0
    
    Consistency Policy : unknown
    
                  Name : localhost.localdomain:1  (local to host localhost.localdomain)
                  UUID : 9b3f7737:b4d56c2a:4fd82763:905c60a7
                Events : 17
    
        Number   Major   Minor   RaidDevice State
           0       8       21        0      active sync   /dev/sdb5
           1       8       22        1      active sync   /dev/sdb6
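
Because md device numbers are not guaranteed to be stable across reboots, a common safeguard (a sketch, not part of the original walkthrough) is to record the array in /etc/mdadm.conf and/or mount by filesystem UUID:

    [root@localhost ~]# mdadm --detail --scan >> /etc/mdadm.conf
    [root@localhost ~]# blkid /dev/md1          # shows the UUID=... that can replace /dev/md1 in /etc/fstab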
    
  • Create a RAID 5 device with a chunk size of 256k and an ext4 filesystem, automatically mounted at /mydata on boot

    [root@localhost ~]# mdadm -C /dev/md2 -l 256 -n 3 /dev/sdb{5,6,7}   // first attempt: the chunk size was mistakenly passed to -l
    mdadm: invalid raid level: 256
    [root@localhost ~]# mdadm -C /dev/md2 -l 5 -c 256 -n 3 /dev/sdb{5,6,7}          // corrected: RAID 5, chunk 256k
    mdadm: /dev/sdb5 appears to be part of a raid array:
           level=raid1 devices=2 ctime=Tue Mar 27 10:11:29 2018
    mdadm: /dev/sdb6 appears to be part of a raid array:
           level=raid1 devices=2 ctime=Tue Mar 27 10:11:29 2018
    mdadm: /dev/sdb7 appears to be part of a raid array:
           level=raid1 devices=4 ctime=Tue Mar 27 09:40:52 2018
    Continue creating array? y
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md2 started.
    
    [root@localhost ~]# mkfs.ext4 /dev/md2          // create the filesystem
    mke2fs 1.42.9 (28-Dec-2013)
    Filesystem label=
    OS type: Linux
    Block size=4096 (log=2)
    Fragment size=4096 (log=2)
    Stride=64 blocks, Stripe width=128 blocks
    1310720 inodes, 5238784 blocks
    261939 blocks (5.00%) reserved for the super user
    First data block=0
    Maximum filesystem blocks=2153775104
    160 block groups
    32768 blocks per group, 32768 fragments per group
    8192 inodes per group
    Superblock backups stored on blocks: 
            32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
            4096000
    Allocating group tables: done
    Writing inode tables: done
    Creating journal (32768 blocks): done
    Writing superblocks and filesystem accounting information: done 
    
    [root@localhost ~]# watch -n1 cat /proc/mdstat      // check status with watch, refreshing every second
    Personalities : [raid1] [raid6] [raid5] [raid4] 
    md2 : active raid5 sdb7[3] sdb6[1] sdb5[0]
          20955136 blocks super 1.2 level 5, 256k chunk, algorithm 2 [3/3] [UUU]
    
    unused devices: <none>
    
    [root@localhost ~]# mount /dev/md2 /mydata  // mount it
    [root@localhost ~]# mount | grep md2
    /dev/md2 on /mydata type ext4 (rw,relatime,seclabel,stripe=128,data=ordered) 
    [root@localhost ~]# echo -e "/dev/md2\t/mydata\text4\tdefaults\t0\t0" >>/etc/fstab
    
    [root@localhost mydata]# df -h | grep mydata        // check capacity: (3-1) × 10 GB = 20 GB usable
    /dev/md2              20G   45M   19G    1% /mydata
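
The Stride and Stripe width values that mke2fs reported above follow directly from the RAID geometry; as a quick check of the arithmetic:

    # stride       = chunk size / ext4 block size = 256K / 4K   = 64 blocks
    # stripe width = stride * data disks          = 64 * (3 - 1) = 128 blocks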
    
  • Create a RAID 10 device with 20 GB of usable space, a chunk size of 256k, and an ext4 filesystem, automatically mounted at /mydata on boot

    [root@localhost ~]# fdisk -l /dev/sdb
    
    Disk /dev/sdb: 107.4 GB, 107374182400 bytes, 209715200 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0xa0895273
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1            2048   209715199   104856576    5  Extended
    /dev/sdb5            4096    20975615    10485760   fd  Linux raid autodetect
    /dev/sdb6        20977664    41949183    10485760   fd  Linux raid autodetect
    /dev/sdb7        41951232    62922751    10485760   fd  Linux raid autodetect
    /dev/sdb8        62924800    83896319    10485760   fd  Linux raid autodetect
    
    [root@localhost ~]# mdadm -C /dev/md0 -l 10 -c 256 -n4 /dev/sdb[5,6,7,8]
    mdadm: Defaulting to version 1.2 metadata
    mdadm: array /dev/md0 started.
    
    [root@localhost ~]# watch -n1  cat /proc/mdstat
    Every 1.0s: cat /proc/mdstat                                        Wed Mar 28 07:25:33 2018
    
    Personalities : [raid10]
    md0 : active raid10 sdb8[3] sdb7[2] sdb6[1] sdb5[0]
          20955136 blocks super 1.2 256K chunks 2 near-copies [4/4] [UUUU]
          [===================>.]  resync = 98.9% (20737664/20955136) finish=0.0min speed=202088K/sec
    
    unused devices: <none>
    
    [root@localhost ~]# mdadm -D /dev/md0
    /dev/md0:
               Version : 1.2
         Creation Time : Wed Mar 28 07:23:37 2018
            Raid Level : raid10
            Array Size : 20955136 (19.98 GiB 21.46 GB)
         Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
          Raid Devices : 4
         Total Devices : 4
           Persistence : Superblock is persistent
    
           Update Time : Wed Mar 28 07:25:22 2018
                 State : clean 
        Active Devices : 4
       Working Devices : 4
        Failed Devices : 0
         Spare Devices : 0
    
                Layout : near=2
            Chunk Size : 256K
    
    Consistency Policy : unknown
    
                  Name : localhost.localdomain:0  (local to host localhost.localdomain)
                  UUID : 86607053:fd36d282:85e68c5e:ef1444c0
                Events : 17
    
        Number   Major   Minor   RaidDevice State
           0       8       21        0      active sync set-A   /dev/sdb5
           1       8       22        1      active sync set-B   /dev/sdb6
           2       8       23        2      active sync set-A   /dev/sdb7
           3       8       24        3      active sync set-B   /dev/sdb8
    
    
    [root@localhost ~]# mkfs.ext4 /dev/md0
    [root@localhost ~]# mount /dev/md0 /mydata
    [root@localhost ~]# echo -e "/dev/md0\t/mydata\text4\tdefaults\t0\t0" >>/etc/fstab
    
    [root@localhost ~]# df -h
    Filesystem           Size  Used Avail Use% Mounted on
    /dev/mapper/cl-root   50G  8.4G   42G  17% /
    devtmpfs             478M     0  478M   0% /dev
    tmpfs                489M     0  489M   0% /dev/shm
    tmpfs                489M  6.7M  482M   2% /run
    tmpfs                489M     0  489M   0% /sys/fs/cgroup
    /dev/sda1           1014M  139M  876M  14% /boot
    /dev/md0              20G   45M   19G   1% /mydata
    /dev/mapper/cl-home   47G   33M   47G   1% /home
    tmpfs                 98M     0   98M   0% /run/user/0
    
  • Stop the RAID (unmount it first)

    [root@localhost ~]# umount /dev/md2
    [root@localhost ~]# mdadm -S /dev/md2
    mdadm: stopped /dev/md2
    [root@localhost mydata]# mdadm -D /dev/md2
    mdadm: cannot open /dev/md2: No such file or directory
    
  • Reassemble

    [root@localhost mydata]# mdadm -A /dev/md2 /dev/sdb[5,6,7]
    mdadm: /dev/md2 has been started with 3 drives.
    
    [root@localhost mydata]# mdadm -D /dev/md2
    /dev/md2:
               Version : 1.2
         Creation Time : Tue Mar 27 10:24:24 2018
            Raid Level : raid5
            Array Size : 20955136 (19.98 GiB 21.46 GB)
         Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
          Raid Devices : 3
         Total Devices : 3
           Persistence : Superblock is persistent
    
           Update Time : Tue Mar 27 11:48:10 2018
                 State : clean 
        Active Devices : 3
       Working Devices : 3
        Failed Devices : 0
         Spare Devices : 0
    
                Layout : left-symmetric
            Chunk Size : 256K
    
    Consistency Policy : unknown
    
                  Name : localhost.localdomain:2  (local to host localhost.localdomain)
                  UUID : f7471726:accc58ee:b83789df:131c0d68
                Events : 20
    
        Number   Major   Minor   RaidDevice State
           0       8       21        0      active sync   /dev/sdb5
           1       8       22        1      active sync   /dev/sdb6
           3       8       23        2      active sync   /dev/sdb7
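
If the array definition has been saved to /etc/mdadm.conf (see the sketch in the RAID 1 example), the members do not need to be listed by hand:

    [root@localhost mydata]# mdadm -A --scan            # assemble every array listed in /etc/mdadm.conf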
    
  • Simulate a disk failure

    [root@localhost mydata]# mdadm /dev/md2 -f /dev/sdb7
    mdadm: set /dev/sdb7 faulty in /dev/md2
    
    [root@localhost mydata]# mdadm -D /dev/md2
    /dev/md2:
               Version : 1.2
         Creation Time : Tue Mar 27 10:24:24 2018
            Raid Level : raid5
            Array Size : 20955136 (19.98 GiB 21.46 GB)
         Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
          Raid Devices : 3
         Total Devices : 3
           Persistence : Superblock is persistent
    
           Update Time : Tue Mar 27 13:35:52 2018
                 State : clean, degraded 
        Active Devices : 2
       Working Devices : 2
        Failed Devices : 1
         Spare Devices : 0
    
                Layout : left-symmetric
            Chunk Size : 256K
    
    Consistency Policy : unknown
    
                  Name : localhost.localdomain:2  (local to host localhost.localdomain)
                  UUID : f7471726:accc58ee:b83789df:131c0d68
                Events : 22
    
        Number   Major   Minor   RaidDevice State
           0       8       21        0      active sync   /dev/sdb5
           1       8       22        1      active sync   /dev/sdb6
           -       0        0        2      removed
    
           3       8       23        -      faulty   /dev/sdb7
    
  • Remove the failed disk

    [root@localhost mydata]# mdadm /dev/md2 -r /dev/sdb7
    
    [root@localhost mydata]# mdadm -D /dev/md2
    /dev/md2:
               Version : 1.2
         Creation Time : Tue Mar 27 10:24:24 2018
            Raid Level : raid5
            Array Size : 20955136 (19.98 GiB 21.46 GB)
         Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
          Raid Devices : 3
         Total Devices : 2
           Persistence : Superblock is persistent
    
           Update Time : Tue Mar 27 13:38:48 2018
                 State : clean, degraded 
        Active Devices : 2
       Working Devices : 2
        Failed Devices : 0
         Spare Devices : 0
    
                Layout : left-symmetric
            Chunk Size : 256K
    
    Consistency Policy : unknown
    
                  Name : localhost.localdomain:2  (local to host localhost.localdomain)
                  UUID : f7471726:accc58ee:b83789df:131c0d68
                Events : 25
    
        Number   Major   Minor   RaidDevice State
           0       8       21        0      active sync   /dev/sdb5
           1       8       22        1      active sync   /dev/sdb6
           -       0        0        2      removed
    
  • Add a new disk; the array then rebuilds onto it

    [root@localhost mydata]# mdadm /dev/md2 -a /dev/sdb7
    mdadm: added /dev/sdb7
    
    [root@localhost mydata]# watch -n 1 cat /proc/mdstat
    Every 1.0s: cat /proc/mdstat                                                                                                                   Tue Mar 27 13:45:50 2018
    
    Personalities : [raid1] [raid6] [raid5] [raid4]
    md2 : active raid5 sdb7[3] sdb5[0] sdb6[1]
          20955136 blocks super 1.2 level 5, 256k chunk, algorithm 2 [3/2] [UU_]
          [======>..............]  recovery = 34.4% (3608832/10477568) finish=0.5min speed=212284K/sec
    
    unused devices: <none>
    [root@localhost mydata]# mdadm -D /dev/md2
    /dev/md2:
               Version : 1.2
         Creation Time : Tue Mar 27 10:24:24 2018
            Raid Level : raid5
            Array Size : 20955136 (19.98 GiB 21.46 GB)
         Used Dev Size : 10477568 (9.99 GiB 10.73 GB)
          Raid Devices : 3
         Total Devices : 3
           Persistence : Superblock is persistent
    
           Update Time : Tue Mar 27 13:46:25 2018
                 State : clean 
        Active Devices : 3
       Working Devices : 3
        Failed Devices : 0
         Spare Devices : 0
    
                Layout : left-symmetric
            Chunk Size : 256K
    
    Consistency Policy : unknown
    
                  Name : localhost.localdomain:2  (local to host localhost.localdomain)
                  UUID : f7471726:accc58ee:b83789df:131c0d68
                Events : 66
    
        Number   Major   Minor   RaidDevice State
           0       8       21        0      active sync   /dev/sdb5
           1       8       22        1      active sync   /dev/sdb6
           3       8       23        2      active sync   /dev/sdb7
    
  • Wipe the metadata

    [root@localhost mydata]# mdadm --misc --zero-superblock /dev/sdb5
    [root@localhost mydata]# mdadm -E /dev/sdb5
    mdadm: No md superblock detected on /dev/sdb5.
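
To retire an array completely (a sketch; device paths assume the examples above), stop it, wipe every member's superblock, and remove the stale configuration so nothing reassembles it at boot:

    [root@localhost ~]# umount /mydata
    [root@localhost ~]# mdadm -S /dev/md2
    [root@localhost ~]# for dev in /dev/sdb{5,6,7}; do mdadm --zero-superblock "$dev"; done
    # finally, delete the matching lines from /etc/fstab (and /etc/mdadm.conf, if present)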


Reposted from blog.csdn.net/eighteenxu/article/details/79718225