Managing RAID 1 with mdadm on Linux

1. How RAID 1 works

RAID 1 (mirroring) requires two or more disks.
Principle: data written to one disk is mirrored onto another at the same time, so every write lands on both members simultaneously (synchronous duplication).
Behavior on failure: when one disk fails, the system drops it and continues reading and writing from the surviving mirror, which gives good redundancy.

Disk utilization is 50%: two 100 G disks in RAID 1 provide only 100 G of usable space.
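The 50% figure follows directly from mirroring: usable space equals the smallest member disk, not the sum. A quick sketch of the arithmetic (the disk sizes are the hypothetical 100 G from the example above):

```shell
# RAID 1 usable capacity = size of the smallest member disk.
# Hypothetical member sizes in GiB, matching the example above:
disk_a=100
disk_b=100
usable=$(( disk_a < disk_b ? disk_a : disk_b ))
raw=$(( disk_a + disk_b ))
utilization=$(( 100 * usable / raw ))
echo "usable=${usable}G raw=${raw}G utilization=${utilization}%"
# prints: usable=100G raw=200G utilization=50%
```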

2. Lab outline

1. Create a RAID 1 array
2. Add a hot-spare disk
3. Simulate a disk failure and watch the spare take over automatically
4. Tear down the array

1) Create the partitions

[root@localhost ~]# fdisk /dev/sdd
......(interactive fdisk dialog omitted; four partitions were created)
[root@localhost ~]# ll /dev/sdd*
brw-rw---- 1 root disk 8, 48 2020-02-28 02:05 /dev/sdd
brw-rw---- 1 root disk 8, 49 2020-02-28 02:05 /dev/sdd1
brw-rw---- 1 root disk 8, 50 2020-02-28 02:05 /dev/sdd2
brw-rw---- 1 root disk 8, 51 2020-02-28 02:05 /dev/sdd3
brw-rw---- 1 root disk 8, 52 2020-02-28 02:05 /dev/sdd4
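The fdisk dialog above is interactive; a scriptable alternative is sfdisk. A sketch, demonstrated on a throwaway image file so no real disk is touched (point it at /dev/sdd only after triple-checking the device name; partition type fd marks "Linux raid autodetect"):

```shell
# Create a 100 MiB scratch image and partition it non-interactively.
truncate -s 100M disk.img
sfdisk disk.img <<'EOF'
label: dos
,20M,fd
,20M,fd
,20M,fd
,,fd
EOF
sfdisk -l disk.img     # list the resulting partition table
```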

2) Create the RAID 1 array

[root@localhost ~]# mdadm -C -v /dev/md2 -l 1 -n 2 -x 1 /dev/sdd1 /dev/sdd[2,3]
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 5237760K
Continue creating array? y
mdadm: Fail create md2 when using /sys/module/md_mod/parameters/new_array
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.

3) Save the RAID 1 configuration to a file

[root@localhost ~]# mdadm -Dsv > /etc/mdadm.conf
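`mdadm -Dsv` prints one `ARRAY` line per array; saving it to /etc/mdadm.conf lets mdadm reassemble the array by UUID at boot. For the array built here the saved line looks roughly like the following (the UUID matches this array's `mdadm -D` output; treat the exact fields as illustrative):

```
ARRAY /dev/md2 level=raid1 num-devices=2 metadata=1.2 spares=1 name=localhost.localdomain:2 UUID=1412ba50:25d4bdec:cc633d33:a0a931cb
```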

4) Inspect the array

[root@localhost ~]# mdadm -D /dev/md2 
/dev/md2:
           Version : 1.2
     Creation Time : Fri Feb 28 02:06:53 2020
        Raid Level : raid1
        Array Size : 5237760 (5.00 GiB 5.36 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Fri Feb 28 02:07:20 2020
             State : clean 
    Active Devices : 2
   Working Devices : 3
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : unknown

              Name : localhost.localdomain:2  (local to host localhost.localdomain)
              UUID : 1412ba50:25d4bdec:cc633d33:a0a931cb
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       49        0      active sync   /dev/sdd1
       1       8       50        1      active sync   /dev/sdd2

       2       8       51        -      spare   /dev/sdd3
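A quicker health check than `mdadm -D` is the kernel's one-line summary in /proc/mdstat. For the array above you would expect output along these lines; the exact line is illustrative, with `(S)` marking the hot spare and `[UU]` meaning both mirrors are up:

```shell
cat /proc/mdstat
# Expected shape (illustrative):
#   md2 : active raid1 sdd3[2](S) sdd2[1] sdd1[0]
#         5237760 blocks super 1.2 [2/2] [UU]
```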

5) Create a filesystem on the RAID device and mount it

[root@localhost ~]# mkfs.xfs /dev/md2 
meta-data=/dev/md2               isize=512    agcount=4, agsize=327360 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1309440, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# mkdir /raid1
[root@localhost ~]# mount /dev/md2 /raid1/
[root@localhost ~]# df -h | tail -1
/dev/md2             5.0G   33M  5.0G   1% /raid1
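To survive a reboot, the mount also needs an /etc/fstab entry. A hypothetical line, using the filesystem UUID reported by `blkid /dev/md2` rather than the device name (md device numbering can change between boots):

```
# /etc/fstab -- replace the placeholder with the UUID blkid reports
UUID=<uuid-from-blkid>  /raid1  xfs  defaults  0 0
```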

6) Create a test file to check whether the data survives a disk failure

[root@localhost ~]# cd /raid1/
[root@localhost raid1]# touch a.txt
[root@localhost raid1]# echo "disk died, I am still here" >a.txt

7) Simulate a failure (sdd1 dies)

[root@localhost raid1]# mdadm /dev/md2 -f /dev/sdd1
mdadm: set /dev/sdd1 faulty in /dev/md2
[root@localhost raid1]# mdadm -D /dev/md2 
/dev/md2:
           Version : 1.2
     Creation Time : Fri Feb 28 02:06:53 2020
        Raid Level : raid1
        Array Size : 5237760 (5.00 GiB 5.36 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 2
     Total Devices : 3
       Persistence : Superblock is persistent

       Update Time : Fri Feb 28 02:15:20 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : unknown

              Name : localhost.localdomain:2  (local to host localhost.localdomain)
              UUID : 1412ba50:25d4bdec:cc633d33:a0a931cb
            Events : 36

    Number   Major   Minor   RaidDevice State
       2       8       51        0      active sync   /dev/sdd3
       1       8       50        1      active sync   /dev/sdd2

       0       8       49        -      faulty   /dev/sdd1

# Check that the file is still there after the failure
[root@localhost raid1]# cat /raid1/a.txt
disk died, I am still here

8) Remove the failed device

[root@localhost raid1]# mdadm -r /dev/md2 /dev/sdd1
mdadm: hot removed /dev/sdd1 from /dev/md2
# Verify that it was removed
[root@localhost raid1]# mdadm -D /dev/md2 
/dev/md2:
           Version : 1.2
     Creation Time : Fri Feb 28 02:06:53 2020
        Raid Level : raid1
        Array Size : 5237760 (5.00 GiB 5.36 GB)
     Used Dev Size : 5237760 (5.00 GiB 5.36 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Fri Feb 28 02:19:53 2020
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : unknown

              Name : localhost.localdomain:2  (local to host localhost.localdomain)
              UUID : 1412ba50:25d4bdec:cc633d33:a0a931cb
            Events : 37

    Number   Major   Minor   RaidDevice State
       2       8       51        0      active sync   /dev/sdd3
       1       8       50        1      active sync   /dev/sdd2

9) Add a new hot-spare disk

[root@localhost raid1]# mdadm -a /dev/md2 /dev/sdb
mdadm: added /dev/sdb

Summary: RAID 1

  1. A single failed disk does not stop a RAID 1 array from running.
  2. Disk utilization is 50%.
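Step 4 of the lab outline (tearing down the array) is not shown above. A sketch, assuming the same device names as this example; all of these commands need root, and zeroing the superblocks destroys the array's metadata:

```shell
umount /raid1                                          # stop using the filesystem
mdadm -S /dev/md2                                      # stop (deactivate) the array
mdadm --zero-superblock /dev/sdd2 /dev/sdd3 /dev/sdb   # wipe md metadata from the members
sed -i '/md2/d' /etc/mdadm.conf                        # drop the stale ARRAY line
```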

Reposted from blog.csdn.net/chen_jimo_c/article/details/104542411