Linux Software RAID --- mdadm

 

. Creating RAID arrays

 

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb{1,2}

 

mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb{1,2,3}

 

Adding a hot spare:

mdadm /dev/md0 --add /dev/sdb3

 

Creating RAID 10 (a RAID 0 stripe over two RAID 1 mirrors):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb{1,2}

# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb{3,4} -a yes

# mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md{0,1} -a yes
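
To verify the nested layout, query the top-level array; it should report the two mirror devices as its members (a quick sanity check using the device names from above):

# mdadm --detail /dev/md2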

 

 

. /proc/mdstat

[root@station20 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sdb10[3] sdb9[1] sdb8[0]

      196224 blocks level 5, 64k chunk, algorithm 2 [3/2] [UU_]

      [===============>.....]  recovery = 79.1% (78592/98112) finish=0.0min speed=4136K/sec

     

md0 : active raid1 sdb7[1] sdb6[0]

      98112 blocks [2/2] [UU]

      bitmap: 0/12 pages [0KB], 4KB chunk

 

unused devices: <none>
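
The recovery line above updates as the rebuild proceeds; a convenient way to follow it live is to re-read the file every second:

watch -n 1 'cat /proc/mdstat'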

 

 

. Write-intent bitmap: --bitmap=internal

How it works:

In mdadm, the bitmap records which regions of the array have changed since the last synchronization, i.e. which blocks still need a resync; the array periodically writes this information out to the bitmap. Normally an array goes through a full resync after a restart, but with a bitmap only the data modified since the last sync is resynchronized. Likewise, if a disk is removed from the array, the bitmap is not cleared, so when that disk is re-added only the data that changed in the meantime is synced. A bitmap therefore shortens resync time considerably. It is stored either in the array's own metadata (internal) or in what is called an external bitmap file. Note that a bitmap is only useful on redundant levels; for RAID 0 and other non-redundant devices it is meaningless.

Note: this feature only makes sense on redundant arrays (the examples here use RAID 1).

Example: mdadm --create /dev/md0 --level=1 --raid-devices=2 -a yes -b internal /dev/sdb{1,2}
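
Once created, the internal bitmap can be inspected from any member device with --examine-bitmap (a quick check, assuming /dev/sdb1 is a member of the bitmap-enabled array above):

mdadm --examine-bitmap /dev/sdb1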

 

Enabling a bitmap on an existing RAID 1:

mdadm --grow /dev/md0 --bitmap=internal
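
The same --grow mode removes the bitmap again if it is no longer wanted:

mdadm --grow /dev/md0 --bitmap=none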

 

 

. Shared hot spares and mail notification

 

[root@server109 ~]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda{5,6} -a yes -b internal

[root@server109 ~]# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda{7,8,9} -a yes

[root@server109 ~]# mdadm /dev/md0 --add /dev/sda10

[root@server109 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sda9[2] sda8[1] sda7[0]

      196736 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

     

md0 : active raid1 sda10[2](S) sda6[1] sda5[0]

      98368 blocks [2/2] [UU]

      bitmap: 0/13 pages [0KB], 4KB chunk

unused devices: <none>

 

[root@server109 ~]# mdadm --examine --scan > /etc/mdadm.conf

[root@server109 ~]# cat /etc/mdadm.conf

ARRAY /dev/md1 level=raid5 num-devices=3 UUID=891d6352:a0a4efff:4f162d90:c3500453

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b070e059:fe2cf975:aac92394:e103a46d

   spares=1
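
An alternative worth knowing: mdadm --detail --scan builds the same ARRAY lines from the currently running arrays instead of reading every on-disk superblock; either form works for mdadm.conf:

[root@server109 ~]# mdadm --detail --scan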

 

A configuration that enables spare sharing and mail notification looks like this (edit mdadm.conf directly):

[root@server109 ~]# cat /etc/mdadm.conf

## Share Host Spares

ARRAY /dev/md1 level=raid5 num-devices=3 UUID=891d6352:a0a4efff:4f162d90:c3500453 spare-group=1

ARRAY /dev/md0 level=raid1 num-devices=2 UUID=b070e059:fe2cf975:aac92394:e103a46d spare-group=1

   spares=1

 

## Mail Notification

MAILFROM root@localhost             ## sender address; defaults to root if omitted

MAILADDR raider@localhost        ## recipient address

 

[root@server109 ~]# /etc/init.d/mdmonitor start

[root@server109 ~]# useradd raider

[root@server109 ~]# echo redhat | passwd --stdin raider
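
Before failing a disk for real, the mail path can be verified: with --test, a one-shot monitor run sends a TestMessage alert for every array listed in the config:

[root@server109 ~]# mdadm --monitor --scan --oneshot --test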

[root@server109 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sda9[2] sda8[1] sda7[0]

      196736 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]

     

md0 : active raid1 sda10[2](S) sda6[1] sda5[0]

      98368 blocks [2/2] [UU]

      bitmap: 0/13 pages [0KB], 4KB chunk

 

unused devices: <none>

[root@server109 ~]# mdadm /dev/md1 -f /dev/sda7 -r /dev/sda7

mdadm: set /dev/sda7 faulty in /dev/md1

mdadm: hot removed /dev/sda7

[root@server109 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sda10[3] sda9[2] sda8[1]

      196736 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

      [====>................]  recovery = 24.7% (25472/98368) finish=0.1min speed=6368K/sec

     

md0 : active raid1 sda6[1] sda5[0]

      98368 blocks [2/2] [UU]

      bitmap: 0/13 pages [0KB], 4KB chunk

 

[root@server109 ~]# mail -u raider

Mail version 8.1 6/6/93.  Type ? for help.

"/var/mail/raider": 1 message 1 new

>N  1 [email protected]  Tue Jan  4 04:28  35/1262  "Fail event on /dev/md1:server109.example.com"

& 1

Message 1:

From [email protected]  Tue Jan  4 04:28:48 2011

Date: Tue, 4 Jan 2011 04:28:47 +0100

From: [email protected]

To: [email protected]

Subject: Fail event on /dev/md1:server109.example.com

 

.........................

A Fail event had been detected on md device /dev/md1.

 

It could be related to component device /dev/sda7.

 

...............................

 

 

. Growing a RAID array: --grow

What do you do when the RAID runs out of space one day? How can it be grown?

 

The steps for adding a new disk are:

1. Add the new disk to the active 3-device RAID 5 (it starts as a spare):

mdadm /dev/md0 --add /dev/hda8

2. Reshape the RAID 5:

mdadm --grow /dev/md0 --raid-devices=4

3. Monitor the reshaping process and estimated time to finish:

watch -n 1 'cat /proc/mdstat'

4. Expand the FS to fill the new space:

resize2fs /dev/md0

 

[root@server109 ~]# mdadm /dev/md1 --add /dev/sda11

[root@server109 ~]# mdadm /dev/md1 --grow --raid-devices=5

[root@server109 ~]# cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]

md1 : active raid5 sda11[4] sda7[3] sda10[0] sda9[2] sda8[1]

      295104 blocks super 0.91 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]

      [>....................]  reshape =  2.0% (2812/98368) finish=1.0min speed=1406K/sec

     

md0 : active raid1 sda6[1] sda5[0]

      98368 blocks [2/2] [UU]

      bitmap: 0/13 pages [0KB], 4KB chunk

 

unused devices: <none>

 

[root@server109 ~]# resize2fs /dev/md1
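
To confirm the filesystem actually gained the space (assuming /dev/md1 is mounted at /mnt, a hypothetical mount point here):

[root@server109 ~]# df -h /mnt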

 

 

. RAID recovery

The OS lives on its own disk and the data sits on a RAID 5. After reinstalling the OS, how do you recover the array?

 

[root@server109 ~]# mdadm --examine /dev/sda8

/dev/sda8:

          Magic : a92b4efc

        Version : 0.90.00

           UUID : 891d6352:a0a4efff:4f162d90:c3500453

  Creation Time : Tue Jan  4 04:18:45 2011

     Raid Level : raid5

  Used Dev Size : 98368 (96.08 MiB 100.73 MB)

     Array Size : 393472 (384.31 MiB 402.92 MB)

   Raid Devices : 5

  Total Devices : 5

Preferred Minor : 1

 

    Update Time : Tue Jan  4 05:17:52 2011

          State : clean

  Active Devices : 5

Working Devices : 5

  Failed Devices : 0

  Spare Devices : 0

       Checksum : 7f9b882a - correct

         Events : 206

 

         Layout : left-symmetric

     Chunk Size : 64K

 

      Number   Major   Minor   RaidDevice State

this     1       8        8        1      active sync   /dev/sda8

 

   0     0       8       10        0      active sync   /dev/sda10

   1     1       8        8        1      active sync   /dev/sda8

   2     2       8        9        2      active sync   /dev/sda9

   3     3       8        7        3      active sync   /dev/sda7

   4     4       8       11        4      active sync   /dev/sda11

 

[root@server109 ~]# mdadm -A /dev/md1 /dev/sda{7,8,9,10,11}

mdadm: /dev/md1 has been started with 5 drives.

[root@server109 ~]# mount /dev/md1 /mnt/
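
If the member partitions are not known in advance, a common alternative is to rebuild the config from the on-disk superblocks and assemble by scan (assuming no stale arrays from other installations are present):

[root@server109 ~]# mdadm --examine --scan > /etc/mdadm.conf
[root@server109 ~]# mdadm --assemble --scan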

 

. Renaming a RAID array

Rename /dev/md1 to /dev/md3:

 

[root@server109 ~]# umount /mnt/

[root@server109 ~]# mdadm --stop /dev/md1

[root@server109 ~]# mdadm --assemble /dev/md3 --super-minor=1 --update=super-minor /dev/sda{7,8,9,10,11}

Note: the "1" in --super-minor=1 must match /dev/md1; if you were renaming /dev/md0 instead, it would be --super-minor=0.
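
It is worth confirming the result afterwards, and regenerating /etc/mdadm.conf as shown earlier so the new name survives a reboot:

[root@server109 ~]# mdadm --detail /dev/md3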


Reposted from emcome.iteye.com/blog/860078