Reprinted article: RAID

The mdadm command explained

 

MD (Multiple Devices) is the virtual block device framework with which Linux implements software RAID. It builds a new virtual device on top of multiple underlying block devices, uses striping to distribute data blocks evenly across the disks to improve the read/write performance of the virtual device, and offers different algorithms for data redundancy so that user data is not completely lost when a single device fails; the lost data can also be reconstructed onto a new device when the failed one is replaced.

MD currently supports linear, multipath, raid0 (striping), raid1 (mirroring), raid4, raid5, raid6, raid10 and other levels, each with different redundancy and performance characteristics; it also supports stacked arrays built from several RAID levels, such as RAID1+0 and RAID5+1.

This article explains how to use mdadm, the user-space RAID management tool, along with frequently encountered problems and their solutions. Current mainstream distributions either compile the MD driver directly into the kernel or build it as a dynamically loadable module. We can check whether the machine has the MD driver loaded with cat /proc/mdstat, check whether /proc/devices contains an md block device, and use lsmod to see whether the md module is loaded into the system.
[root@testggv ~]# cat /proc/mdstat
Personalities :
unused devices: <none>
[root@testggv ~]# cat /proc/devices | grep md
  1 ramdisk
  9 md
254 mdp
[root@testggv ~]# mdadm --version
mdadm - v2.5.4 - 13 October 2006
[root@testggv ~]#
2. Managing software RAID with mdadm
mdadm is a single standalone program that can perform all software RAID management functions. It has seven modes of operation:
Create
Create a new array from idle devices, with a metadata block written on each device.
Assemble
Assemble the devices that originally belonged to an array into an active array.
Build
Create or assemble an array that needs no metadata; the individual devices carry no metadata block.
Manage
Manage the devices of an existing array, for example adding a hot-spare disk, marking a disk as failed, or removing a failed disk from the array.
Misc
Query or modify information of an array or its devices, such as querying the status of the array or of a device.
Grow
Change the capacity of the array or the number of devices it uses.
Monitor
Monitor one or more arrays and report specified events. (A minimal command sketch for each mode follows below.)
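As a quick orientation, the basic command shape of each mode is sketched here (a minimal sketch with hypothetical device names; the sections below cover the details, and not every RAID level supports every Grow operation):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # Create
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1                            # Assemble
mdadm --build /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1    # Build (no superblock)
mdadm /dev/md0 --add /dev/sdd1                                           # Manage
mdadm --query /dev/md0                                                   # Misc (also --detail, --examine)
mdadm --grow /dev/md0 --raid-devices=3                                   # Grow
mdadm --monitor --delay=300 /dev/md0                                     # Monitor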
If the MD driver is compiled into the kernel, the kernel automatically searches at boot for partitions of type fd (Linux raid autodetect). Therefore the hd or sd disks are usually partitioned with fdisk and the partition type is then set to fd.
[root@testggv ~]# fdisk /dev/hdc
The number of cylinders for this disk is set to 25232.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-25232, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-25232, default 25232):
Using default value 25232
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
[root@testggv ~]#
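The partition type can be verified afterwards with a quick check not shown in the original (after the reboot the warning above refers to, the kernel will also use the new table):
fdisk -l /dev/hdc
The listing should show /dev/hdc1 with Id fd and System "Linux raid autodetect".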

If the MD driver is loaded as a module, the RAID arrays must be started by a control script at run time according to the user's configuration. In Fedora Core, for example, the commands that start the software RAID arrays are in the /etc/rc.d/rc.sysinit file: if the RAID configuration file /etc/mdadm.conf exists, mdadm is called to check the options in the configuration file and then start the RAID arrays.
echo "raidautorun / dev / MD0" | Nash --quiet
IF [-f /etc/mdadm.conf]; the then
/ sbin / -s the mdadm -A
Fi -A: display means a load existing -s: lookup means mdadm.conf configuration file.
Manually stop plate Chen: -S / dev / md0 #mdadm

2.1 Creating a new array
mdadm uses --create (or its abbreviation -C) to create a new array and writes some important identifying information as metadata into a specific area of each underlying device.
--level (or its abbreviation -l) specifies the RAID level of the array.
--chunk (or its abbreviation -c) specifies the size of each stripe unit in KB; the default is 64KB. The chunk size has a great influence on the read/write performance of the array under different loads.
--raid-devices (or its abbreviation -n) specifies the number of active devices in the array.
--spare-devices (or its abbreviation -x) specifies the number of hot-spare disks in the array. As soon as a disk in the array fails, the MD kernel driver automatically adds a hot-spare disk to the array and then reconstructs the data of the failed disk onto it.

Create a RAID0 device:
mdadm --create /dev/md0 --level=0 --chunk=32 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
Create a RAID1 device:
mdadm --create /dev/md0 --level=1 --chunk=128 --raid-devices=2 --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1
Create a RAID5 device:
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sd[c-g]1 --spare-devices=1 /dev/sdb1
Create a RAID10 device (the MD raid10 driver used directly):
mdadm -C /dev/md0 -l10 -n6 /dev/sd[b-g] -x1 /dev/sdh
Create a nested RAID1+0 device (a RAID0 array over several RAID1 arrays):
mdadm -C /dev/md0 -l1 -n2 /dev/sdb /dev/sdc
mdadm -C /dev/md1 -l1 -n2 /dev/sdd /dev/sde
mdadm -C /dev/md2 -l1 -n2 /dev/sdf /dev/sdg
mdadm -C /dev/md3 -l0 -n3 /dev/md0 /dev/md1 /dev/md2

The initialization time depends on the performance of the disks themselves and on the application's read/write load; use cat /proc/mdstat to query the current reconstruction speed and the expected completion time of the RAID array.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
      [===>.................]  resync = 15.3% (483072/3145536) finish=0.3min speed=120768K/sec
unused devices: <none>
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
unused devices: <none>
2.2 Using the array
The MD device can be read and written directly like an ordinary block device, and it can also be formatted with a file system.
#mke2fs -j /dev/md0
#mkdir -p /mnt/md-test
#mount /dev/md0 /mnt/md-test
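To have the file system mounted automatically at boot, an /etc/fstab entry along these lines could be added (a sketch using the example mount point above; the type is ext3 because mke2fs -j creates an ext3 file system):
/dev/md0   /mnt/md-test   ext3   defaults   0 0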
Stopping the array:
When the array is no longer used by a file system or another upper-layer storage application, --stop (or its abbreviation -S) can be used to stop the array. If the command returns a "device or resource busy" error, /dev/md0 is still in use by an upper-layer application and cannot be stopped for the moment; the application must be stopped first, which also ensures the consistency of the array's data.
[root@fc5 mdadm-2.6.3]# ./mdadm --stop /dev/md0
mdadm: fail to stop array /dev/md0: Device or resource busy
[root@fc5 mdadm-2.6.3]# umount /dev/md0
[root@fc5 mdadm-2.6.3]# ./mdadm --stop /dev/md0
mdadm: stopped /dev/md0
2.3 Assembling a previously created array
Assemble mode (--assemble, or its abbreviation -A) checks the metadata on the underlying devices and then assembles the devices into an active array. If we already know which devices the array consists of, we can specify those devices to start the array.
[root@fc5 mdadm-2.6.3]# ./mdadm -A /dev/md0 /dev/sd[b-h]
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
If the configuration file /etc/mdadm.conf exists, use the command mdadm -As /dev/md0: mdadm checks the DEVICE information in mdadm.conf, reads the metadata from each device and checks it against the ARRAY information, and starts the array if they match. If there is no /etc/mdadm.conf file and we do not know which disks make up the array, --examine (or its abbreviation -E) can be used to detect whether there is array metadata on a block device.
[root@fc5 mdadm-2.6.3]# ./mdadm -E /dev/sdi
mdadm: No md superblock detected on /dev/sdi.
[root@fc5 mdadm-2.6.3]# ./mdadm -E /dev/sdb
/ dev / sdb:
Magic: a92b4efc
Version: 00.90.00
UUID : 0cabc5e5:842d4baa:e3f6261b:a17a477a
Creation Time : Sun Aug 22 17:49:53 1999
Raid Level : raid10
Used Dev Size : 1048512 (1024.11 MiB 1073.68 MB)
Array Size : 3145536 (3.00 GiB 3.22 GB)
Raid Devices : 6
Total Devices : 7
Preferred Minor : 0
Update Time : Sun Aug 22 18:05:56 1999
State : clean
Active Devices : 6
Working Devices : 7
Failed Devices : 0
Spare Devices : 1
Checksum : 2f056516 - correct
Events : 0.4
Layout : near=2, far=1
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 8 16 0 active sync /dev/sdb
0 0 8 16 0 active sync /dev/sdb
1 1 8 32 1 active sync /dev/sdc
2 2 8 48 2 active sync /dev/sdd
3 3 8 64 3 active sync /dev/sde
4 4 8 80 4 active sync /dev/sdf
5 5 8 96 5 active sync /dev/sdg
6 6 8 112 6 spare /dev/sdh
From the output of this command we can find the UUID that uniquely identifies the array and the names of the devices it contains; we can then assemble the array with the command above, or use the UUID to identify the array during assembly. Devices with no matching metadata (e.g., /dev/sda, /dev/sda1, etc.) are automatically skipped by mdadm.

[root@fc5 mdadm-2.6.3]# ./mdadm -Av --uuid=0cabc5e5:842d4baa:e3f6261b:a17a477a /dev/md0 /dev/sd*
mdadm: looking for devices for /dev/md0
mdadm: no recogniseable superblock on /dev/sda
mdadm: /dev/sda has wrong uuid.
mdadm: no recogniseable superblock on /dev/sda1
mdadm: /dev/sda1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdi
mdadm: /dev/sdi has wrong uuid.
mdadm: /dev/sdi1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdj
mdadm: /dev/sdj has wrong uuid.
mdadm: /dev/sdj1 has wrong uuid.
mdadm: no RAID superblock on /dev/sdk
mdadm: /dev/sdk has wrong uuid.
mdadm: /dev/sdk1 has wrong uuid.
mdadm: /dev/sdb is identified as a member of /dev/md0, slot 0.
mdadm: /dev/sdc is identified as a member of /dev/md0, slot 1.
mdadm: /dev/sdd is identified as a member of /dev/md0, slot 2.
mdadm: /dev/sde is identified as a member of /dev/md0, slot 3.
mdadm: /dev/sdf is identified as a member of /dev/md0, slot 4.
mdadm: /dev/sdg is identified as a member of /dev/md0, slot 5.
mdadm: /dev/sdh is identified as a member of /dev/md0, slot 6.
mdadm: added /dev/sdc to /dev/md0 as 1
mdadm: added /dev/sdd to /dev/md0 as 2
mdadm: added /dev/sde to /dev/md0 as 3
mdadm: added /dev/sdf to /dev/md0 as 4
mdadm: added /dev/sdg to /dev/md0 as 5
mdadm: added /dev/sdh to /dev/md0 as 6
mdadm: added /dev/sdb to /dev/md0 as 0
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
Configuration file:
The main role of /etc/mdadm.conf, as the default configuration file, is to make it easy to track the software RAID configuration; in particular, the monitoring and event-reporting options can be configured there. The Assemble command can also use --config (or its abbreviation -c) to specify a configuration file. We can usually build the configuration file with the following commands:
#echo DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1 > /etc/mdadm.conf
#mdadm --detail --scan >> /etc/mdadm.conf
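For illustration, the resulting file might then look roughly like this (a sketch: the UUID is the example value from the --examine output above, and the exact ARRAY fields depend on the mdadm version and the running arrays):
DEVICE /dev/sdc1 /dev/sdb1 /dev/sdd1
ARRAY /dev/md0 level=raid10 num-devices=6 UUID=0cabc5e5:842d4baa:e3f6261b:a17a477a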

When arrays are started from the configuration file, mdadm queries the devices and array information in the configuration file and then starts all RAID arrays that can run. If a specific array device name is given, only the corresponding array is started.
[root@fc5 mdadm-2.6.3]# ./mdadm -As
mdadm: /dev/md1 has been started with 3 drives.
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid0] [raid10]
md0 : active raid10 sdb[0] sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
md1 : active raid0 sdi1[0] sdk1[2] sdj1[1]
7337664 blocks 32k chunks
unused devices: <none>
[root@fc5 mdadm-2.6.3]# ./mdadm -S /dev/md0 /dev/md1
mdadm: stopped /dev/md0
mdadm: stopped /dev/md1
[root@fc5 mdadm-2.6.3]# ./mdadm -As /dev/md0
mdadm: /dev/md0 has been started with 6 drives and 1 spare.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid0] [raid10]
md0 : active raid10 sdb[0] sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1]
      3145536 blocks 64K chunks 2 near-copies [6/6] [UUUUUU]
unused devices: <none>
2.4 Querying array status
With cat /proc/mdstat we can view the status information of all running RAID arrays. On the first line, the first item is the MD device name; active or inactive indicates whether the array can be read and written; next comes the RAID level of the array, followed by the member devices, where the number in square brackets [] is the device's slot number in the array, (S) marks a hot-spare disk, and (F) marks a disk in the faulty state. The second line first gives the array size in KB, then the chunk size, then the layout type (the layout differs between RAID levels); [6/6] and [UUUUUU] mean that all 6 of the array's 6 disks are running normally, whereas [5/6] and [_UUUUU] mean that only 5 of the array's 6 disks are running normally and the disk at the underscored position is faulty.
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid5 sdh[6](S) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[0]
      5242560 blocks level 5, 64k chunk, algorithm 2 [6/6] [UUUUUU]
unused devices: <none>
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 -f /dev/sdh /dev/sdb
mdadm: set /dev/sdh faulty in /dev/md0
mdadm: set /dev/sdb faulty in /dev/md0
[root@fc5 mdadm-2.6.3]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md0 : active raid5 sdh[6](F) sdg[5] sdf[4] sde[3] sdd[2] sdc[1] sdb[7](F)
      5242560 blocks level 5, 64k chunk, algorithm 2 [6/5] [_UUUUU]
unused devices: <none>
We can also use mdadm to view brief information about a specified array (--query, or its abbreviation -Q) and detailed information (--detail, or its abbreviation -D). The detailed information includes the RAID version, creation time, RAID level, array capacity, available space, number of devices, superblock state, update time, UUID, information on each device, RAID level type, layout algorithm, and chunk size. The state of a device can be active, sync, spare, faulty, rebuilding, removing, and so on.
[root@fc5 mdadm-2.6.3]# ./mdadm --query /dev/md0
/dev/md0: 3.00GiB raid10 6 devices, 1 spare. Use mdadm --detail for more detail.
[root@fc5 mdadm-2.6.3]# ./mdadm --detail /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Sun Aug 22 17:49:53 1999
Raid Level : raid10
Array Size : 3145536 (3.00 GiB 3.22 GB)
Used Dev Size : 1048512 (1024.11 MiB 1073.68 MB)
Raid Devices : 6
Total Devices : 7
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sun Aug 22 21:55:02 1999
State : clean
Active Devices : 6
Working Devices : 7
Failed Devices : 0
Spare Devices : 1
Layout : near=2, far=1
Chunk Size : 64K
UUID : 0cabc5e5:842d4baa:e3f6261b:a17a477a
Events : 0.122
Number Major Minor RaidDevice State
0 8 16 0 active sync /dev/sdb
1 8 32 1 active sync /dev/sdc
2 8 48 2 active sync /dev/sdd
3 8 64 3 active sync /dev/sde
4 8 80 4 active sync /dev/sdf
5 8 96 5 active sync /dev/sdg
6 8 112 - spare /dev/sdh

2.5 Managing an array
In manage mode, mdadm can add and remove disks in a running array; it is used to mark failed disks, add hot-spare disks, and remove failed disks from the array. Use --fail (or its abbreviation -f) to mark a disk as failed.
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --fail /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0
When a disk has failed, use --remove (or its abbreviation -r) to remove the disk from the array; however, if the device is still being used by the array, it cannot be removed.
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --remove /dev/sdb
mdadm: hot removed /dev/sdb
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --remove /dev/sde
mdadm: hot remove failed for /dev/sde: Device or resource busy
If the array has a spare disk, the data of the failed disk is automatically reconstructed onto the spare disk:
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 -f /dev/sdb ; cat /proc/mdstat
mdadm: set /dev/sdb faulty in /dev/md0
Personalities : [raid0] [raid10]
md0 : active raid10 sdh[6] sdb[7](F) sdc[0] sdg[5] sdf[4] sde[3] sdd[2]
      3145536 blocks 64K chunks 2 near-copies [6/5] [U_UUUU]
      [=======>.............]  recovery = 35.6% (373888/1048512) finish=0.1min speed=93472K/sec
unused devices: <none>
If the array has no spare disk, --add (or its abbreviation -a) can be used to add a hot-spare disk:
[root@fc5 mdadm-2.6.3]# ./mdadm /dev/md0 --add /dev/sdh
mdadm: added /dev/sdh
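Putting the manage-mode options together, a typical replacement of a failed disk could look like this (a sketch; /dev/sdb stands for the failed disk and /dev/sdi for a hypothetical replacement):
# mark the disk as failed (skip if MD has already marked it)
mdadm /dev/md0 --fail /dev/sdb
# remove the failed disk from the array
mdadm /dev/md0 --remove /dev/sdb
# add the replacement disk; reconstruction starts automatically
mdadm /dev/md0 --add /dev/sdi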
2.6 Monitoring arrays
mdadm's monitor mode can be used to watch RAID arrays: the monitoring process periodically checks whether specified events have occurred and handles them according to the configuration. For example, it can send e-mail to the administrator when a disk device in the array has problems, automatically replace a failed disk with a spare disk through a callback program, and log all monitoring events to the system log. The events currently supported by mdadm are RebuildStarted, RebuildNN (NN is 20, 40, 60, or 80), RebuildFinished, Fail, FailSpare, SpareActive, NewArray, DegradedArray, MoveSpare, SparesMissing, and TestMessage.
The following configuration makes the mdadm monitoring process query the MD device every 300 seconds, send mail to the specified user when an array error occurs, handle events with the handler program, and report events to the system log. The --daemonise parameter (or its abbreviation -f) makes the program run continuously in the background. Sending mail requires a running sendmail program, and if the mail address is outside the local network you should first test that mail can be delivered.
[root@fc5 mdadm-2.6.3]# ./mdadm --monitor --mail=root@localhost --program=/root/md.sh --syslog --delay=300 /dev/md0 --daemonise
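The handler given with --program is run when an event occurs and receives the event name, the md device, and for some events the affected member device as arguments. The original does not show /root/md.sh, so the following is only a minimal sketch of what such a script might do:
#!/bin/sh
# $1 = event name (e.g. Fail), $2 = md device, $3 = member device (may be empty)
echo "$(date): mdadm event $1 on $2 $3" >> /var/log/md-events.log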
---------------------
Author: yuesichiu
Source: CSDN
Original: https://blog.csdn.net/yuesichiu/article/details/8502680
Copyright: This is the blogger's original article; please include a link to the original when reprinting.
