RAID volume performance testing

RAID (Redundant Array of Independent Disks)
RAID is a technology that combines multiple separate physical drives in different ways into one disk group (a logical drive), providing higher storage performance than a single disk together with data redundancy. The different ways of combining the disks are called RAID levels.
The redundancy means that when user data is damaged, the damaged data can be restored from the redundant information, protecting the user's data. To the user, the disk group looks like a single hard drive that can be partitioned, formatted and so on; in short, it is operated exactly like a single disk. The difference is that a disk array is much faster than a single drive and can provide automatic data protection.

RAID technology offers two main benefits: speed and data safety.

RAID + iSCSI


RAID level | Minimum disks | Fault tolerance | Max. available capacity | Read performance | Write performance | Security | Purpose and typical applications
0          | 2             | 0               | n                       | n                | n                 | None: if one disk fails, the whole array fails | Pursuit of maximum capacity and speed; 3D real-time rendering, video editing cache
1          | 2             | n-1             | 1                       | n                | 1                 | Highest  | Pursuit of maximum security; personal and enterprise backup
5          | 3             | 1               | n-1                     | n-1              | n-1               | High     | Pursuit of maximum capacity on a minimum budget; personal and enterprise backup
10         | 4             | n/2             | n/2                     | n                | n/2               | High     | Combines the advantages of RAID 0 and RAID 1, theoretically faster; large databases, servers
(n = number of disks in the array)
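
As a quick worked example of the capacity column (assuming four 1 TB disks, so n = 4): RAID 0 yields about 4 TB of usable space, RAID 1 about 1 TB, RAID 5 about 3 TB, and RAID 10 about 2 TB.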


Check whether the disks are already part of a RAID group:
# mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde

 


Start creating RAID0
# mdadm -C /dev/md0 -a yes -l0 -n2 /dev/sd[b,c]1
# cat /proc/mdstat
# mdadm -D /dev/md0
Create /etc/mdadm.conf:
# echo DEVICE /dev/sd{b,c}1 >> /etc/mdadm.conf
# mdadm -Ds >> /etc/mdadm.conf
# cat /etc/mdadm.conf
DEVICE /dev/sdb1 /dev/sdc1
ARRAY /dev/md0 level=raid0 num-devices=2 UUID=5160ea40:cb2b44f1:c650d2ef:0db09fd0
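
Since RAID0 is used for speed, a rough sequential-throughput check can be run once the array is formatted and mounted. The following is only a sketch: the mount point /mnt/raid0 and the 1 GB test size are arbitrary choices, and dd only measures sequential I/O.
# mkfs.ext4 /dev/md0
# mkdir /mnt/raid0
# mount /dev/md0 /mnt/raid0
# dd if=/dev/zero of=/mnt/raid0/testfile bs=1M count=1024 oflag=direct   # rough sequential write test
# dd if=/mnt/raid0/testfile of=/dev/null bs=1M iflag=direct              # rough sequential read test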


Start creating RAID1
# mdadm -C /dev/md1 -a yes -l1 -n2 /dev/sd[d,e]1
After creating RAID1, add it to the RAID configuration file /etc/mdadm.conf:
# echo DEVICE /dev/sd{d,e}1 >> /etc/mdadm.conf
# mdadm -Ds >> /etc/mdadm.conf
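
A RAID1 mirror first has to finish its initial resync, so it is worth watching the sync progress before measuring anything; afterwards a coarse read comparison against a single member disk can be made. This is only a sketch, and hdparm figures are a rough indication at best.
# cat /proc/mdstat            # wait until the resync of md1 reaches 100%
# hdparm -t /dev/md1          # buffered read speed of the mirror
# hdparm -t /dev/sdd          # compare with a single member disk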


Begin creating RAID5
# mdadm -C /dev/md5 -a yes -l5 -n3 -x1 /dev/sd[f,g,h,i]1
After creating RAID5, add it to the RAID configuration file /etc/mdadm.conf:
# echo DEVICE /dev/sd{f,g,h,i}1 >> /etc/mdadm.conf
# mdadm -Ds >> /etc/mdadm.conf
# mdadm /dev/md5 -a /dev/sdh1     # add a spare disk
# mdadm -G /dev/md5 -n4           # turn the spare disk into a data disk (grow to 4 active devices)
Note: after growing the array, extend the file system:
# resize2fs /dev/md5
Then update the RAID configuration file /etc/mdadm.conf.
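
One possible way to bring /etc/mdadm.conf back in line after the grow (a sketch, assuming the stale md5 entry should simply be replaced):
# vim /etc/mdadm.conf                        # delete the old ARRAY line for md5
# mdadm -Ds | grep md5 >> /etc/mdadm.conf    # append the updated definition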


Create RAID10
# mdadm -C /dev/md1 -a yes -l1 -n2 /dev/sd[c-d]1
# mdadm -C /dev/md2 -a yes -l1 -n2 /dev/sd[e-f]1
# cat /proc/mdstat
# mdadm -C /dev/md10 -a yes -l0 -n2 /dev/md1 /dev/md2
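
Note that mdadm can also build the equivalent layout in one step using its native RAID10 level; a minimal sketch, assuming the four partitions /dev/sd[c-f]1 are unused:
# mdadm -C /dev/md10 -a yes -l10 -n4 /dev/sd[c-f]1
# cat /proc/mdstat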


Create RAID6
# mdadm -C /dev/md1 -a yes -l1 -n2 /dev/sd[b-e]1
# mdadm --add /dev/md0 /dev/sd[f-i]      # add spare disks
mdadm --create --auto=yes /dev/md0 --level=5 --raid-devices=4 --spare-devices=1 /dev/sd[b-f]


Parameter description:
--create                  # create a new RAID array
--auto=yes /dev/md0       # the newly created software RAID device is md0; the md number can be 0-9
--level=5                 # RAID level of the array; here a RAID5 is created
--raid-devices            # number of disks used as active members of the array
--spare-devices           # number of disks used as spare (hot-spare) disks
/dev/sd[b-f]              # disks used by the array; can also be written as /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
mdadm -C /dev/md0 -a yes -l5 -n4 -x1 /dev/sd[b-f]   # another way of writing the command above: create a RAID5 from four data disks plus one spare
cat /proc/mdstat                           # watch the creation progress
mdadm --detail /dev/md0                    # view detailed information
mdadm -Q /dev/md0                          # view summary information
mkfs.ext4 /dev/md0                         # create a file system
mkdir /mnt/raid5                           # create the mount point
mount /dev/md0 /mnt/raid5/                 # mount and verify the file system
mdadm --detail --scan >> /etc/mdadm.conf   # record the UUID and array definition in the configuration file
vim /etc/mdadm.conf                        # must be edited as below, otherwise md0 becomes md127 after a reboot
ARRAY /dev/md/md0 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=81ab08a0:3af1700c:5bdd7f83:bf542889
vim /etc/fstab                             # add an automatic mount entry
mdadm --add /dev/md0 /dev/sdg
mdadm --manage /dev/md0 --add /dev/sdg     # add a new hard disk
mdadm --grow /dev/md0 -n5                  # grow the RAID5 onto the new hard disk
cat /proc/mdstat                           # check that the new disk sdg has been added
resize2fs -f /dev/md0                      # resize the file system to match the new array size
df -Th                                     # check the current file system size
mdadm --manage /dev/md0 --fail /dev/sdd    # simulate a hard disk failure
mdadm --detail /dev/md0                    # view the RAID5 rebuild process
cat /proc/mdstat                           # view the RAID5 rebuild process
ls /mnt/raid5/                             # the RAID5 data is still available
mdadm --manage /dev/md0 --remove /dev/sdd  # remove the damaged hard disk
mdadm --manage /dev/md0 --add /dev/sdg     # add a new hard disk
mdadm --detail /dev/md0                    # view the current RAID5 status

 

---------------------------------------- Close RAID ----------------------------------------
umount /dev/md0
vim /etc/fstab           # comment out the automatic mount entry
vim /etc/mdadm.conf      # comment out the array in the configuration file
mdadm --stop /dev/md0    # stop the RAID
mdadm --misc --zero-superblock /dev/sd[b-f]   # release the disks that were used by the RAID
RAID5 provides the characteristics of both RAID0 and RAID1 at the same time:
RAID0: concurrent (striped) data reads
RAID1: mirrored disks


**************** iSCSI storage server *****************
Server: LVM underneath + the iSCSI target service
Client: iSCSI login + LVM
Server:
yum install scsi-target-utils -y
fdisk -l
fdisk -cu /dev/sdb
partx -a /dev/sdb
pvcreate /dev/sdb1 /dev/sdb2
pvdisplay
pvs
vgcreate vg00 /dev/sdb1
vgdisplay
vgs
lvcreate -L 500M -n lv00 vg00
lvdisplay
lvs
yum install iscsi* -y
yum install perl* -y
vim /etc/tgt/targets.conf
<target iqn.2016-08.com.example:lv00>
    backing-store /dev/vg00/lv00
    initiator-address 192.168.56.201
</target>
Note:
target: the name should be unique within the same subnet; the naming standard is:
iqn.yyyy-mm.<reversed domain name>[:identifier]
where:
iqn: stands for "iSCSI Qualified Name", abbreviated iqn.
yyyy-mm: the year and month; here 2016-08.
reversed domain name: the reverse domain name; here com.example.
identifier: the identification string; here lv00.
backing-store: specifies the storage device; this generally refers to something other than a real physical disk, such as an LVM volume, a partition, or a RAID array.
initiator-address: specifies the client address that is allowed to use the target.
Start the service: /etc/init.d/tgtd start
netstat -anput | grep 3260
iptables -F
/etc/init.d/iptables save
tgtadm --lld iscsi --op show --mode target
When creating iSCSI objects with the tgtadm tool, the key command-line options are as follows:
-L, --lld: specifies the driver type, e.g. "-L iscsi" for iSCSI storage.
-o, --op: specifies the operation type, e.g. "-o new" to create, "-o delete" to delete, "-o show" to display information.
-m, --mode: specifies the management object, e.g. "-m target" for an iSCSI target.
-t, --tid: specifies the object ID, e.g. "-t 1" for the first object.
-T, --targetname: specifies the iSCSI target name.
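
For reference, the target defined in targets.conf above could also be created by hand with these options; this is only a sketch, and such manually created targets are lost when tgtd restarts unless they are written back into the configuration file:
tgtadm --lld iscsi --op new --mode target --tid 1 --targetname iqn.2016-08.com.example:lv00
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 --backing-store /dev/vg00/lv00
tgtadm --lld iscsi --op bind --mode target --tid 1 --initiator-address 192.168.56.201
tgtadm --lld iscsi --op show --mode target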


Client:
yum install iscsi-initiator-utils lsscsi -y
/etc/init.d/iscsi start
iscsiadm -m discovery -t st -p 192.168.56.200
iscsiadm -m node -T iqn.2016-08.com.example:lv00 -p 192.168.56.200 -l
ll /dev/disk/by-path/
/etc/init.d/iscsi status
dmesg | tail
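
The lsscsi package installed above can also confirm that the iSCSI LUN has appeared as a new local disk (the device name depends on what disks the client already has, so /dev/sdb here is only an assumption):
lsscsi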
Log in automatically at boot:
# iscsiadm -m node -T iqn.1997-05.com.test:raid -p 192.168.1.1:3260 --op update -n node.startup -v automatic
fdisk -cu /dev/sdb
partx -a /dev/sdb
pvcreate /dev/sdb1
vgcreate vg-data /dev/sdb1
lvcreate -L 200M -n lv-data vg-data
mkfs.ext4 /dev/vg-data/lv-data
mount /dev/vg-data/lv-data /data/
df -Th
echo "it is a test file" >> /data/test.txt
blkid
vim /etc/fstab
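
Because the logical volume sits on an iSCSI disk, the fstab entry should include the _netdev option so the mount waits for the network and iSCSI services; a sketch of the line that might be added (the /data mount point matches the mount command above):
/dev/vg-data/lv-data    /data    ext4    defaults,_netdev    0 0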

Expanding the RAID disk array:

Display the devices that currently make up the array (four devices):

[root@svr /]# cat /proc/mdstat

Add /dev/sdg to the array /dev/md0:

 

[root@svr /]# mdadm --add /dev/md0 /dev/sdg

Change the RAID5 array /dev/md0 to use five devices:

[root@svr /]# mdadm --grow /dev/md0 -n5

Display the array composition again: /dev/md0 now includes the new device, and the reshape required to complete the expansion takes about 7.7 minutes.

[root@svr /]# cat /proc/mdstat

Wait for the expansion to complete.

Then execute the following command:

[root@svr /]# resize2fs -f /dev/md0

Verify that the capacity has been expanded:

 

[root@svr /]# df -hT

 

Verify that the spare disk works:

Simulate a failed disk in the RAID5 array to test the spare-disk function (RAID5 tolerates one failed disk; the spare disk configured earlier immediately replaces the damaged one, the RAID rebuilds, and the data remains protected):

First look at the current state of /dev/md0:

 

[root@svr /]# mdadm --detail /dev/md0
[root@svr /]# cat /proc/mdstat

Use the following command to mark the disk sdd as failed:

[root@svr ~]# mdadm --manage /dev/md0 --fail /dev/sdd

Then look at the state of /dev/md0 again:

 

[root@svr /]# mdadm --detail /dev/md0
[root@svr /]# cat /proc/mdstat

Change into the directory where the RAID is mounted (/raid5); the RAID can still be used, which shows that the spare disk has taken over.

[root@svr /]# cd /raid5
[root@svr raid5]# touch 1.txt
[root@svr raid5]#

 

Delete the failed disk and insert a new disk:

First remove the damaged disk sdd, the command is as follows:

 

[root@svr raid5]# mdadm --manage /dev/md0 --remove /dev/sdd

Add a new disk as a spare disk, the command is as follows:

 

[root@svr raid5]# mdadm --manage /dev/md0 --add /dev/sdg

Finally, check the array again by executing the following command:

 

[root@svr /]# mdadm --detail /dev/md0

 

How to shut down the software RAID:

When the RAID that has been set up is no longer needed, it can be shut down as follows:

1. Unmount /dev/md0, and delete or comment out its entry in the /etc/fstab configuration file:

 

 

[root@svr ~]# umount /dev/md0
[root@svr ~]# vi /etc/fstab
#/dev/md0        /mnt/raid5        ext4        defaults        0 0

2. Comment out or remove the array entry in /etc/mdadm.conf:

 

 

[root@svr ~]# vi /etc/mdadm.conf
#ARRAY /dev/md0 UUID=d58ed27d:00ce5cf5:b26ed1e9:879d0805

3. Stop the RAID device:

 

[root@svr ~]# mdadm --stop /dev/md0

4. Clear the RAID superblock from all member disks:

 

[root@svr ~]# mdadm --misc --zero-superblock /dev/sd[b-f]

At this point the RAID information on the disks has been deleted, and the RAID will not come back after a reboot.

 

RAID5 provides the characteristics of both RAID0 and RAID1 at the same time:

RAID0: concurrent (striped) data reads

RAID1: mirrored disk array


Origin www.cnblogs.com/klb561/p/11294199.html