Outline
To meet different needs for performance and redundancy, LVM supports the following three types of logical volume:
- Linear Logical Volume
- Striped Logical Volume
- Mirrored Logical Volume
Linear Logical Volume
A plain lvcreate command creates a linear logical volume by default. The PEs of a linear LV may come from a single PV or from several. Normally PEs are allocated from the first PV; once that PV's PEs are exhausted, allocation continues sequentially on the second PV, then the third, and so on. You can also pass explicit PV (and even PE-range) arguments to lvcreate to spread a linear LV's segments across specific PVs, but note that if any one of those PVs fails, the whole linear LV may become unusable. The size of a linear LV can be given directly with -L, or as a number of PEs with -l. When data is written to a linear LV, it goes to the first PV first; only when the space allocated on the first PV runs out does writing continue on the second PV.
A linear LV only satisfies the need for flexible allocation; it provides neither performance nor redundancy benefits. It is nevertheless the most common type of volume, and it can later be converted into a mirrored LV with the lvconvert command to gain redundancy.
root@hunk-virtual-machine:/home/hunk# lvcreate -l 100 -n linearlv VolGroup1 /dev/sdc:1280-1305 /dev/sdd:1280-1305 /dev/sde:1280-1305 /dev/sdf:1280-1305
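As a side note, -L and -l express the same size in different units. Since the VG in this article uses the default 4 MiB extent size (the lvdisplay output later shows 100 LEs = 400 MiB), the two invocations below are equivalent (a sketch only, to be run against your own VG):

```shell
# Sketch: two equivalent ways to size an LV in a VG whose PE size is 4 MiB.
lvcreate -l 100  -n linearlv VolGroup1   # 100 extents x 4 MiB = 400 MiB
lvcreate -L 400M -n linearlv VolGroup1   # same size, given in bytes
```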
Striped Logical Volume
The underlying storage layout of a striped LV is similar to RAID 0: it spans several PVs. The number of PVs to stripe across is given with -i (it obviously cannot exceed the number of PVs in the VG), and the maximum size of a striped LV is limited by the PV with the fewest free PEs.
Striping means that each participating PV is divided into equally sized chunks (also called stripe units); the chunks at the same position on each PV together form a stripe. For example, in the figure below (from the Red Hat 6 official documentation) there are three PVs: the three red chunks labeled 1, 2, 3 form stripe 1, and chunks 4, 5, 6 form stripe 2. The chunk size can be specified with -I or --stripesize, but it cannot exceed the PE size.
When data is written to a striped LV, it is split into chunk-sized pieces, which are then written to the PVs in turn. That way several underlying disk drives process I/O requests concurrently, and the aggregate I/O performance is multiplied. Using the same figure: if a 4 MB block of data is written to an LV whose stripe size is set to 512 KB, LVM cuts it into eight chunks, denoted chunk1, chunk2, ..., and writes them to the PVs in the following order:
- chunk1 written to PV1
- chunk2 written to PV2
- chunk3 written to PV3
- chunk4 written to PV1
- ...
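The round-robin placement above is simple modular arithmetic. Here is a toy shell illustration (not LVM code) of how the 8 chunks of that 4 MB write map onto 3 PVs:

```shell
# Toy illustration (not LVM code): round-robin chunk placement on a striped LV.
stripes=3                   # number of PVs being striped across
chunks=$(( 4096 / 512 ))    # a 4 MiB write / 512 KiB stripe size = 8 chunks
i=0
while [ "$i" -lt "$chunks" ]; do
  echo "chunk$(( i + 1 )) -> PV$(( i % stripes + 1 ))"
  i=$(( i + 1 ))
done
# prints chunk1 -> PV1, chunk2 -> PV2, chunk3 -> PV3, chunk4 -> PV1, ...
```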
Because LVM cannot tell whether several physical volumes come from the same underlying disk, a striped LV built from several partitions of the same physical disk ends up sending all the chunks of each data block to a single disk drive. In that case the striped LV does not improve performance at all; it actually degrades it. The essence of how a striped LV improves I/O performance is therefore that it lets more underlying disk drives process I/O requests in parallel, not merely that it spreads I/O across several PVs on the surface.
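Whether the PVs really sit on distinct physical disks can be checked before creating the striped LV. A small sketch (assumes lsblk with the PKNAME column, which prints a partition's parent disk):

```shell
# Sketch: print the parent disk of every PV, to spot PVs sharing one physical disk.
pvs --noheadings -o pv_name | while read -r pv; do
  parent=$(lsblk -dno PKNAME "$pv")        # empty when the PV is a whole disk
  echo "$pv -> ${parent:-$(basename "$pv")}"
done
```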
A striped LV mainly satisfies performance requirements; it carries no redundancy at all and therefore no fault tolerance: if a single disk is damaged, data is lost.
root@hunk-virtual-machine:/home# lvcreate -L 20G --stripes 4 --stripesize 256 --name stripevol VolGroup1
Mirrored Logical Volume
A mirrored LV keeps redundant copies across individual PVs, much like RAID 1; the number of extra copies is given with -m. A mirrored LV provides redundancy and effectively removes the single point of failure of one disk, but it does nothing for performance. Linear and mirrored LVs can be converted into each other directly with the lvconvert tool, and the number of mirror copies can also be changed after creation; see the man page for details.
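For example (a sketch only; the LV names are taken from the examples in this article), converting back and forth with lvconvert looks like this:

```shell
# Sketch: switching layouts with lvconvert (LV names from the examples above).
lvconvert -m 1 VolGroup1/linearlv   # linear -> mirrored, one extra copy
lvconvert -m 2 VolGroup1/linearlv   # raise the number of extra copies to two
lvconvert -m 0 VolGroup1/mirrorvol  # drop all mirrors, back to linear
```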
root@hunk-virtual-machine:/home/hunk# lvcreate -l 100 -m1 -n mirrorvol VolGroup1
  Logical volume "mirrorvol" created.
root@hunk-virtual-machine:/home/hunk# lvdisplay /dev/VolGroup1/mirrorvol -m
  --- Logical volume ---
  LV Path                /dev/VolGroup1/mirrorvol
  LV Name                mirrorvol
  VG Name                VolGroup1
  LV UUID                YxgfYi-c7nK-wk4v-rlu1-vRdh-MTMb-uVfl2v
  LV Write Access        read/write
  LV Creation host, time hunk-virtual-machine, 2018-11-29 01:39:44 +0800
  LV Status              available
  # open                 0
  LV Size                400.00 MiB
  Current LE             100
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:8

  --- Segments ---
  Logical extents 0 to 99:
    Type                raid1
    Monitoring          monitored
    Raid Data LV 0
      Logical volume    mirrorvol_rimage_0
      Logical extents   0 to 99
    Raid Data LV 1
      Logical volume    mirrorvol_rimage_1
      Logical extents   0 to 99
    Raid Metadata LV 0  mirrorvol_rmeta_0
    Raid Metadata LV 1  mirrorvol_rmeta_1
Testing Linear / Striped LVs
Preparing several disks
Four virtual disks, each 10 GB in size, are added to the test environment.
Create a 20 GB linear LV from the four virtual disks. From the LV details queried later on, it can be seen that this LV actually spans 3 of the PVs.
root@hunk-virtual-machine:/home/hunk# pvcreate /dev/sd[cdef]
  Physical volume "/dev/sdc" successfully created
  Physical volume "/dev/sdd" successfully created
  Physical volume "/dev/sde" successfully created
  Physical volume "/dev/sdf" successfully created
root@hunk-virtual-machine:/home/hunk# vgcreate VolGroup1 /dev/sd[cdef]
  Volume group "VolGroup1" successfully created
root@hunk-virtual-machine:/home/hunk# lvcreate -L 20G -n linnervol VolGroup1
  Logical volume "linnervol" created.
root@hunk-virtual-machine:/home/hunk# mkfs.ext4 /dev/VolGroup1/linnervol
root@hunk-virtual-machine:/home/hunk# mount /dev/VolGroup1/linnervol /volumetest
root@hunk-virtual-machine:/home/hunk# df -h |grep linnervol
/dev/mapper/VolGroup1-linnervol 20G 44M 19G 1% /volumetest
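To confirm that the 20 GB LV really spans three of the four 10 GB PVs, the per-PV allocation can be inspected (a sketch; pv_used is a standard pvs field):

```shell
# Sketch: inspect which PVs the linear LV's segments were allocated from.
lvdisplay -m /dev/VolGroup1/linnervol    # per-segment PV mapping
pvs -o pv_name,vg_name,pv_size,pv_used   # how much of each PV is allocated
```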
Linear LV test
Now use bonnie++ to simulate I/O load and keep writing data to this LV.
root@hunk-virtual-machine:/volumetest# bonnie++ -n 0 -u 0 -r `free -m | grep 'Mem:' | awk '{print $2}'` -s $(echo "scale=0;`free -m | grep 'Mem:' | awk '{print $2}'`*2" | bc -l) -f -b -d /volumetest/
Using uid:0, gid:0.
Writing intelligently...
Monitoring the I/O rates of the four disks with bwm-ng in a new window, we find that only sdc is receiving I/O requests while the other disks sit idle; sdc alone is kept busy.
bwm-ng -i disk -I sdc,sdd,sde,sdf

bwm-ng v0.6 (probing every 0.500s), press 'h' for help
  input: disk IO  type: rate
  \ iface     Rx          Tx            Total
==============================================================================
  sdc:    0.00 KB/s   12263.47 KB/s   12263.47 KB/s
  sdd:    0.00 KB/s       0.00 KB/s       0.00 KB/s
  sde:    0.00 KB/s       0.00 KB/s       0.00 KB/s
  sdf:    0.00 KB/s       0.00 KB/s       0.00 KB/s
------------------------------------------------------------------------------
  total:  0.00 KB/s   12263.47 KB/s   12263.47 KB/s
We keep watching the amount of data written to the LV. Once more than 10 GB has been written, sdc stops processing I/O requests, because its allocated space is already full, and sdd takes over, continuously handling the I/O requests. Around the point where the written data passes 10 GB there is actually a transition period during which both sdc and sdd are processing I/O; this is caused by buffering.
root@hunk-virtual-machine:/home/hunk# df -h |grep linner
/dev/mapper/VolGroup1-linnervol 20G 11G 8.1G 57% /volumetest

bwm-ng v0.6 (probing every 0.500s), press 'h' for help
  input: disk IO  type: rate
  | iface     Rx          Tx            Total
==============================================================================
  sdc:    0.00 KB/s       0.00 KB/s       0.00 KB/s
  sdd:    0.00 KB/s   12263.47 KB/s   12263.47 KB/s
  sde:    0.00 KB/s       0.00 KB/s       0.00 KB/s
  sdf:    0.00 KB/s       0.00 KB/s       0.00 KB/s
------------------------------------------------------------------------------
  total:  0.00 KB/s   12263.47 KB/s   12263.47 KB/s
Striped LV test
First remove the linear LV used earlier:
root@hunk-virtual-machine:/home# lvremove /dev/VolGroup1/linnervol
Do you really want to remove and DISCARD active logical volume linnervol? [y/n]: y
  Logical volume "linnervol" successfully removed
Creating a striped LV
root@hunk-virtual-machine:/home# lvcreate -L 20G --stripes 4 --stripesize 256 --name stripevol VolGroup1
WARNING: ext4 signature detected on /dev/VolGroup1/stripevol at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/VolGroup1/stripevol.
  Logical volume "stripevol" created.
root@hunk-virtual-machine:/home# lvdisplay /dev/VolGroup1/stripevol -m
  --- Logical volume ---
  LV Path                /dev/VolGroup1/stripevol
  LV Name                stripevol
  VG Name                VolGroup1
  LV UUID                z0MGOg-g6JL-hiE8-9Gt0-RZAJ-K29m-I6tcrS
  LV Write Access        read/write
  LV Creation host, time hunk-virtual-machine, 2018-11-27 01:45:41 +0800
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           252:6

  --- Segments ---
  Logical extents 0 to 5119:    # the striped LV's PEs are spread evenly across the four PVs
    Type                striped
    Stripes             4
    Stripe size         256.00 KiB
    Stripe 0:
      Physical volume   /dev/sdc
      Physical extents  0 to 1279
    Stripe 1:
      Physical volume   /dev/sdd
      Physical extents  0 to 1279
    Stripe 2:
      Physical volume   /dev/sde
      Physical extents  0 to 1279
    Stripe 3:
      Physical volume   /dev/sdf
      Physical extents  0 to 1279

root@hunk-virtual-machine:/home# mkfs.ext4 /dev/VolGroup1/stripevol
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: 51dbdea0-48fc-4324-9974-42443e424aa0
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

root@hunk-virtual-machine:/home# mount /dev/VolGroup1/stripevol /volumetest/
root@hunk-virtual-machine:/home# df -h |grep stripe
/dev/mapper/VolGroup1-stripevol 20G 44M 19G 1% /volumetest
The striped LV is tested in the same way. The testing here is rough and ignores many factors: for the linear LV, the bwm-ng I/O rates shown were values sampled every 0.5 s, while for the striped LV the figures are I/O rates averaged over 30 s. But what we want here is not an accurate I/O rate, so there is no need to account for those factors. It can clearly be seen that the four disks process I/O requests in parallel; that is, the I/O requests sent to the striped LV are eventually distributed across several underlying disks, so the aggregate I/O throughput is several times higher.
bwm-ng -i disk -I sdc,sdd,sde,sdf

bwm-ng v0.6 (probing every 0.500s), press 'h' for help
  input: disk IO  type: avg (30s)    # averaged over a 30 s sampling window
  / iface     Rx          Tx            Total
==============================================================================
  sdc:    0.13 KB/s   10010.92 KB/s   10011.05 KB/s
  sdd:    0.00 KB/s   10174.32 KB/s   10174.32 KB/s
  sde:    0.00 KB/s    6563.85 KB/s    6563.85 KB/s
  sdf:    0.00 KB/s    6113.09 KB/s    6113.09 KB/s
------------------------------------------------------------------------------
  total:  0.13 KB/s   32862.18 KB/s   32862.32 KB/s