Linux LVM: Three Kinds of Logical Volume

Outline

To meet different performance and redundancy requirements, LVM supports the following three kinds of logical volume:

  1. Linear Logical Volume
  2. Striped Logical Volume
  3. Mirrored Logical Volume

Linear Logical Volume

By default, lvcreate creates a linear logical volume. A linear LV's PEs can come from a single PV or from several; normally PEs are allocated starting on the first PV, and once that PV's PEs are used up, allocation continues on the second PV, then the third, in order. You can also make a linear LV span a specified number of PVs, or even pin its PEs to chosen PE ranges on each PV, but if any one of those PVs fails, the whole linear LV may become unusable. The size of a linear LV can be given directly with -L, or as a number of PEs with -l. When data is written to a linear LV, it first goes to the PEs on the first PV; only when the space allocated on the first PV runs out does writing move on to the second PV.
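As a quick illustration of -L versus -l: the lvdisplay output later in this article shows that 100 LEs correspond to 400 MiB, i.e. the PE size in this VG is 4 MiB. Under that assumption, the two commands in this sketch (the LV names are hypothetical) would request the same amount of space:

  # Assuming a 4 MiB PE size, both commands create a 400 MiB linear LV.
  lvcreate -L 400M -n demo1 VolGroup1   # size given directly
  lvcreate -l 100 -n demo2 VolGroup1    # size given as a PE count (100 x 4 MiB)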

A linear LV only meets the need for flexible space allocation; it provides neither performance nor redundancy. It is the most common kind of volume, and it can also be converted into a mirror LV with the lvconvert command to gain redundancy (a sketch of this follows the listing below). For example, the following command creates a linear LV from specified PE ranges on four PVs:

root@hunk-virtual-machine:/home/hunk# lvcreate -l 100 -n linearlv VolGroup1 /dev/sdc:1280-1305 /dev/sdd:1280-1305 /dev/sde:1280-1305 /dev/sdf:1280-1305
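As mentioned above, lvconvert can add redundancy to an existing linear LV by converting it to a mirror. A minimal sketch, assuming the VG still has enough free PEs on another PV to hold the copy:

  # Convert the linear LV into a mirror with one additional copy (-m 1).
  lvconvert -m 1 VolGroup1/linearlv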

Striped Logical Volume

The underlying storage layout of a striped LV is similar to RAID 0: the LV spans multiple PVs. The number of PVs to span is specified with -i, and it naturally cannot exceed the number of PVs in the VG. The maximum size of a striped LV is limited by the PV with the fewest remaining PEs.
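Since the PV with the fewest free PEs bounds the size, it can be useful to check the free space on each PV before creating a striped LV. A minimal sketch:

  # Show per-PV free space; the striped LV can use at most the smallest
  # pv_free value on each of the PVs it spans.
  pvs -o pv_name,vg_name,pv_size,pv_free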

Striping means that each participating PV is divided into equal-sized chunks (also called stripe units); the chunks at the same position on each PV together form a stripe. For example, in the figure from the Red Hat 6 official documentation, there are three PVs: the three chunks marked 1, 2 and 3 in red form stripe 1, and chunks 4, 5 and 6 form stripe 2. The chunk size can be specified with -I or --stripesize, but it cannot exceed the PE size.

When data is written to a striped LV, it is split into chunk-sized pieces, and those chunks are written to the PVs in round-robin order. Multiple underlying disk drives then process the I/O requests concurrently, and the aggregate I/O performance is multiplied. Continuing with the same figure: if a 4 MB block of data is written to an LV whose stripesize is set to 512 KB, LVM cuts it into eight chunks, denoted chunk1, chunk2 and so on, and writes them to the PVs in the following order (a sketch for inspecting the stripe layout follows this list):

  1. chunk1 written to PV1
  2. chunk2 written to PV2
  3. chunk3 written to PV3
  4. chunk4 written to PV1
  5. ...
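To verify how an existing LV is laid out, the segment report can be queried. A minimal sketch:

  # Show the stripe count, stripe size and backing devices of each LV segment.
  lvs --segments -o lv_name,seg_size,stripes,stripesize,devices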

Because LVM cannot tell whether multiple physical volumes come from the same underlying disk, if the PVs used by a striped LV are actually different partitions on one physical disk, each data block is still cut into multiple chunks, but all of them are issued to the same disk drive. In that case the striped LV does not improve performance; it actually degrades it. The essence of how a striped LV improves I/O performance is therefore having more underlying disk drives process I/O requests in parallel, not merely spreading I/O across multiple PVs on the surface.
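It is therefore worth confirming that the PVs really sit on distinct disks before striping. A minimal sketch using the device names from this article's test environment:

  # List the block devices in tree form; PVs that turn out to be partitions
  # of one parent disk would defeat the purpose of striping.
  lsblk -o NAME,TYPE,SIZE /dev/sdc /dev/sdd /dev/sde /dev/sdf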

A striped LV mainly addresses performance requirements. It has no redundancy at all and therefore no fault tolerance: if a single disk fails, the data is lost.

root@hunk-virtual-machine:/home# lvcreate -L 20G --stripes 4 --stripesize 256 --name stripevol VolGroup1
 

Mirrored Logical Volume

A mirror LV keeps redundant copies of the data on separate PVs, like RAID 1; the number of extra copies is specified with -m. A mirror LV provides redundancy and effectively eliminates the single point of failure of one disk, but it does not help performance. A linear LV and a mirror LV can be converted into each other directly with the lvconvert tool, and the number of mirror copies can also be changed after creation (a sketch follows the listing below); see the man page for details.

root@hunk-virtual-machine:/home/hunk# lvcreate -l 100 -m1 -n mirrorvol VolGroup1
  Logical volume "mirrorvol" created.
root@hunk-virtual-machine:/home/hunk# lvdisplay /dev/VolGroup1/mirrorvol -m
  --- Logical volume ---
  LV Path                /dev/VolGroup1/mirrorvol
  LV Name                mirrorvol
  VG Name                VolGroup1
  LV UUID                YxgfYi-c7nK-wk4v-rlu1-vRdh-MTMb-uVfl2v
  LV Write Access        read/write
  LV Creation host, time hunk-virtual-machine, 2018-11-29 01:39:44 +0800
  LV Status              available
  # open                 0
  LV Size                400.00 MiB
  Current LE             100
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:8

  --- Segments ---
  Logical extents 0 to 99:
    Type                raid1
    Monitoring          monitored
    Raid Data LV 0
      Logical volume    mirrorvol_rimage_0
      Logical extents   0 to 99
    Raid Data LV 1
      Logical volume    mirrorvol_rimage_1
      Logical extents   0 to 99
    Raid Metadata LV 0  mirrorvol_rmeta_0
    Raid Metadata LV 1  mirrorvol_rmeta_1
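As noted above, the number of mirror copies can be changed after creation. A minimal sketch, assuming the VG has enough free space for the extra copy:

  # Raise the redundancy from one extra copy to two (-m 2);
  # converting with -m 0 would strip the mirror back to a linear LV.
  lvconvert -m 2 VolGroup1/mirrorvol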

Testing Linear / Striped LVs

Preparing multiple disks

Add four virtual disks to the test environment, each 10 GB in size.

Create a 20 GB linear LV from the four virtual disks. From the LV details queried later, it can be seen that this LV actually spans 3 PVs.

root@hunk-virtual-machine:/home/hunk# pvcreate /dev/sd[cdef]
  Physical volume "/dev/sdc" successfully created
  Physical volume "/dev/sdd" successfully created
  Physical volume "/dev/sde" successfully created
  Physical volume "/dev/sdf" successfully created
root@hunk-virtual-machine:/home/hunk# vgcreate VolGroup1 /dev/sd[cdef]
  Volume group "VolGroup1" successfully created
root@hunk-virtual-machine:/home/hunk# lvcreate -L 20G -n linnervol VolGroup1
  Logical volume "linnervol" created.
root@hunk-virtual-machine:/home/hunk# mkfs.ext4 /dev/VolGroup1/linnervol
root@hunk-virtual-machine:/home/hunk# mount /dev/VolGroup1/linnervol /volumetest
root@hunk-virtual-machine:/home/hunk# df -h | grep linnervol
/dev/mapper/VolGroup1-linnervol  20G  44M  19G  1% /volumetest

Testing the Linear LV

Now use bonnie++ to generate a continuous stream of writes to the LV. In the command below, -n 0 skips the file-creation tests, -r passes the machine's RAM size in MB, -s sets the test file size to twice the RAM size so the page cache cannot absorb the whole workload, -f skips the slow per-character tests, and -b forces an fsync after every write.

root@hunk-virtual-machine:/volumetest# bonnie++ -n 0 -u 0 -r `free -m | grep 'Mem:' | awk '{print $2}'` -s $(echo "scale=0;`free -m | grep 'Mem:' | awk '{print $2}'`*2" | bc -l) -f -b -d /volumetest/
Using uid:0, gid:0.
Writing intelligently...

In a new window, monitor the disk I/O rates of the 4 disks with bwm-ng. We find that only sdc is receiving I/O requests while the other disks sit idle; sdc is the lone busy one.

bwm-ng -i disk -I sdc,sdd,sde,sdf
 
bwm-ng v0.6 (probing every 0.500s), press 'h' for help
  input: disk IO  type: rate
  \ iface          Rx            Tx          Total
  ==============================================================================
  sdc:        0.00 KB/s   12263.47 KB/s   12263.47 KB/s
  sdd:        0.00 KB/s       0.00 KB/s       0.00 KB/s
  sde:        0.00 KB/s       0.00 KB/s       0.00 KB/s
  sdf:        0.00 KB/s       0.00 KB/s       0.00 KB/s
  ------------------------------------------------------------------------------
  total:      0.00 KB/s   12263.47 KB/s   12263.47 KB/s

We keep checking the amount of data written to the LV. Once more than 10 GB has been written, sdc stops processing I/O requests, because its allocated space is now full, and sdd takes over, continuously processing the I/O stream. Around the 10 GB mark there is actually a transition period during which the old and the new disk briefly process I/O at the same time; this is caused by the page cache buffering the writes (a sketch for flushing it follows the listing below).

root@hunk-virtual-machine:/home/hunk# df -h | grep linner
/dev/mapper/VolGroup1-linnervol  20G  11G  8.1G  57% /volumetest

bwm-ng v0.6 (probing every 0.500s), press 'h' for help
  input: disk IO  type: rate
  | iface          Rx            Tx          Total
  ==============================================================================
  sdc:        0.00 KB/s       0.00 KB/s       0.00 KB/s
  sdd:        0.00 KB/s   12263.47 KB/s   12263.47 KB/s
  sde:        0.00 KB/s       0.00 KB/s       0.00 KB/s
  sdf:        0.00 KB/s       0.00 KB/s       0.00 KB/s
  ------------------------------------------------------------------------------
  total:      0.00 KB/s   12263.47 KB/s   12263.47 KB/s
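If a sharper handover between the disks is wanted, the buffered data can be flushed manually before sampling. A minimal sketch, assuming root privileges:

  # Flush dirty pages to disk, then drop the clean page cache so that
  # subsequent writes go straight to the disks instead of lingering in memory.
  sync
  echo 3 > /proc/sys/vm/drop_caches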

Testing the Striped LV

Remove the linear LV used previously:

root@hunk-virtual-machine:/home# lvremove /dev/VolGroup1/linnervol
Do you really want to remove and DISCARD active logical volume linnervol? [y/n]: y
  Logical volume "linnervol" successfully removed

Creating a striped LV

root@hunk-virtual-machine:/home# lvcreate -L 20G --stripes 4 --stripesize 256 --name stripevol VolGroup1
WARNING: ext4 signature detected on /dev/VolGroup1/stripevol at offset 1080. Wipe it? [y/n]: y
  Wiping ext4 signature on /dev/VolGroup1/stripevol.
  Logical volume "stripevol" created.
root@hunk-virtual-machine:/home# lvdisplay /dev/VolGroup1/stripevol -m
  --- Logical volume ---
  LV Path                /dev/VolGroup1/stripevol
  LV Name                stripevol
  VG Name                VolGroup1
  LV UUID                z0MGOg-g6JL-hiE8-9Gt0-RZAJ-K29m-I6tcrS
  LV Write Access        read/write
  LV Creation host, time hunk-virtual-machine, 2018-11-27 01:45:41 +0800
  LV Status              available
  # open                 0
  LV Size                20.00 GiB
  Current LE             5120
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           252:6

  --- Segments ---
  Logical extents 0 to 5119:    # the striped LV's PEs are distributed evenly across the four PVs
    Type                striped
    Stripes             4
    Stripe size         256.00 KiB
    Stripe 0:
      Physical volume   /dev/sdc
      Physical extents  0 to 1279
    Stripe 1:
      Physical volume   /dev/sdd
      Physical extents  0 to 1279
    Stripe 2:
      Physical volume   /dev/sde
      Physical extents  0 to 1279
    Stripe 3:
      Physical volume   /dev/sdf
      Physical extents  0 to 1279

root@hunk-virtual-machine:/home# mkfs.ext4 /dev/VolGroup1/stripevol
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 5242880 4k blocks and 1310720 inodes
Filesystem UUID: 51dbdea0-48fc-4324-9974-42443e424aa0
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

root@hunk-virtual-machine:/home# mount /dev/VolGroup1/stripevol /volumetest/
root@hunk-virtual-machine:/home# df -h | grep stripe
/dev/mapper/VolGroup1-stripevol  20G  44M  19G  1% /volumetest

Test the striped LV with the same method. The testing here is rough and ignores many factors: for the linear LV the bwm-ng I/O rates shown above were 0.5 s samples, while for the striped LV below the I/O rates are 30-second averages. But what we want here is not an exact I/O rate, so there is no need to account for these factors. It can clearly be seen that the 4 disks process I/O requests in parallel, i.e. the striped LV ultimately distributes the I/O requests across multiple underlying disks, so the aggregate I/O throughput is several times higher.

bwm-ng -i disk -I sdc,sdd,sde,sdf
 

 

bwm-ng v0.6 (probing every 0.500s), press 'h' for help
  input: disk IO  type: avg (30s)
  / iface          Rx            Tx          Total
  ==============================================================================
  sdc:        0.13 KB/s   10010.92 KB/s   10011.05 KB/s
  sdd:        0.00 KB/s   10174.32 KB/s   10174.32 KB/s
  sde:        0.00 KB/s    6563.85 KB/s    6563.85 KB/s
  sdf:        0.00 KB/s    6113.09 KB/s    6113.09 KB/s
  ------------------------------------------------------------------------------
  total:      0.13 KB/s   32862.18 KB/s   32862.32 KB/s

 

 

 

 

 
