Linux Should Be Learned This Way: Logical Volumes

LVM (Logical Volume Manager)

Chapter 7: Using RAID and LVM disk array technologies.

 Deploy logical volume

Commonly used LVM deployment commands

Function   Physical volume management   Volume group management   Logical volume management
Scan       pvscan                       vgscan                    lvscan
Create     pvcreate                     vgcreate                  lvcreate
Display    pvdisplay                    vgdisplay                 lvdisplay
Remove     pvremove                     vgremove                  lvremove
Extend     -                            vgextend                  lvextend
Reduce     -                            vgreduce                  lvreduce

Make the two newly added hard disk devices support LVM by initializing them as physical volumes:

[root@slave1 ~]# ll /dev/sd*
brw-rw----. 1 root disk 8,  0 Jan 24 18:53 /dev/sda
brw-rw----. 1 root disk 8,  1 Jan 24 18:53 /dev/sda1
brw-rw----. 1 root disk 8,  2 Jan 24 18:53 /dev/sda2
brw-rw----. 1 root disk 8, 16 Jan 24 18:53 /dev/sdb
brw-rw----. 1 root disk 8, 32 Jan 24 18:53 /dev/sdc
[root@slave1 ~]# pvcreate /dev/sdb /dev/sdc
  Physical volume "/dev/sdb" successfully created.
  Physical volume "/dev/sdc" successfully created.
[root@slave1 ~]# 

Add the two hard disk devices to a volume group named storage, then check the volume group's status:

[root@slave1 ~]# vgcreate storage /dev/sdb /dev/sdc
  Volume group "storage" successfully created
[root@slave1 ~]# vgdisplay
  --- Volume group ---
  VG Name               storage
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       0 / 0   
  Free  PE / Size       10238 / 39.99 GiB
  VG UUID               z5j0JI-BSuO-IuYC-AK6i-Tnrx-OD4T-fpZ1qA
   
  --- Volume group ---
  VG Name               rhel
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <59.00 GiB
  PE Size               4.00 MiB
  Total PE              15103
  Alloc PE / Size       15103 / <59.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               epIxAe-PdPy-19Wd-BL26-PViU-8fpc-bK3DJk

Carve out a logical volume of about 150 MB; -n names the volume and -L specifies its capacity:

[root@slave1 ~]# lvcreate -n vo -L 150M storage
  Rounding up size to full physical extent 152.00 MiB
  Logical volume "vo" created.
[root@slave1 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/storage/vo
  LV Name                vo
  VG Name                storage
  LV UUID                4pldYD-2cRC-YnOO-KUfX-orvz-8fVr-YRYCEf
  LV Write Access        read/write
  LV Creation host, time slave1, 2021-01-24 18:59:09 +0800
  LV Status              available
  # open                 0
  LV Size                152.00 MiB
  Current LE             38
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                D81kEP-mLtB-sjFL-l086-KHpq-rEA4-1SSxq1
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-11-01 11:12:25 +0800
  LV Status              available
  # open                 2
  LV Size                <2.05 GiB
  Current LE             524
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel/home
  LV Name                home
  VG Name                rhel
  LV UUID                v8PBeR-sRyS-fN2c-s0gN-gXgL-aw80-uQNUCr
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-11-01 11:12:25 +0800
  LV Status              available
  # open                 1
  LV Size                18.68 GiB
  Current LE             4783
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                5x4xKG-nMMr-AnZM-mXGG-Tt8X-X2vs-In0w5r
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-11-01 11:12:26 +0800
  LV Status              available
  # open                 1
  LV Size                <38.27 GiB
  Current LE             9796
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
[root@slave1 ~]# 
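
Capacity is allocated in whole physical extents (4 MiB here), which is why the requested 150 MB was rounded up to 152 MiB (38 extents). Equivalently, lvcreate's -l option takes an extent count instead of a capacity; a sketch of the same allocation:

[root@slave1 ~]# lvcreate -n vo -l 38 storage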

Format the new logical volume and mount it for use:

[root@slave1 ~]#  mkfs.ext4 /dev/storage/vo 
mke2fs 1.44.3 (10-July-2018)
Creating filesystem with 155648 1k blocks and 38912 inodes
Filesystem UUID: b438efec-52d2-4cfe-930c-054b7ef1c09b
Superblock backups stored on blocks: 
	8193, 24577, 40961, 57345, 73729

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done 

[root@slave1 ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               969M     0  969M   0% /dev
tmpfs                  984M     0  984M   0% /dev/shm
tmpfs                  984M  9.6M  974M   1% /run
tmpfs                  984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   39G  4.1G   35G  11% /
/dev/mapper/rhel-home   19G  178M   19G   1% /home
/dev/sda1             1014M  153M  862M  15% /boot
tmpfs                  197M   16K  197M   1% /run/user/42
tmpfs                  197M  3.5M  194M   2% /run/user/1000
/dev/sr0               6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
tmpfs                  197M  4.0K  197M   1% /run/user/0
[root@slave1 ~]# mkdir /LVM_XX
[root@slave1 ~]# mount /dev/storage/vo /LVM_XX
[root@slave1 ~]# echo "/dev/storage/vo /LVM_XX ext4 defaults 0 0" >> /etc/fstab
[root@slave1 ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs                969M     0  969M   0% /dev
tmpfs                   984M     0  984M   0% /dev/shm
tmpfs                   984M  9.6M  974M   1% /run
tmpfs                   984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root    39G  4.1G   35G  11% /
/dev/mapper/rhel-home    19G  178M   19G   1% /home
/dev/sda1              1014M  153M  862M  15% /boot
tmpfs                   197M   16K  197M   1% /run/user/42
tmpfs                   197M  3.5M  194M   2% /run/user/1000
/dev/sr0                6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
tmpfs                   197M  4.0K  197M   1% /run/user/0
/dev/mapper/storage-vo  144M  1.6M  132M   2% /LVM_XX
[root@slave1 ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sun Nov  1 11:12:31 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=a9745542-52aa-46e9-8559-c7eadd2e4b20 /boot                   xfs     defaults        0 0
/dev/mapper/rhel-home   /home                   xfs     defaults        0 0
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/storage/vo /LVM_XX ext4 defaults 0 0
[root@slave1 ~]# 
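
Because the entry was appended to /etc/fstab, the volume is mounted automatically at boot. A quick way to confirm the new entry parses correctly without rebooting (a sketch, not part of the original transcript) is to unmount the volume and remount everything from fstab:

[root@slave1 ~]# umount /LVM_XX
[root@slave1 ~]# mount -a
[root@slave1 ~]# df -h | grep LVM_XX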

Expand logical volume

Extend the logical volume vo from the previous experiment to 290 MB, unmounting it first:

[root@slave1 ~]# umount /LVM_XX 
[root@slave1 ~]#  lvextend -L 290M /dev/storage/vo
  Rounding size to boundary between physical extents: 292.00 MiB.
  Size of logical volume storage/vo changed from 152.00 MiB (38 extents) to 292.00 MiB (73 extents).
  Logical volume storage/vo successfully resized.
[root@slave1 ~]# 
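
Extending the volume does not grow the filesystem inside it, which is why the next steps check and then resize the filesystem. As an aside, lvextend's -r (--resizefs) option can resize the filesystem in the same step; a sketch of that alternative:

[root@slave1 ~]# lvextend -r -L 290M /dev/storage/vo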

Check the filesystem's integrity, then resize it to use the new capacity:

[root@slave1 ~]#  e2fsck -f /dev/storage/vo
e2fsck 1.44.3 (10-July-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage/vo: 11/38912 files (0.0% non-contiguous), 10567/155648 blocks
[root@slave1 ~]# resize2fs /dev/storage/vo
resize2fs 1.44.3 (10-July-2018)
Resizing the filesystem on /dev/storage/vo to 299008 (1k) blocks.
The filesystem on /dev/storage/vo is now 299008 (1k) blocks long.

Remount the device and check the mount status:

[root@slave1 ~]# mount -a
[root@slave1 ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs                969M     0  969M   0% /dev
tmpfs                   984M     0  984M   0% /dev/shm
tmpfs                   984M  9.6M  974M   1% /run
tmpfs                   984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root    39G  4.1G   35G  11% /
/dev/mapper/rhel-home    19G  178M   19G   1% /home
/dev/sda1              1014M  153M  862M  15% /boot
tmpfs                   197M   16K  197M   1% /run/user/42
tmpfs                   197M  3.5M  194M   2% /run/user/1000
/dev/sr0                6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
tmpfs                   197M  4.0K  197M   1% /run/user/0
/dev/mapper/storage-vo  279M  2.1M  259M   1% /LVM_XX
[root@slave1 ~]# 

Shrink logical volume

Unmount the volume and check the integrity of the file system:

[root@slave1 ~]# umount /LVM_XX 
[root@slave1 ~]# e2fsck -f /dev/storage/vo
e2fsck 1.44.3 (10-July-2018)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/storage/vo: 11/75776 files (0.0% non-contiguous), 15729/299008 blocks
[root@slave1 ~]# 

Reduce the capacity of the logical volume vo to 120 MB. The order matters here: resize2fs shrinks the filesystem first, and only then does lvreduce shrink the device; reversing the order would destroy data.

[root@slave1 ~]# resize2fs /dev/storage/vo 120M
resize2fs 1.44.3 (10-July-2018)
Resizing the filesystem on /dev/storage/vo to 122880 (1k) blocks.
The filesystem on /dev/storage/vo is now 122880 (1k) blocks long.

[root@slave1 ~]# lvreduce -L 120M /dev/storage/vo
  WARNING: Reducing active logical volume to 120.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce storage/vo? [y/n]: y
  Size of logical volume storage/vo changed from 292.00 MiB (73 extents) to 120.00 MiB (30 extents).
  Logical volume storage/vo successfully resized.
[root@slave1 ~]# 
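
As with extending, lvreduce's -r (--resizefs) option would shrink the filesystem (via fsadm) before reducing the volume, collapsing the two manual steps above into one; a sketch:

[root@slave1 ~]# lvreduce -r -L 120M /dev/storage/vo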

Remount the file system and view the system status

[root@slave1 ~]# mount -a
[root@slave1 ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs                969M     0  969M   0% /dev
tmpfs                   984M     0  984M   0% /dev/shm
tmpfs                   984M  9.6M  974M   1% /run
tmpfs                   984M     0  984M   0% /sys/fs/cgroup
/dev/mapper/rhel-root    39G  4.1G   35G  11% /
/dev/mapper/rhel-home    19G  178M   19G   1% /home
/dev/sda1              1014M  153M  862M  15% /boot
tmpfs                   197M   16K  197M   1% /run/user/42
tmpfs                   197M  3.5M  194M   2% /run/user/1000
/dev/sr0                6.7G  6.7G     0 100% /run/media/linuxprobe/RHEL-8-0-0-BaseOS-x86_64
tmpfs                   197M  4.0K  197M   1% /run/user/0
/dev/mapper/storage-vo  113M  1.6M  103M   2% /LVM_XX
[root@slave1 ~]# 

Logical volume snapshot

First check the information of the volume group. 

[root@slave1 ~]# vgdisplay
  --- Volume group ---
  VG Name               storage
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               39.99 GiB
  PE Size               4.00 MiB
  Total PE              10238
  Alloc PE / Size       30 / 120.00 MiB
  Free  PE / Size       10208 / <39.88 GiB
  VG UUID               z5j0JI-BSuO-IuYC-AK6i-Tnrx-OD4T-fpZ1qA
   
  --- Volume group ---
  VG Name               rhel
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <59.00 GiB
  PE Size               4.00 MiB
  Total PE              15103
  Alloc PE / Size       15103 / <59.00 GiB
  Free  PE / Size       0 / 0   
  VG UUID               epIxAe-PdPy-19Wd-BL26-PViU-8fpc-bK3DJk
   
[root@slave1 ~]# 

The volume group output clearly shows that 30 PEs (120 MiB) are allocated, leaving 10208 free PEs, about 39.88 GiB. Next, use redirection to write a file into the directory where the logical volume device is mounted:

[root@slave1 ~]# echo "Welcome to Linuxprobe.com" > /LVM_XX/readme.txt
[root@slave1 ~]# ls -l /LVM_XX/readme.txt
-rw-r--r--. 1 root root 26 Jan 24 22:09 /LVM_XX/readme.txt

1. Use the -s option to create a snapshot volume and the -L option to specify its size. The logical volume to snapshot is given at the end of the command:

[root@slave1 /]# lvcreate -L 120M -s -n SNAP /dev/storage/vo
  Logical volume "SNAP" created.
[root@slave1 /]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/storage/vo
  LV Name                vo
  VG Name                storage
  LV UUID                4pldYD-2cRC-YnOO-KUfX-orvz-8fVr-YRYCEf
  LV Write Access        read/write
  LV Creation host, time slave1, 2021-01-24 18:59:09 +0800
  LV snapshot status     source of
                         SNAP [active]
  LV Status              available
  # open                 1
  LV Size                120.00 MiB
  Current LE             30
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/storage/SNAP
  LV Name                SNAP
  VG Name                storage
  LV UUID                78YABh-TEUM-YrY9-Tuei-WT2R-VoBT-5XU4Gx
  LV Write Access        read/write
  LV Creation host, time slave1, 2021-01-24 21:52:43 +0800
  LV snapshot status     active destination for vo
  LV Status              available
  # open                 0
  LV Size                120.00 MiB
  Current LE             30
  COW-table size         120.00 MiB
  COW-table LE           30
  Allocated to snapshot  0.01%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6
   
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                D81kEP-mLtB-sjFL-l086-KHpq-rEA4-1SSxq1
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-11-01 11:12:25 +0800
  LV Status              available
  # open                 2
  LV Size                <2.05 GiB
  Current LE             524
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel/home
  LV Name                home
  VG Name                rhel
  LV UUID                v8PBeR-sRyS-fN2c-s0gN-gXgL-aw80-uQNUCr
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-11-01 11:12:25 +0800
  LV Status              available
  # open                 1
  LV Size                18.68 GiB
  Current LE             4783
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                5x4xKG-nMMr-AnZM-mXGG-Tt8X-X2vs-In0w5r
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-11-01 11:12:26 +0800
  LV Status              available
  # open                 1
  LV Size                <38.27 GiB
  Current LE             9796
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
[root@slave1 /]# 

2. Create a 100 MB junk file in the directory where the logical volume is mounted, then check the snapshot volume's status again; its used space (Allocated to snapshot) has grown noticeably:

 

[root@slave1 ~]# dd if=/dev/zero of=/LVM_XX/files count=1 bs=100M
1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 1.18871 s, 88.2 MB/s
[root@slave1 ~]# lvdisplay
  --- Logical volume ---
  LV Path                /dev/storage/vo
  LV Name                vo
  VG Name                storage
  LV UUID                iQ3yjL-T3A8-B52O-rsyf-WAQG-tty5-3GsCYM
  LV Write Access        read/write
  LV Creation host, time slave1, 2021-01-24 22:02:43 +0800
  LV snapshot status     source of
                         SNAP [active]
  LV Status              available
  # open                 1
  LV Size                120.00 MiB
  Current LE             30
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:3
   
  --- Logical volume ---
  LV Path                /dev/storage/SNAP
  LV Name                SNAP
  VG Name                storage
  LV UUID                3N9S4y-lwbZ-IyPY-EklL-tuAW-iYfP-meWFQp
  LV Write Access        read/write
  LV Creation host, time slave1, 2021-01-24 22:07:25 +0800
  LV snapshot status     active destination for vo
  LV Status              available
  # open                 0
  LV Size                120.00 MiB
  Current LE             30
  COW-table size         120.00 MiB
  COW-table LE           30
  Allocated to snapshot  83.74%
  Snapshot chunk size    4.00 KiB
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6
   
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                D81kEP-mLtB-sjFL-l086-KHpq-rEA4-1SSxq1
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-11-01 11:12:25 +0800
  LV Status              available
  # open                 2
  LV Size                <2.05 GiB
  Current LE             524
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:1
   
  --- Logical volume ---
  LV Path                /dev/rhel/home
  LV Name                home
  VG Name                rhel
  LV UUID                v8PBeR-sRyS-fN2c-s0gN-gXgL-aw80-uQNUCr
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-11-01 11:12:25 +0800
  LV Status              available
  # open                 1
  LV Size                18.68 GiB
  Current LE             4783
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2
   
  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                5x4xKG-nMMr-AnZM-mXGG-Tt8X-X2vs-In0w5r
  LV Write Access        read/write
  LV Creation host, time localhost, 2020-11-01 11:12:26 +0800
  LV Status              available
  # open                 1
  LV Size                <38.27 GiB
  Current LE             9796
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:0
   
[root@slave1 ~]# 

3. To verify the effect of the SNAP snapshot volume, perform a snapshot merge (restore) on the logical volume. Remember to unmount the device from its directory first. During the merge the snapshot volume is deleted automatically, and the 100 MB junk file created after the snapshot was taken is cleared along with it:

[root@slave1 ~]# umount /LVM_XX 
[root@slave1 ~]#  lvconvert --merge /dev/storage/SNAP
  Merging of volume storage/SNAP started.
  storage/vo: Merged: 29.03%
  storage/vo: Merged: 100.00%
[root@slave1 ~]# ls -l /LVM_XX/
total 0
[root@slave1 ~]# 
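
Note that /LVM_XX was unmounted before the merge, so the empty listing above shows the bare mount-point directory. To inspect the restored volume itself, remount it; the snapshot should also be gone from the volume list (a sketch):

[root@slave1 ~]# mount -a
[root@slave1 ~]# ls -l /LVM_XX/
[root@slave1 ~]# lvs storage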

Delete logical volume

First cancel the mount association between the logical volume and its directory: run umount /LVM_XX, then delete the device's entry from /etc/fstab so it no longer takes effect at boot.

Next delete the logical volume device; lvremove asks you to enter y to confirm the operation.

Then delete the volume group; only the volume group name is needed here, not the absolute device path.

Finally, delete the physical volume devices.

In short, the deletion order is: logical volume first, then the volume group, then the physical volumes, as shown in the sketch below.
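
The original post omits the terminal output for this part. A minimal sketch of the full removal sequence, assuming the device and mount-point names used above:

[root@slave1 ~]# umount /LVM_XX
[root@slave1 ~]# vi /etc/fstab                    (delete the /dev/storage/vo line)
[root@slave1 ~]# lvremove /dev/storage/vo         (confirm with y at the prompt)
[root@slave1 ~]# vgremove storage                 (volume group name only)
[root@slave1 ~]# pvremove /dev/sdb /dev/sdc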

 

Source: blog.csdn.net/yanghuadong_1992/article/details/113094908