In-depth analysis of Linux disk partitions | LVM logical volumes | VDO volumes | AutoFS storage automatic mounting

Preface

  Be aware that the use of these techniques and tools may vary across Linux distributions, and you'll need to check the documentation and guides for the specific distribution you're using for more details and instructions.

1. Install operating system partition configuration

  Create a /boot partition (size 50MB).

  Set the /boot/efi partition to 200MB.

  Set the swap partition to twice the size of physical memory.

  Reserve all remaining space for the / partition. Alternatively, set the root partition to 50GB or 100GB and split the remaining space between the /home and /var partitions.

2. Large disk partition management

[root@localhost ~]# fdisk -l

Disk /dev/sda: 8393.0 GB, 8392996290560 bytes, 16392570880 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 1048576 bytes

  The total space of /dev/sda is about 8TB. Partition and format it directly:

[root@localhost ~]# fdisk /dev/sda
[root@localhost ~]# mkfs.xfs /dev/sda1
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                  63G     0   63G   0% /dev
tmpfs                     63G     0   63G   0% /dev/shm
tmpfs                     63G   11M   63G   1% /run
tmpfs                     63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  2.0G   49G   4% /
/dev/sdb2               1014M  151M  864M  15% /boot
/dev/sdb1                200M   12M  189M   6% /boot/efi
/dev/mapper/centos-home  838G   34M  838G   1% /home
tmpfs                     13G     0   13G   0% /run/user/0
/dev/sda1                2.0T   34M  2.0T   1% /data

  The resulting partition is only 2TB. The reason is that fdisk (with an MBR partition table) cannot create partitions larger than 2TB; beyond 2TB, use parted instead.
  First delete the disk partition created with fdisk:

[root@localhost ~]# fdisk /dev/sda
Command (m for help): p
Command (m for help): d
Selected partition 1
Partition 1 is deleted

Command (m for help): w

  Next use the parted tool to create partitions:

[root@localhost ~]# parted /dev/sda
GNU Parted 3.1
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p                                                                
Model: BROADCOM MR9560-16i (scsi)
Disk /dev/sda: 8393GB		# total disk size is 8393GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags: 

Number  Start  End  Size  Type  File system  Flags

(parted) mklabel gpt                                                      
Warning: The existing disk label on /dev/sda will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? y                                                                 
(parted) mkpart                                                           
Partition name?  []? sda1                                                 
File system type?  [ext2]? ext4                                           
Start? 0                                                                  
End?                                                                      
End? 8393GB                                                               
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? Ignore                                                     
(parted) p                                                                
Model: BROADCOM MR9560-16i (scsi)
Disk /dev/sda: 8393GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
 1      17.4kB  8393GB  8393GB               sda1

(parted) quit                                                             
Information: You may need to update /etc/fstab.
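
  The same partitioning can also be done non-interactively, which is handy in scripts. A sketch using percent boundaries, which lets parted choose well-aligned offsets and avoids the alignment warning seen above:

[root@localhost ~]# parted -s /dev/sda mklabel gpt
[root@localhost ~]# parted -s /dev/sda mkpart sda1 ext4 0% 100%
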
[root@localhost ~]# fdisk -l                                              
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sda: 8393.0 GB, 8392996290560 bytes, 16392570880 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 262144 bytes / 1048576 bytes
Disk label type: gpt
Disk identifier: E43B9D7C-0D7E-470D-8BBA-FC6A7238D6C3


#         Start          End    Size  Type            Name
 1           34  16392570846    7.6T  Microsoft basic sda1
[root@localhost ~]# mkfs.ext4 /dev/sda1
[root@localhost ~]# mkdir /data
[root@localhost ~]# mount /dev/sda1 /data
[root@localhost ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                  63G     0   63G   0% /dev
tmpfs                     63G     0   63G   0% /dev/shm
tmpfs                     63G   11M   63G   1% /run
tmpfs                     63G     0   63G   0% /sys/fs/cgroup
/dev/mapper/centos-root   50G  2.0G   49G   4% /
/dev/sdb2               1014M  151M  864M  15% /boot
/dev/sdb1                200M   12M  189M   6% /boot/efi
/dev/mapper/centos-home  838G   34M  838G   1% /home
tmpfs                     13G     0   13G   0% /run/user/0
/dev/sda1                7.6T   93M  7.2T   1% /data

  View the UUID of the disk:

blkid
/dev/mapper/centos-root: UUID="6daff923-488c-4dd8-80ee-fb5faed8cca9" TYPE="xfs" 
/dev/sdb3: UUID="JW1wdc-hst2-omzt-QegM-XS5a-tShf-OeUu1a" TYPE="LVM2_member" PARTUUID="6d3d61ff-6f12-4537-8abc-e59075d23291" 
/dev/sda1: UUID="bcde2c04-0944-4ec3-9d16-cdfbff7f38ea" TYPE="ext4" PARTLABEL="sda1" PARTUUID="1a4db326-aa94-4c44-99b5-21d96388334a" 
/dev/sdb1: SEC_TYPE="msdos" UUID="B09B-3EEA" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="6f8129b9-3eae-4234-98b8-7cde65820c0c" 
/dev/sdb2: UUID="dc45cf08-fdca-4c5c-9ef5-94396644c548" TYPE="xfs" PARTUUID="54c0bb37-f49e-4f87-aaa3-96201d717bb7" 
/dev/mapper/centos-swap: UUID="1ca6f4e8-c154-4ec0-9200-74bf2d7c8e87" TYPE="swap" 
/dev/mapper/centos-home: UUID="a8c37d6a-51e1-44ec-b506-29614f931ad1" TYPE="xfs" 
[root@localhost ~]# vim /etc/fstab
UUID=bcde2c04-0944-4ec3-9d16-cdfbff7f38ea /data ext4    defaults        0 0 
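
  Before rebooting, it is worth validating the new fstab entry, since a bad entry can drop the system into emergency mode on boot:

[root@localhost ~]# umount /data && mount -a && findmnt /data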

  Test the read and write performance of the disk. First check the filesystem block size (used as the dd block size below):

[root@localhost ~]# dumpe2fs /dev/sda1|grep -i "block Size"
dumpe2fs 1.42.9 (28-Dec-2013)
Block size:               4096

  Test write performance:

[root@localhost ~]# cd /data/
[root@localhost data]# time dd if=/dev/zero of=/data/testio bs=4k count=100000 oflag=direct,sync
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 9.6584 s, 42.4 MB/s
real	0m9.732s
user	0m0.103s
sys	0m4.798s


[root@localhost data]# dd if=/dev/zero of=/data/testio2 bs=4k count=100000 oflag=direct
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.71845 s, 110 MB/s

# oflag=direct bypasses the page cache; adding oflag=sync also bypasses the drive's write cache

  Test read performance:

[root@localhost data]# time dd if=/dev/sda1 of=/dev/null bs=4k
^C4775424+0 records in
4775423+0 records out
19560132608 bytes (20 GB) copied, 12.6906 s, 1.5 GB/s

real	0m12.693s
user	0m0.804s
sys	0m10.713s
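
  dd measures sequential throughput only. For random I/O, a tool such as fio gives a more realistic picture; a minimal sketch, assuming fio is installed (yum install -y fio):

[root@localhost data]# fio --name=randwrite --filename=/data/fiotest --rw=randwrite \
    --bs=4k --size=1G --direct=1 --ioengine=libaio --iodepth=16 \
    --runtime=60 --time_based --group_reporting
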

3. LVM logical volume management

  LVM (Logical Volume Manager) is a mechanism for managing hard disk partitions under Linux systems. It establishes a logical layer on top of disks and partitions to manage disk partitions flexibly and efficiently, simplifying disk management operations. The size of the logical volume can be dynamically adjusted without losing existing data; even if a new disk is added, the existing logical volume will not be changed.

3.1. Create LVM logical volume


3.1.1. Create physical volume PV

yum install -y lvm2

  View all hard drives on the host:

lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0   2.2T  0 disk 
sdb      8:16   0 893.1G  0 disk 
├─sdb1   8:17   0   190M  0 part /boot/efi
├─sdb2   8:18   0   976M  0 part /boot
├─sdb3   8:19   0   128G  0 part [SWAP]
└─sdb4   8:20   0   764G  0 part /
sdc      8:32   0   1.1T  0 disk 
sdd      8:48   0   1.1T  0 disk 
sde      8:64   0   2.2T  0 disk 

  Use sda and sde to create physical volumes, separating multiple devices with spaces:

pvcreate /dev/sda /dev/sde

3.1.2. Create volume group VG

vgcreate <volume-group-name> <physical-volume-name> ... <physical-volume-name>

  For example, create vg_01 volume group and add two physical volumes /dev/sda and /dev/sde:

vgcreate vg_01 /dev/sda /dev/sde

  If you need to add a new physical volume to the volume group, run the following command to add other created physical volumes:

vgextend <volume-group-name> <physical-volume-name> ... <physical-volume-name>

  View volume group information:

vgs
  VG    #PV #LV #SN Attr   VSize VFree
  vg_01   2   0   0 wz--n- 4.36t 4.36t

3.1.3. Create logical volume LV

  Create a logical volume:

lvcreate -L <LV-size> -n <LV-name> <VG-name>

# LV size: must be smaller than the remaining free space in the volume group.
# LV name: chosen by you, e.g. lv01.
# VG name: the name of a volume group created earlier, e.g. vg_01.
lvcreate -L 10g -n lv01 vg_01
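
  To allocate space by extents or percentages instead of a fixed size, lvcreate also accepts the -l option, for example to use all remaining space in the volume group:

lvcreate -l 100%FREE -n lv01 vg_01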

3.1.4. Create and mount the file system

  Obtain information such as logical volume path, name, and volume group to which it belongs. You will need to use it for subsequent steps.

lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_01/lv01
  LV Name                lv01
  VG Name                vg_01
  LV UUID                4fx5f3-BA2H-yCcE-2mys-p0FW-umQS-0ur97v
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2023-07-13 23:25:53 -0400
  LV Status              available
  # open                 0
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

# LV Path: path of the logical volume, e.g. /dev/vg_01/lv01
# LV Name: name of the logical volume, e.g. lv01
# VG Name: name of the volume group the LV belongs to, e.g. vg_01
# LV Size: size of the logical volume, 10 GiB in the output above

  Create a file system on the logical volume:

mkfs.<filesystem-type> <LV-path>

  Take the logical volume path as /dev/vg_01/lv01 and the file system as ext4 as an example to format:

mkfs.ext4 /dev/vg_01/lv01

  Create a new mount point, such as /media/lv01, and mount the file system to the mount point:

mkdir /media/lv01
mount /dev/vg_01/lv01 /media/lv01

  Check the mounting results of logical volumes:

df -h
Filesystem              Size  Used Avail Use% Mounted on
......
/dev/mapper/vg_01-lv01  9.8G   37M  9.2G   1% /media/lv01

3.1.5. Configure automatic mounting at startup

  To mount a logical volume automatically at startup, add its mount information to /etc/fstab so that the system mounts it on every boot and restart.
  Run the following command to back up the /etc/fstab file:

cp /etc/fstab  /etc/fstab.bak

  Add the mount information of the target logical volume in the /etc/fstab file:

echo `blkid <LV-path> | awk '{print $2}' | sed 's/\"//g'` <mount-point> <filesystem-type> defaults 0 0 >> /etc/fstab

  The logical volume (the path is /dev/vg_01/lv01) is automatically mounted to the /media/lv01 directory when the computer is powered on and restarted. The file system type is ext4:

echo `blkid /dev/vg_01/lv01 | awk '{print $2}' | sed 's/\"//g'` /media/lv01 ext4 defaults 0 0 >> /etc/fstab

  Check whether the mounting information of the logical volume is added successfully:

cat /etc/fstab
......
UUID=2b1a3a54-2ab8-48f4-a321-3d5bfed63482 /media/lv01 ext4 defaults 0 0

  Verify that the automatic mount configuration works
  by remounting all file systems configured in /etc/fstab. If there is no error output, the logical volume is successfully mounted at the specified mount point.

mount -a

3.2. Expansion and contraction of logical volumes

  Expand a logical volume:
  To expand a logical volume (LV) through LVM (Logical Volume Manager), first
  create a physical volume from another physical disk:

pvcreate <disk-device-name>
pvcreate /dev/sdc

  Check the volume group information and run the following command to expand the volume group:

vgs
  VG    #PV #LV #SN Attr   VSize VFree
  vg_01   2   1   0 wz--n- 4.36t 4.35t
vgextend <volume-group-name> <physical-volume-name>

  Add physical volume /dev/sdc to volume group vg_01:

vgextend vg_01 /dev/sdc

  View volume group information:

vgs
  VG    #PV #LV #SN Attr   VSize VFree
  vg_01   3   1   0 wz--n- 5.45t 5.44t

  Expand the logical volume and file system:
  Obtain information such as the logical volume path, name, and volume group to which it belongs. You will need to use it for subsequent steps.

lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg_01/lv01
  LV Name                lv01
  VG Name                vg_01
  LV UUID                4fx5f3-BA2H-yCcE-2mys-p0FW-umQS-0ur97v
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2023-07-13 23:25:53 -0400
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  Expand logical volume:

lvextend -L [+]<size> <LV-path>   # +<size> adds to the current size; <size> alone sets the new total

  Add 5GiB capacity to the logical volume (the path is /dev/vg_01/lv01):

lvextend -L +5G /dev/vg_01/lv01

Or equivalently:
lvextend -L 15G /dev/vg_01/lv01

  Expand the logical volume file system:

resize2fs <逻辑卷路径>

  Take the expansion of logical volume lv01 (the path is /dev/vg_01/lv01) as an example:

resize2fs /dev/vg_01/lv01
df -h
Filesystem              Size  Used Avail Use% Mounted on
......
/dev/mapper/vg_01-lv01   15G   41M   14G   1% /media/lv01
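
  On current LVM versions the two steps can be combined: lvextend's -r (--resizefs) flag calls the appropriate filesystem resize tool automatically, a convenient alternative to running resize2fs by hand:

lvextend -r -L +5G /dev/vg_01/lv01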

  Shrink a logical volume:
  First unmount the logical volume /dev/mapper/vg_01-lv01:

umount /media/lv01
umount: /media/lv01: target is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
fuser -m /media/lv01
/media/lv01:          9141c

ps -ef | grep 9141
root      9141  9137  0 Jul13 pts/2    00:00:00 -bash
root     39546 39449  0 02:18 pts/0    00:00:00 grep --color=auto 9141

kill -9 9141
umount /media/lv01

  Check the filesystem consistency with the e2fsck command (required before shrinking):

e2fsck -f /dev/mapper/vg_01-lv01
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/mapper/vg_01-lv01: 12/983040 files (0.0% non-contiguous), 104724/3932160 blocks

  Shrink the filesystem to 3G using resize2fs:

resize2fs /dev/mapper/vg_01-lv01 3G
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/mapper/vg_01-lv01 to 786432 (4k) blocks.
The filesystem on /dev/mapper/vg_01-lv01 is now 786432 blocks long.

  Use the lvreduce command to reduce the logical volume to 3G:

lvreduce -L 3G /dev/mapper/vg_01-lv01
  WARNING: Reducing active logical volume to 3.00 GiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vg_01/lv01? [y/n]: y
  Size of logical volume vg_01/lv01 changed from 15.00 GiB (3840 extents) to 3.00 GiB (768 extents).
  Logical volume vg_01/lv01 successfully resized.
mount -a
df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/vg_01-lv01  2.9G   25M  2.7G   1% /media/lv01
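
  As with expansion, lvreduce also accepts -r (--resizefs), which runs the filesystem check and resize for you in the correct order before reducing the LV. A sketch; the volume must still be unmounted for an ext4 shrink:

umount /media/lv01
lvreduce -r -L 3G /dev/vg_01/lv01
mount -a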

3.3. Volume group shrinkage

  First determine the physical volume to remove, migrate the data on it to the other physical volumes, and then remove it from the volume group:

pvmove /dev/sdc
  No data to move for vg_01.

  When the pvmove command is completed, use the pvs command again to check whether the physical volume is free.

pvs -o+pv_used
  PV         VG    Fmt  Attr PSize PFree  Used 
  /dev/sda   vg_01 lvm2 a--  2.18t <2.18t 3.00g
  /dev/sdc         lvm2 ---  1.09t  1.09t    0 
  /dev/sde   vg_01 lvm2 a--  2.18t  2.18t    0 

  If it is free, use the vgreduce command to remove the physical volume /dev/sdc from the volume group:

vgreduce vg_01 /dev/sdc
  Removed "/dev/sdc" from volume group "vg_01"

  Finally run the pvremove command to remove the disk from the LVM configuration:

pvremove /dev/sdc
  Labels on physical volume "/dev/sdc" successfully wiped.

3.4. Delete logical volumes, volume groups and physical volumes

umount /media/lv01 
lvremove /dev/mapper/vg_01-lv01
Do you really want to remove active logical volume vg_01/lv01? [y/n]: y
  Logical volume "lv01" successfully removed

vgremove vg_01
  Volume group "vg_01" successfully removed

lvs # output is empty
vgs # output is empty
pvs
  PV         VG Fmt  Attr PSize PFree
  /dev/sda      lvm2 ---  2.18t 2.18t
  /dev/sde      lvm2 ---  2.18t 2.18t

pvremove /dev/sda
  Labels on physical volume "/dev/sda" successfully wiped.
pvremove /dev/sde
  Labels on physical volume "/dev/sde" successfully wiped.

4. VDO (Virtual Data Optimization) volume management

4.1. Introduction to VDO

  VDO (Virtual Data Optimize) is a storage technology introduced with RHEL 8/CentOS 8 (first previewed in the 7.5 beta). It comes from Permabit, a company acquired by Red Hat.
  The main purpose of VDO is to save disk space, for example letting a 1TB disk hold 1.5TB of data, thereby reducing data-center costs.
  VDO works mainly through deduplication and compression. With deduplication, identical data written to the disk several times occupies the space of a single copy instead of many. This is similar to uploading a large software installer to Baidu Netdisk and seeing it "transfer instantly": the file already exists in storage, so it does not need to be uploaded again or consume more of Baidu's space. The other mechanism is data compression, which works like a compression utility's algorithm and saves additional disk space.
  VDO (Virtual Data Optimizer) is storage software that can act as an additional storage layer under a local file system, iSCSI, or Ceph storage. VDO provides inline data reduction for Linux in the form of deduplication, compression, and thin provisioning. When setting up a VDO volume, you specify a block device on which to build the VDO volume and the amount of logical storage to present.
  ● When hosting virtual machines or containers, it is recommended to provision storage at a 10:1 logical-to-physical ratio: 1TB of physical storage is presented as 10TB of logical storage.
  ● For object storage, such as that provided by Ceph, a 3:1 logical-to-physical ratio is recommended: 1TB of physical storage appears as 3TB of logical storage.
  In both cases, simply put a file system on top of the logical device that VDO provides and use it directly or as part of a distributed cloud storage architecture.
  Because VDO is thinly provisioned, the file system and applications only see the logical space in use and are not aware of the actual physical space available. Use a script to monitor actual free space and generate an alert when usage exceeds a threshold: for example, when a VDO volume is 80% full, as in the sketch below.
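
  A minimal monitoring sketch along those lines, assuming the default vdostats output format (Device, 1K-blocks, Used, Available, Use%, Space saving%) and an 80% threshold:

#!/bin/bash
# Warn when any VDO volume's physical usage reaches the threshold.
THRESHOLD=80
vdostats | awk -v t="$THRESHOLD" 'NR > 1 {
    use = $5; sub(/%/, "", use)            # strip the % sign from the Use% column
    if (use + 0 >= t)
        printf "WARNING: %s is %s%% full\n", $1, use
}'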

  VDO deployment scenarios:
  KVM:
  VDO can be deployed on a KVM server configured with direct attached storage

  File system:
  File systems can be created on a VDO and exposed to NFS or CIFS users via an NFS server or Samba.
  Place VDO on iSCSI:
  An entire VDO storage target can be exported as an iSCSI target to a remote iSCSI initiator
  When creating a VDO volume on iSCSI, you can place the VDO volume above or below the iSCSI tier.
  When placing a VDO volume on an iSCSI server (target) below the iSCSI layer:
  ● The VDO volume is transparent to the initiator, similar to other iSCSI LUNs. By hiding thin provisioning and space savings from clients, LUNs are easier to monitor and maintain.
  ● Network traffic is reduced because there are no reads or writes to VDO metadata, and read validation of deduplication recommendations does not occur on the network.
  ● Using memory and CPU resources on the iSCSI target can result in better performance, for example the ability to host more hypervisors, because the volume reduction happens on the iSCSI target.
  ● If the client implements encryption on the initiator and has a VDO volume under the target, no space savings will be realized.

  When placing VDO volumes on iSCSI clients (initiators) above the iSCSI layer:
  ● Network traffic in asynchronous mode may be reduced if high space savings are to be achieved.
  ● Can directly view and control space savings and monitor usage.
  ● If you want to encrypt data, for example using dm-crypt, you can implement VDO on top of encryption and take full advantage of space efficiency.

  LVM:
  In the figure below, the VDO target is registered as a physical volume so that it can be managed by LVM. Create multiple logical volumes (LV1 to LV4) from the deduplicated storage pool. This allows VDO to support multi-protocol unified block or file access to the underlying deduplicated storage pool.

4.2. Deploy VDO

  Install VDO software:

yum install lvm2 kmod-kvdo vdo

  Create a VDO volume:
  Next, create the VDO volume on the block device. In the following steps, replace vdo-name with the identifier you want to use for the VDO volume, for example vdo1. Each VDO instance in the system must use a different name and device. If you use non-persistent device names, VDO may fail to start later if the device name changes.

vdo create \
--name=vdo-name \
--device=block-device \
--vdoLogicalSize=logical-size

# --name is the name of the VDO volume; anything recognizable will do
# --device is the real physical disk
# --vdoLogicalSize is the capacity of the VDO volume, here sized at about 1.5x the real physical space
vdo create \
--name=vdo-name \
--device=/dev/sdc \
--vdoLogicalSize=50G

Creating VDO vdo-name
Starting VDO vdo-name
Starting compression on VDO vdo-name
VDO instance 0 volume is ready at /dev/mapper/vdo-name

If the physical block device is larger than 16TiB, add the --vdoSlabSize=32G option to increase the slab size on the volume to 32GiB

mkfs.xfs /dev/mapper/vdo-name 

  Mount VDO:

mkdir /vodvolume
mount /dev/mapper/vdo-name /vodvolume/

  Enable periodic block discarding:
  A systemd timer can be enabled to periodically discard unused blocks in all supported file systems.
  Enable and start the systemd timer:

systemctl enable --now fstrim.timer

  Verify the status of the timer:

systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: disabled)
   Active: active (waiting) since Fri 2023-07-14 04:33:38 EDT; 5s ago
     Docs: man:fstrim

Jul 14 04:33:38 localhost.localdomain systemd[1]: Started Discard unused blocks once a week.
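
  To check when the timer is scheduled to run next:

systemctl list-timers fstrim.timer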

  Monitor VDO:
  Use the vdostats tool to get information about VDO volumes:

vdostats --human-readable
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdo-name      1.1T      5.4G      1.1T   0%           99%

  vdostats is used to see the size of the real physical space and the remaining space.

4.3. VDO maintenance

4.3.1. Managing free space of VDO volumes

4.3.2. Starting or stopping a VDO volume

  During the system boot process, the vdo systemd unit automatically starts all VDO devices that are configured as activated. The vdo systemd unit is installed and enabled by default when you install the vdo package; at system startup it runs the vdo start --all command to start all activated VDO volumes.
  You can also create a VDO volume that does not start automatically by adding the --activate=disabled option to the vdo create command.

  Boot sequence:
  Some systems may place LVM volumes above or below VDO volumes. In these systems, services need to start in the correct order:
  ● The lower layer of LVM must start first. On most systems, this layer starts automatically when the LVM package is installed.
  ● The vdo systemd unit must then be started.
  ● Finally, additional scripts must run to start LVM volumes or other services on top of the running VDO volumes.

  Time required to stop a volume:
  The time required to stop a VDO volume varies with the speed of the storage device and the amount of data the volume needs to write:
  ● Volumes always write approximately 1GiB for every 1GiB of UDS index.
  ● The volume also writes an amount of data equal to the block map cache size, plus a maximum of 8MiB per slab.
  ● The volume must finish processing all outstanding IO requests.

  To start a given VDO volume, use:

vdo start --name=vdo-name
Starting VDO vdo-name
VDO instance 0 volume is ready at /dev/mapper/vdo-name

  To start all VDO volumes, use:

vdo start --all

  Activate a VDO volume (an activated volume starts automatically at boot):
  Activate a specific volume:

vdo activate --name=vdo-name
vdo-name already activated

  Activate all volumes:

vdo activate --all
vdo-name already activated

  Deactivate a specific volume:

vdo deactivate --name=vdo-name

  Deactivate all volumes:

vdo deactivate --all

4.3.3. Selecting the VDO write mode

  sync mode:
  When VDO is in sync mode, the layers above it assume that a completed write command has placed the data in persistent storage. As a result, the file system or application does not need to issue FLUSH or force unit access (FUA) requests at critical points to preserve data; it can treat an acknowledged write as durable, because the upper layer assumes the write went straight to persistent storage rather than into a cache. VDO must be set to sync mode only when the underlying storage guarantees that data is in persistent storage when the write command completes: that is, the storage device has no volatile write cache, or its cache is write-through.
  async mode:
  When VDO is in async mode, VDO does not guarantee that data is in persistent storage when a write command completes. The file system or application must issue FLUSH or FUA requests at the critical points of every transaction to ensure data persists across a critical failure. VDO must be set to async mode if the underlying storage cannot guarantee data persistence when the command completes, that is, when the storage device has a volatile write cache.
  auto mode:
  auto mode automatically selects sync or async according to the device's properties.

  Internal processing of VDO write modes:
  When the kvdo kernel module runs in sync mode:
  ● It first writes the data in the request temporarily to the allocated block and then acknowledges the request.
  ● Once the request is acknowledged, it attempts to deduplicate the block by computing a hash signature of the block data, which is sent to the VDO index.
  ● If the VDO index contains a block with the same signature, kvdo reads the indicated block and performs a byte-by-byte comparison of the two blocks to verify that they are identical.
  ● If they are indeed identical, kvdo updates its block map so that the logical block points to the corresponding physical block, and frees the allocated temporary physical block.
  ● If the VDO index does not contain the signature of the block being written, or the indicated block does not actually contain the same data, kvdo updates its block map to make the temporary physical block permanent.
  When the kvdo kernel module runs in async mode:
  ● It acknowledges the request immediately instead of writing the data first.
  ● It then attempts to check the block in the same way as above.
  ● If the block turns out to be a duplicate, kvdo updates its block map and frees the allocated block. Otherwise, it writes the data in the request to the allocated block and updates the block map to make the physical block permanent.

  View the write mode used by the VDO volume:

vdo status --name=vdo-name | grep "write policy"
    Configured write policy: auto
        write policy: sync

# Configured write policy: one of sync, async, or auto
# write policy: the specific write mode VDO applied, i.e. sync or async

  Check for a volatile cache:
  You can check whether a device has a volatile cache by reading the file /sys/block/<block-device>/device/scsi_disk/<identifier>/cache_type:

cat /sys/block/sda/device/scsi_disk/7:0:0:0/cache_type
write back
cat /sys/block/sdb/device/scsi_disk/1:2:0:0/cache_type
None
cat /sys/block/sdc/device/scsi_disk/0\:3\:109\:0/cache_type 
write through

  ● Device sda reports a write-back cache, so use async mode for it
  ● Device sdb reports no write-back cache, so use sync mode for it

If the value of cache_type is None or write through, VDO should be configured to use sync write mode
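
  To survey the cache type of every SCSI disk at once, a small loop over sysfs works (a sketch; the identifier paths vary by controller):

for f in /sys/block/sd*/device/scsi_disk/*/cache_type; do
    echo "$f: $(cat "$f")"
done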

  Set VDO write mode:
  Set the write mode for the VDO volume, either for an existing volume or when creating a new volume

Using incorrect write modes may result in data loss after a power failure, system crash, or unexpected loss of contact with the disk

  To modify an existing VDO volume, use:

vdo changeWritePolicy --writePolicy=sync|async|async-unsafe|auto \
                        --name=vdo-name

  To specify the write mode when creating a VDO volume, add the --writePolicy=sync|async|async-unsafe|auto option to the vdo create command.

4.3.4. VDO volume recovery and reconstruction

  VDO volume recovery:
  VDO volume rebuilds complete automatically without manual intervention, and VDO applies different rebuild rules depending on the active write mode. When a VDO volume is started after an unclean shutdown, VDO verifies the consistency of the metadata on the volume and rebuilds part of the metadata if repair is needed.

  sync:
  If VDO is running on synchronous storage and the write policy is set to sync, all data written to the volume can be fully recovered.
  async:
  If the write policy is async, some writes may not be recovered if they were not made durable. Data is made durable by sending VDO a FLUSH command, or a write I/O tagged with the FUA (force unit access) flag. From user space this can be triggered by data-integrity operations such as fsync, fdatasync, sync, or umount.

In either mode, some unacknowledged or unflushed writes may also be rebuilt

  Automatic and manual recovery:
  When a VDO volume enters recovery operation mode, VDO will automatically rebuild the unclean VDO volume after it comes back online. This is called online recovery.
  If VDO cannot successfully recover a VDO volume, it places the volume in a read-only operating mode that remains in effect when the volume is restarted. You need to force a rebuild to resolve this issue manually.

  VDO operating mode:
  You can use the following method to check whether the VDO volume is running normally or recovering from errors.

vdostats --verbose vdo-name | grep mode
  operating mode                      : normal

  normal:
  This is the default operating mode. A VDO volume is always in normal mode unless one of the following states forces a different mode. A newly created VDO volume starts in normal mode.
  recovering:
  When a VDO volume does not save all of its metadata before shutting down, it automatically enters recovering mode the next time it starts. Typical reasons for entering this mode are a loss of power or a problem in the underlying storage device.
  In recovering mode, VDO repairs the reference count of each physical block on the device. Recovery usually does not take very long; the time depends on the size of the VDO volume, the speed of the underlying storage device, and how many other requests VDO has to handle at the same time. While recovering, the volume behaves with the following exceptions:
  ● Initially, the amount of space available for write requests may be limited. As more metadata is recovered, more free space becomes available.
  ● Data written while the volume is recovering may fail to deduplicate against data written before the crash if that data lies in a part of the volume that has not yet been recovered. VDO can compress data while recovering the volume, and you can still read or overwrite compressed blocks.
  ● During online recovery, some statistics are unavailable, for example blocks in use and blocks free. These statistics become available once the rebuild completes.
  ● Read and write response times may be slower than usual due to the ongoing recovery work.
  A VDO volume can be safely shut down while in recovering mode. If recovery does not complete before shutdown, the device enters recovering mode again the next time it is powered on.
  When the VDO volume has repaired all reference counts, it automatically exits recovering mode and enters normal mode. No administrator action is required.

  Recover a VDO volume online:
  You can recover a VDO volume online with the following command:

vdo start --name=vdo-name

  No additional steps are required; recovery runs in the background.

  Force an offline rebuild of VDO volume metadata:
  You can force an offline rebuild of the VDO volume metadata to recover after an unclean shutdown. Prerequisite: the VDO volume must be started.

This process may cause data loss on the volume

  Check whether the volume is in read-only mode:

vdo status --name=vdo-name | grep mode
        operating mode: normal

  If the volume is not in read-only mode, there is no need to force an offline rebuild; perform an online recovery instead.
  If the volume is running, stop it:

vdo stop --name=vdo-name

  Restart the volume with the --forceRebuild option:

vdo start --name=vdo-name --forceRebuild

  Delete failed VDO volumes:
  This process cleans up VDO volumes that are in an intermediate state. If a failure occurs while creating the volume, the volume is in an intermediate state. This may occur under the following circumstances, for example:
  ● System crash
  ● Power failure
  ● The administrator interrupts a running vdo create command

  To clean up, use the --force option to delete the volume whose creation failed:

vdo remove --force --name=vdo-name

  The --force option is required because, after the failed creation, the administrator may have changed the system configuration in a way that conflicts with the volume. Without the --force option, the vdo remove command fails.

4.3.5. Enabling or disabling compression in VDO

  VDO provides data compression; enabling it increases space savings. Compression is on by default for VDO volumes. VDO compresses unique data the first time it sees it; subsequent copies of data already stored are deduplicated without an additional compression step.
  Enable compression on a VDO volume (compression is enabled by default):

vdo enableCompression --name=vdo-name

  Disable compression on VDO volumes:
  To stop compression on an existing VDO volume, use the following command:

vdo disableCompression --name=vdo-name

  Alternatively, when creating a new volume, you can add the --compression=disabled option to the vdo create command to disable compression

4.3.6. Increasing the size of the VDO volume

  The physical size of a VDO volume can be increased to utilize more of the underlying storage capacity, or the logical size can be increased to provide more capacity on the volume.

  Physical size:
  The size of the underlying block device. VDO uses this storage for:
  ● User data, which may be deduplicated and compressed
  ● VDO metadata, such as the UDS index
  Available physical size:
  This is the portion of the physical size that VDO can use for user data.
  It equals the physical size minus the size of the metadata, minus the remainder left after dividing the volume into slabs of the specified slab size.
  Logical size:
  This is the provisioned size that the VDO volume presents to applications. It is usually larger than the available physical size. If the --vdoLogicalSize option is not specified, the logical volume is provisioned at a 1:1 ratio with the available physical size. For example, if you place a VDO volume on a 20GB block device, 2.5GB is reserved for the UDS index (with the default index size). The remaining 17.5GB holds VDO metadata and user data, so no more than 17.5GB of usable storage is available, and it may be less because of the metadata of the VDO volume itself.

VDO currently supports any logical volume size up to 254 times the physical volume, but no larger than 4PB

  VDO disk structure (figure omitted).
  Increase the logical volume size of a VDO volume:
  This process increases the logical size of a given VDO volume. You can first create a VDO volume with a logical size small enough that it is safe from running out of space. Over time, you can evaluate the actual data reduction rate and, if sufficient, increase the logical size of the VDO volume to take advantage of the space savings.

It is not possible to reduce the logical volume size of a VDO volume

  To increase the logical size, use:

# original VDO volume size

lsblk
NAME       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdc          8:32   0   1.1T  0 disk 
└─vdo-name 253:0    0    50G  0 vdo  /vodvolume
vdo growLogical --name=vdo-name \
                  --vdoLogicalSize=100G
lsblk
NAME       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdc          8:32   0   1.1T  0 disk 
└─vdo-name 253:0    0   100G  0 vdo  /vodvolume

  Increase the physical size of the VDO volume:
  You can increase the amount of physical storage available to a VDO volume; you cannot reduce it this way. Prerequisite: the capacity of the underlying block device must be larger than the current physical size of the VDO volume.
  Add new physical storage space to the VDO volume:

vdo growPhysical --name=vdo-name

4.3.7. Delete VDO volumes

  To delete a valid VDO volume:
  ● Unmount the file system and stop applications using storage in the VDO volume.
  ● To remove a VDO volume from your system, use:

vdo remove --name=vdo-name

  Delete VDO volumes that failed to be created:
  Clean up VDO volumes that are in an intermediate state. If a failure occurs while creating the volume, the volume is in an intermediate state. This may happen under the following circumstances, for example:
  ● System crash
  ● Power failure
  ● The administrator interrupts a running vdo create command
  To clean up, use the --force option to delete the volume whose creation failed:

vdo remove --force --name=vdo-name

  The --force option is required because, after the failed creation, the administrator may have changed the system configuration in a way that conflicts with the volume. Without the --force option, the vdo remove command fails.

4.4. Testing VDO space savings

4.4.1. Test VDO deduplication function

  First create a VDO test volume:
  Create a 2TiB VDO volume on a 1.1TB physical disk for testing:

vdo create --name=vdo-test \
             --device=/dev/sdd \
             --vdoLogicalSize=2T \
             --writePolicy=sync \
             --verbose

  ● To test VDO async mode on top of asynchronous storage, create the volume with the --writePolicy=async option
  ● To test VDO sync mode on top of synchronous storage, create the volume with the --writePolicy=sync option

lsblk
NAME       MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sdd          8:48   0   1.1T  0 disk 
└─vdo-test 253:1    0     2T  0 vdo  

  Format the new volume using the XFS or ext4 file system:

mkfs.xfs -K /dev/mapper/vdo-test

  Mount the formatted volume:

mkdir /mnt/vdo-test
mount /dev/mapper/vdo-test /mnt/vdo-test && \
  chmod a+rwx /mnt/vdo-test

4.4.2. Test VDO read and write performance

  Write 32 GiB of random data to the VDO volume:

dd if=/dev/urandom of=/mnt/vdo-test/testfile bs=4096 count=8388608
8388608+0 records in
8388608+0 records out
34359738368 bytes (34 GB) copied, 165.781 s, 207 MB/s

  Read data from a VDO volume and write it to another volume:

dd if=/mnt/vdo-test/testfile of=/tmp/testfile bs=4096
8388608+0 records in
8388608+0 records out
34359738368 bytes (34 GB) copied, 27.7768 s, 1.2 GB/s

  Compare these two files:

diff --report-identical-files /mnt/vdo-test/testfile /tmp/testfile 
Files /mnt/vdo-test/testfile and /tmp/testfile are identical

  The command should report that these files are identical
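
  To observe deduplication directly, write a second copy of the same file onto the VDO volume and compare vdostats output before and after; physical usage should barely grow while the savings percentage rises. A sketch continuing the same test flow:

vdostats --human-readable
cp /mnt/vdo-test/testfile /mnt/vdo-test/testfile-copy
sync
vdostats --human-readable    # 'Used' grows only slightly; 'Space saving%' increases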

4.4.3. Clean VDO test volume

  Unmount the file system created in the VDO volume:

umount /mnt/vdo-test

  Remove the VDO test volume from the system:

vdo remove --name=vdo-test
Removing VDO vdo-test
Stopping VDO vdo-test

  Verify that the volume has been deleted:

vdo list --all | grep vdo-test

4.5. Discard unused blocks

4.5.1. Types of block discard operations

  Physical discard operations are supported if the value in the file /sys/block/<device>/queue/discard_max_bytes is non-zero:

cat /sys/block/sdc/queue/discard_max_bytes 
0

  Block discard operations can run in different ways:
  Batch discard:
  Triggered explicitly by the user; discards all unused blocks in the selected file system.
  Online discard:
  Specified at mount time and triggered in real time without user intervention. Online discard operations only discard blocks that are transitioning from used to free.
  Periodic discard:
  A batch operation that a systemd service runs periodically.

4.5.2. Perform batch block discarding

  Prerequisites: the file system is mounted, and the block device underlying the file system supports physical discard operations.
  Perform discarding in the selected file system:

df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/vdo-name   50G   33M   50G   1% /vodvolume

fstrim /vodvolume

  To perform discarding on all mounted file systems, use:

fstrim --all

  If you are using a device that does not support the discard operation, or a logical device (LVM or MD) consisting of multiple devices, any of which does not support the discard operation:

fstrim /mnt/non_discard
fstrim: /mnt/non_discard: the discard operation is not supported

4.5.3. Enable online block discarding

  Online block discarding automatically discards unused blocks in all supported file systems without user intervention.
  To enable online discard at mount time, add the -o discard mount option:

mount -o discard device mount-point

  When mounting a file system permanently, add the discard option to its mount entry in the /etc/fstab file

4.5.4. Enable periodic block discarding

  A systemd timer can be enabled to periodically discard unused blocks in all supported file systems.
  Enable and start the systemd timer:

systemctl enable --now fstrim.timer

  Verify the status of the timer:

systemctl status fstrim.timer
● fstrim.timer - Discard unused blocks once a week
   Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: disabled)
   Active: active (waiting) since Fri 2023-07-14 04:33:38 EDT; 1 day 5h ago
     Docs: man:fstrim

Jul 14 04:33:38 localhost.localdomain systemd[1]: Started Discard unused blocks once a week.

4.6. Managing VDO volumes using the web console

4.6.1. Install and enable the web console

cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.5 (Ootpa)
yum install cockpit
systemctl enable cockpit.socket --now
systemctl status cockpit.socket

firewall-cmd --permanent --add-service=cockpit
firewall-cmd --reload

ss -tnlp | grep 9090
LISTEN    0         128                      *:9090                   *:*        users:(("systemd",pid=1,fd=28))       

  Log in to the web console:

  Just fill in the root account and password of the server

4.6.2. Creating a VDO volume in the web console

  Refer to the official documentation; these are mostly point-and-click operations in the web console.

4.6.3. Formatting VDO volumes in the web console

  Refer to the official documentation; these are mostly point-and-click operations in the web console.

4.6.4. Expanding VDO volumes in the web console

  Refer to the official documentation; these are mostly point-and-click operations in the web console.

5. AutoFS automatic mounting service

  When using Linux, accessing a storage device requires mounting it with the mount command and mapping it to a directory; only then can the storage medium be accessed and used. Remote storage served by Samba or NFS must also be mounted. Mounting is a necessary step for using external storage media or file systems, but keeping many resources mounted places a constant load on network and server resources and can reduce server performance.
  The autofs service solves this problem. autofs is a system daemon: mount information is written into its configuration files, and the system does not mount a storage medium until a user actually tries to access it, at which point autofs performs the mount automatically. All of this is transparent to the user, so autofs saves the server's network and hardware resources. AutoFS is a tool for automatically mounting file systems, typically used for NFS (Network File System) mounts.
   The steps are as follows:
   First, install the NFS server (the nfs-utils package) on serverb and create a shared directory to serve as shared storage for servera:

[root@serverb ~]# mkdir -p /rhome/ldapuser0
[root@serverb ~]# chmod 777 /rhome/ldapuser0/
[root@serverb ~]# vim /etc/exports
/rhome/ldapuser0 *(rw)

[root@serverb ~]# systemctl restart nfs-server.service
[root@serverb ~]# systemctl enable nfs-server.service

[root@serverb ~]# firewall-cmd --permanent --add-service=nfs
[root@serverb ~]# firewall-cmd --permanent --add-service=mountd
[root@serverb ~]# firewall-cmd --permanent --add-service=rpc-bind
[root@serverb ~]# firewall-cmd --reload

  Next, install autofs in servera and create an ldapuser0 user. The ldapuser0 user can automatically mount the NFS shared directory after logging in to the system.

[root@servera ~]# showmount -e serverb.lab.example.com
Export list for serverb.lab.example.com:
/rhome/ldapuser0 *
[root@servera ~]# useradd ldapuser0
[root@servera ~]# vim /etc/passwd   # change ldapuser0's home directory to /rhome/ldapuser0
ldapuser0:x:1002:1002::/rhome/ldapuser0:/bin/bash

  The main configuration file of the autofs service is /etc/auto.master. In it, specify the parent mount point and the map file that describes what is mounted beneath it, in the following format:

[root@servera ~]# yum install autofs.x86_64
[root@servera ~]# vim /etc/auto.master
/rhome  /etc/auto.nfs

[root@servera ~]# vim /etc/auto.nfs
ldapuser0          -rw       serverb.lab.example.com:/rhome/ldapuser0
[root@servera ~]# systemctl enable autofs.service --now
[root@servera ~]# systemctl restart autofs.service
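
  If serverb exports a separate directory for each user under /rhome, a wildcard map entry avoids listing every user by hand. In autofs map syntax, * matches whatever key is requested under /rhome and & substitutes that key into the server path (a sketch; adjust the export to your setup):

# /etc/auto.nfs
*    -rw    serverb.lab.example.com:/rhome/&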

  Verify the result: after logging in as ldapuser0, check that the NFS share from serverb has been mounted automatically.

[root@servera ~]# su - ldapuser0
Last login: Fri Sep 29 00:40:22 CST 2023 on pts/0
[ldapuser0@servera ~]$
[ldapuser0@servera ~]$ df -Th
Filesystem                               Type      Size  Used Avail Use% Mounted on
devtmpfs                                 devtmpfs  887M     0  887M   0% /dev
tmpfs                                    tmpfs     914M     0  914M   0% /dev/shm
tmpfs                                    tmpfs     914M   17M  897M   2% /run
tmpfs                                    tmpfs     914M     0  914M   0% /sys/fs/cgroup
/dev/vda3                                xfs       9.9G  1.6G  8.4G  16% /
/dev/vda2                                vfat      100M  6.8M   94M   7% /boot/efi
tmpfs                                    tmpfs     183M     0  183M   0% /run/user/1000
serverb.lab.example.com:/rhome/ldapuser0 nfs4      9.9G  1.6G  8.4G  16% /rhome/ldapuser0

Origin: blog.csdn.net/wangzongyu/article/details/133421024