Implement advanced storage capabilities

Managing tiered storage with Stratis

Stratis

Stratis is a local storage management solution for Linux. It is designed to make it more convenient to perform the initial configuration of storage, modify the storage configuration, and use advanced storage features.

Stratis runs as a service that manages a pool of physical storage devices and transparently creates and manages volumes for newly created file systems.

Using Stratis Storage

To manage file systems with the Stratis storage management solution, install the stratis-cli and stratisd packages.

View block devices

[root@clear ~]# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda    252:0    0   10G  0 disk 
├─vda1 252:1    0    1M  0 part 
├─vda2 252:2    0  100M  0 part /boot/efi
└─vda3 252:3    0  9.9G  0 part /
vdb    252:16   0   10G  0 disk 
vdc    252:32   0   10G  0 disk 
vdd    252:48   0   10G  0 disk 

Install and enable
[root@clear ~]# yum install stratis-cli stratisd -y
[root@clear ~]# systemctl enable --now stratisd

Assemble block storage into a Stratis pool
# Create a pool
[root@clear ~]# stratis pool create pool1 /dev/vdb

Each pool is a subdirectory of the /stratis directory

View the list of available pools (the Total Physical column shows total, used, and free space)

[root@clear ~]# stratis pool list
Name                  Total Physical
pool1  10 GiB / 37.63 MiB / 9.96 GiB

Add additional block devices to the pool

[root@clear ~]# stratis pool add-data pool1 /dev/vdc

View block devices in the pool

[root@clear ~]# stratis blockdev list pool1
Pool Name  Device Node  Physical Size  Tier
pool1      /dev/vdb            10 GiB  Data
pool1      /dev/vdc            10 GiB  Data
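
In addition to the Data tier shown above, Stratis can keep faster devices (such as SSDs) in a separate cache tier. A minimal sketch, assuming a spare device /dev/vde is available (the device name is an assumption, and on newer stratis-cli versions the cache must first be initialized with stratis pool init-cache):

# Hypothetical example: add a faster device to the pool's cache tier (/dev/vde is an assumption)
[root@clear ~]# stratis pool add-cache pool1 /dev/vde
# The device then shows up in stratis blockdev list with Tier "Cache"
[root@clear ~]# stratis blockdev list pool1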

Managing Stratis file systems

Create a file system for the pool

[root@clear ~]# stratis filesystem create pool1 fs1

The Stratis file system link is located under /stratis/pool1.

View a list of available file systems

[root@clear ~]# stratis filesystem list
Pool Name  Name  Used     Created            Device              UUID                            
pool1      fs1   546 MiB  Aug 06 2023 11:50  /stratis/pool1/fs1  649e6e0f7ddd41cd8e7312536d99ee3c

Create a snapshot of the file system

[root@clear ~]# stratis filesystem snapshot pool1 fs1 snapshot1
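
Because a Stratis snapshot is itself an ordinary file system, it can be mounted and read independently of the original. A minimal sketch, assuming the snapshot link appears under /stratis/pool1 like the other file systems in the pool (the mount point /snapdir is an assumption):

# Hypothetical example: mount the snapshot to inspect or recover files (/snapdir is an assumption)
[root@clear ~]# mkdir /snapdir
[root@clear ~]# mount /stratis/pool1/snapshot1 /snapdir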

Persistently mount Stratis file system

To ensure that the Stratis file system is mounted persistently, edit /etc/fstab and specify the file system information.

[root@clear ~]# lsblk --output=UUID /stratis/pool1/fs1
UUID
649e6e0f-7ddd-41cd-8e73-12536d99ee3c

Persistent mount example

[root@clear ~]# cat /etc/fstab | grep stratis
UUID=649e6e0f-7ddd-41cd-8e73-12536d99ee3c	/dir1	xfs	defaults,x-systemd.requires=stratisd.service	0	0

The x-systemd.requires=stratisd.service option delays mounting until systemd starts stratisd.service during startup.

[root@clear ~]# mkdir /dir1
[root@clear ~]# mount -a
[root@clear ~]# df -h
Filesystem                                                                                       Size  Used Avail Use% Mounted on
devtmpfs                                                                                         887M     0  887M   0% /dev
tmpfs                                                                                            914M     0  914M   0% /dev/shm
tmpfs                                                                                            914M   17M  897M   2% /run
tmpfs                                                                                            914M     0  914M   0% /sys/fs/cgroup
/dev/vda3                                                                                        9.9G  1.9G  8.1G  19% /
/dev/vda2                                                                                        100M  6.8M   94M   7% /boot/efi
tmpfs                                                                                            183M     0  183M   0% /run/user/0
/dev/mapper/stratis-1-5ef6680b3e394818a4338f650e374f0b-thin-fs-649e6e0f7ddd41cd8e7312536d99ee3c  1.0T  7.2G 1017G   1% /dir1

The df command reports the size of any new XFS file system managed by Stratis as 1 TiB, regardless of the amount of physical storage currently allocated to the file system.

View actual storage

[root@clear ~]# stratis pool list
Name                  Total Physical
pool1  20 GiB / 1.11 GiB / 18.89 GiB

Compress and deduplicate stored data using VDO

Virtual Data Optimizer

VDO optimizes the space that data occupies on block devices. It is a Linux device-mapper driver that reduces disk space usage on block devices and minimizes data duplication, saving disk space and potentially even increasing data throughput. VDO uses two kernel modules: kvdo, which transparently controls data compression, and uds, which handles deduplication.

VDO sits on top of a block device (RAID or local disk).

Implement Virtual Data Optimizer

Logical devices created using VDO are called VDO volumes. A VDO volume is similar to a disk partition; the volume can be formatted into the required file system and mounted, or the VDO volume can be used as an LVM physical volume.

Enable VDO
# Install the vdo and kmod-kvdo packages
[root@clear ~]# yum install vdo kmod-kvdo -y

Create VDO volume
[root@clear ~]# vdo create --name=vdo1 --device=/dev/vdd --vdoLogicalSize=5G
Creating VDO vdo1
      The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO vdo1
Starting compression on VDO vdo1
VDO instance 0 volume is ready at /dev/mapper/vdo1

You can now format the volume with the required file system and mount it, as shown in the sketch below.
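
A minimal sketch of formatting and mounting the volume, assuming XFS and a mount point of /vdodir (both are assumptions). The -K option skips discarding blocks at mkfs time, which is unnecessary on a freshly created VDO volume, and a persistent mount should use x-systemd.requires=vdo.service so mounting waits for the vdo service.

# Hypothetical example: format the VDO volume with XFS and mount it (/vdodir is an assumption)
[root@clear ~]# mkfs.xfs -K /dev/mapper/vdo1
[root@clear ~]# mkdir /vdodir
[root@clear ~]# mount /dev/mapper/vdo1 /vdodir
# A persistent mount in /etc/fstab would look similar to the Stratis entry above:
# /dev/mapper/vdo1  /vdodir  xfs  defaults,x-systemd.requires=vdo.service  0 0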

Analyze VDO volumes
[root@clear ~]# vdo status --name=vdo1
VDO status:
  Date: '2023-08-06 12:45:25-04:00'
  Node: clear.domain250.example.com
Kernel module:
  Loaded: true
  Name: kvdo
  Version information:
    kvdo version: 6.2.2.117
Configuration:
  File: /etc/vdoconf.yml
  Last modified: '2023-08-06 12:42:31'
VDOs:
  vdo1:
    Acknowledgement threads: 1
    Activate: enabled
    Bio rotation interval: 64
    Bio submission threads: 4
    Block map cache size: 128M
    Block map period: 16380
    Block size: 4096
    CPU-work threads: 2
    Compression: enabled
    Configured write policy: auto
    Deduplication: enabled
    Device mapper status: 0 10485760 vdo /dev/vdd normal - online online 1049638 2621440
...output omitted...
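
To see how much physical space the volume actually uses and the current space savings, run vdostats against the volume (the figures will vary with the data stored, so no sample output is shown here):

# Show physical usage and space savings in human-readable units
[root@clear ~]# vdostats --human-readable /dev/mapper/vdo1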

Origin blog.csdn.net/weixin_51882166/article/details/132133642