Proxmox VE 7.0 advanced installation and system disk partitioning - ZFS (Part 2)

2.2. ZFS file system installation method

Starting with Proxmox VE 3.4, ZFS has been available as an optional file system, including as the root file system. The official Proxmox VE ISO image already contains the packages ZFS requires, so users can use ZFS directly without compiling anything by hand.

2.2.1. ZFS installation

Proxmox VE 7.0 supports installing onto ZFS. Unlike an ext4/xfs installation, ZFS does not manage physical storage through a volume manager (LVM); it manages physical storage through storage pools (ZFS pools).

ZFS aggregates storage devices into storage pools instead of requiring the creation of virtual volumes. A storage pool retains the physical characteristics of its underlying devices, such as the RAID level, and acts as a general data repository from which file systems can be created. File systems are no longer tied to a single device; all file systems in a pool share the pool's disk space.

Step 1: Select zfs(RAID1)

Select the file system "zfs(RAID1)" to install the system, as shown in Figure 1.


Figure 1. zfs(RAID1) installation

ZFS offers several software RAID levels, which is especially handy if your server does not have a hardware RAID card. You can set the ZFS RAID level through the Options button and pick the disks from the disk list that should make up the ZFS file system. Note: ZFS should not be combined with any hardware RAID controller, as doing so may result in data loss.

We choose zfs (RAID1) here. RAID1 requires at least two hard disks (you can look up the characteristics of RAID1 with a search engine). In other words, the selected target disks form a RAID1 group: the usable space equals the capacity of a single disk, with one disk serving as the boot disk and the other holding a mirrored copy of its data.

When installing with the Proxmox VE installer, you can choose ZFS as the root file system; you then also need to choose the RAID level during installation. Proxmox VE 7.0 supports six RAID levels for the ZFS file system, as shown in Figure 2.


Figure 2. RAID level types

zfs(RAID0): Also known as "striping". In this mode the capacity of the ZFS volume is the sum of the capacities of all disks. RAID0 provides no redundancy, however, so the failure of any single disk makes the whole volume unusable. This mode requires at least one disk.

zfs(RAID1): Also known as "mirroring". In this mode data is written to all disks simultaneously. It requires at least two disks of the same capacity, and the capacity of the whole volume equals that of a single disk.

zfs(RAID10): A combination of RAID0 and RAID1. This mode requires at least four disks.

zfs(RAIDZ-1): A ZFS variant of RAID5. It provides single parity, i.e. it tolerates the failure of one disk. This mode requires at least three disks.

zfs(RAIDZ-2): A ZFS variant comparable to RAID6. It provides double parity, tolerating the failure of two disks. This mode requires at least four disks.

zfs(RAIDZ-3): A triple-parity ZFS variant, tolerating the failure of three disks. This mode requires at least five disks.
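
After installation you can verify from the shell which layout the installer actually built. A minimal sketch; the pool name rpool is the installer default, while the device names and output below are illustrative examples for a zfs(RAID1) install:

```bash
# Show the layout and health of the root pool created by the installer
zpool status rpool

# Example output for zfs(RAID1) (device names will vary):
#   pool: rpool
#  state: ONLINE
# config:
#       NAME        STATE     READ WRITE CKSUM
#       rpool       ONLINE       0     0     0
#         mirror-0  ONLINE       0     0     0
#           sda3    ONLINE       0     0     0
#           sdb3    ONLINE       0     0     0
```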

If you select the "Advanced Options" button during installation, you can further tune ZFS's advanced configuration parameters.

Step 2: ZFS Advanced Configuration Parameters

The Proxmox VE 7.0 installer automatically creates a ZFS storage pool named rpool. A ZFS installation creates no swap space by default, so it is strongly recommended to give the host enough physical memory to avoid running short. If you really want swap, you can either leave some unpartitioned space during installation to create a swap partition, or manually create a swap zvol after installation; be aware that swap on ZFS can cause problems (see the "SWAP on ZFS" section on page 53 of the "pve-admin-guide-7" user manual).
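
If you do decide to add swap afterwards, the "SWAP on ZFS" section of the manual describes a zvol-based recipe along the following lines. This is a hedged sketch (the 8G size is an arbitrary example; check the manual's warnings before using it):

```bash
# Create an 8 GiB zvol with properties suited to swap (size is an example)
zfs create -V 8G -b "$(getconf PAGESIZE)" \
    -o compression=zle -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o secondarycache=none rpool/swap

# Format and enable it as swap space
mkswap -f /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
```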

If you want Proxmox VE to automatically partition and format the target disks, you can ignore the "Advanced Options" button and simply click "Next" to install.

If you want to adjust the ZFS parameters, choose "Advanced Options". The dialog is pre-filled based on your current disks, so you can also install directly without changing anything, as shown in Figure 3.


Figure 3. ZFS advanced configuration parameters

The advanced configuration parameters for ZFS are as follows:

ashift: Defines the ashift value of the storage pool. The ashift value should be at least as large as the sector size of the disks in the pool (2^ashift equals the sector size in bytes), and at least as large as the sector size of any disk that might later be added to the pool (for example, when replacing a failed disk).

For example, a 512-byte sector corresponds to ashift = 9 (2^9 = 512), and a 4096-byte sector corresponds to ashift = 12 (2^12 = 4096).

compress: Defines whether compression is enabled for rpool.

checksum: Defines the checksum algorithm used by rpool.

copies: Defines the number of copies of each block that rpool stores. This parameter is not a substitute for disk-level redundancy; see the man page for the reasons and configuration syntax.

hdsize: Defines how much of the target disk to use. By setting this parameter you can reserve some space on the disk for other uses (such as creating a swap partition). hdsize applies only to the bootable disks, i.e. the first disk or mirror for RAID0, RAID1 or RAID10, and all disks in RAID-Z1/2/3.
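
After installation, most of these values can be read back from the command line. A small sketch (outputs are examples and depend on what you chose in the installer):

```bash
# The ashift the pool was created with
zpool get ashift rpool

# Compression and checksum algorithms in effect on the root pool
zfs get compression,checksum rpool

# Physical/logical sector sizes of a disk, to sanity-check ashift
lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda
```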

Step 3: ZFS Performance Tips

ZFS consumes a lot of memory. If you want to use ZFS for storage, plan on at least 8GB of memory for ZFS, and in production give it as much memory as the budget allows; a common rule of thumb is 8GB as a base plus roughly 1GB for every 1TB of raw disk capacity. If you want to give ZFS a dedicated cache disk or log disk, use an enterprise-grade SSD, which can improve overall performance considerably.
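
Conversely, you can cap how much memory the ZFS ARC cache may take, as described in the "Limit ZFS Memory Usage" part of the admin guide. A sketch assuming an 8 GiB cap (adjust the figure to your own sizing rule):

```bash
# Cap the ZFS ARC at 8 GiB for the running system
echo $((8 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persistent across reboots
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u
```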

2.2.2. View ZFS installation disk partition parameters

After the installation of Proxmox VE 7.0 is complete, let's take a look at the disk partitions of the Proxmox VE server host, as shown in Figure 4, Figure 5 and Figure 6.


Figure 4. Default Disk Partition - Web UI


Figure 5. Default disk partition - system disk


Figure 6. Default disk partition - unpartitioned disk

From Figure 5 and Figure 6 we can see that only real disk devices and their partitions, such as /dev/sda, are present; there are no logical devices like /dev/mapper.
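
The same can be confirmed from the shell. A sketch; the partition sizes below are typical installer defaults and will vary with your disk:

```bash
lsblk
# NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
# sda      8:0    0    xxG  0 disk
# |-sda1   8:1    0  1007K  0 part   <- BIOS boot
# |-sda2   8:2    0   512M  0 part   <- EFI
# `-sda3   8:3    0    xxG  0 part   <- ZFS (rpool)
#
# Note: no dm-* / /dev/mapper entries appear, unlike an LVM-based install.
```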

We can also see in the WEB UI that there are no logical volumes under LVM or LVM-Thin. This is because ZFS does not use LVM for management; it uses storage pools instead, as shown in Figure 7.


Figure 7. ZFS no longer uses LVM management

2.2.3. Default storage location

With a ZFS installation, Proxmox VE uses ZFS as its storage layer. After the installer creates the BIOS boot and EFI partitions on the target disk, it automatically creates a ZFS storage pool named rpool on the third partition, /dev/sda3. In other words, rpool is a disk group backed directly by a physical partition, as shown in Figure 8 and Figure 9.


Figure 8. The ZFS storage pool is built on the sda3 partition


Figure 9. ZFS storage pool
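
From the shell, the fact that rpool sits on the third partition can be checked directly. A sketch (example output, single-disk layout):

```bash
# The vdev(s) that make up rpool, with their sizes
zpool list -v rpool

# sda3 is tagged as a ZFS member
lsblk -o NAME,FSTYPE /dev/sda
# NAME   FSTYPE
# sda
# |-sda1
# |-sda2 vfat
# `-sda3 zfs_member
```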

On top of the rpool storage pool (/dev/sda3), Proxmox VE then sets up two storage entries: a directory storage named local, used mainly to hold VZDump backup files, ISO images, container templates and the like; and a ZFS pool storage named local-zfs, used mainly to hold block-based virtual machine images and container volumes, i.e. virtual machine disks.

The storage paths are as follows:

ISO image storage path: /var/lib/vz/template/iso

Backup storage path: /var/lib/vz/dump/

Virtual machine volume storage: /dev/rpool/data
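
These two storage entries are defined in /etc/pve/storage.cfg. On a default ZFS install the file looks roughly like this (a sketch of the typical defaults; the exact content types may differ on your system):

```bash
cat /etc/pve/storage.cfg
# dir: local
#         path /var/lib/vz
#         content iso,vztmpl,backup
#
# zfspool: local-zfs
#         pool rpool/data
#         content images,rootdir
#         sparse 1
```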

The information for the local and local-zfs storage entries in the WEB UI is shown in Figure 10, Figure 11 and Figure 12.


Figure 10. Storage contents of local and local-zfs


Figure 11. Local storage content and space


Figure 12. Storage content and space of local-zfs

As mentioned above, both local and local-zfs are built on top of the rpool storage pool (/dev/sda3). In other words, rpool is a shared pool used by both local and local-zfs. Is that really the case? Let's verify it as follows: upload an ISO image to local and create a virtual machine on local-zfs; the capacity used by local plus the capacity used by local-zfs should then equal the capacity used by the rpool pool. The formula is as follows:

space used by local + space used by local-zfs = space used by rpool

Step 1: View the space used by local

In local, after uploading an Ubuntu operating system image, we can see that the space used is 3.43GB, as shown in Figure 13.


Figure 13. Local used capacity

Step 2: View the space used by local-zfs

On local-zfs, we create a virtual machine and install an operating system on it; local-zfs then shows 0.46GB of space used, as shown in Figure 14.


Figure 14. The used capacity of local-zfs

Step 3: View the space used by rpool

Checking the rpool storage pool, we see that its used space is 3.91GB, as shown in Figure 15.


Figure 15. The used capacity of rpool

From the above data, the space used by local (3.43GB) plus the space used by local-zfs (0.46GB) adds up, apart from a small amount of pool metadata overhead, to the space used by rpool (3.91GB). This further verifies that local and local-zfs share the rpool storage pool.
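
The same check can be made from the shell. A sketch; on the test machine the outputs would show this article's example values:

```bash
# Space used by the local directory storage (ISO images, templates, dumps)
du -sh /var/lib/vz

# Space used by the VM volumes under local-zfs (dataset rpool/data)
zfs list -o name,used rpool/data

# Total space allocated in the pool
zpool list -o name,allocated rpool
```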

2.2.4. Create a new ZFS storage pool zfs-pool

We have just verified that the capacity used by local plus the capacity used by local-zfs equals the capacity used by rpool, so rpool is a storage pool shared by local and local-zfs. So what are the proportions of local and local-zfs within rpool, and can they be set manually? As far as I can tell, they cannot be set either in the WEB UI or on the command line; I checked the official documentation and searched the web, but found no information on this.

In fact, having local and local-zfs share one rpool storage pool is not bad in itself. The real problem is that rpool is created on /dev/sda3 of the boot disk, which is not ideal. We can divide things differently: give local exclusive use of the rpool storage pool, i.e. of /dev/sda3, and create local-zfs on other disks.

Step 1: First clear the virtual machine files out of local-zfs, as shown in Figure 16 and Figure 17.


Figure 16. Delete virtual machine files


Figure 17. Confirm deletion of virtual machine files

Step 2: Delete the local-zfs storage pool


Figure 18. Deleting the local-zfs storage pool

After the local-zfs storage pool is deleted, the big rpool storage pool remains: it is not removed, because local still uses it, as shown in Figure 19.


Figure 19. The rpool storage pool is still reserved

With local-zfs gone, local has the entire rpool storage pool to itself, leaving plenty of space for ISO images, container templates and so on.
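
The same step can also be done on the command line with the pvesm storage manager. Note that this removes only the storage definition, not the underlying pool or its data; a sketch:

```bash
# Remove the local-zfs storage definition
pvesm remove local-zfs

# The rpool pool itself is untouched and still online
zpool status rpool
```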

Step 3: Check which hard drives are not in use

Before creating a storage pool for virtual machine files, we need to know which disks are not yet in use, so that we can combine them into a pool, as shown in Figure 20.


Figure 20. Disks that are not currently being used
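
From the shell, unused disks are typically those with no partitions and an empty FSTYPE column. A quick sketch (device names are examples):

```bash
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
# Disks such as sdb/sdc/sdd/sde that show no partitions and no FSTYPE
# are candidates for a new storage pool.
```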

Step 4: Create a lisq-zfs storage pool for storing virtual machine files

As shown in Figure 21, on the ZFS storage pool interface, click the "Create: ZFS" button to create a ZFS storage pool.


Figure 21. Creating a new ZFS storage pool

In the creation dialog, enter "lisq-zfs" as the name, choose "RAIDZ2" as the RAID level, and select the four unused disks, as shown in Figure 22.


Figure 22. Create lisq-zfs storage pool
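
The same result can be achieved on the command line; a hedged sketch, assuming the four unused disks are /dev/sdb through /dev/sde (the WEB UI's "Create: ZFS" button performs both steps at once):

```bash
# Create a RAIDZ2 pool across the four unused disks
zpool create lisq-zfs raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Register it with Proxmox VE as a zfspool storage for VM disks
pvesm add zfspool lisq-zfs --pool lisq-zfs --content images,rootdir
```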

After the lisq-zfs storage pool is created, its details appear in the window on the right, and its storage ID appears in the navigation pane on the left, as shown in Figure 23 and Figure 24.


Figure 23. lisq-zfs storage pool


Figure 24. lisq-zfs storage pool details

In the "Data Center→Storage" interface, we can see that lisq-zfs has been added automatically, as shown in Figure 25.


Figure 25. lisq-zfs added automatically

Step 5: Create a virtual machine in the lisq-zfs storage pool

Create a virtual machine on the lisq-zfs storage pool; the virtual machine's files are then stored on lisq-zfs, as shown in Figure 26, Figure 27 and Figure 28.


Figure 26. Creating a virtual machine in lisq-zfs


Figure 27. Virtual machine files stored in lisq-zfs


Figure 28. Virtual machine files stored in lisq-zfs

2.2.5. Summary of PVE default storage

local is a directory storage whose path is /var/lib/vz. Because /var/lib/vz lives on a dataset of rpool, ISO images written there end up in the rpool storage pool. The path mapping is shown in Figure 29.


Figure 29. Local storage path
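
This mapping can be seen from the shell: /var/lib/vz sits on the root dataset of rpool. A sketch (sizes are examples; the dataset name rpool/ROOT/pve-1 is the installer default):

```bash
df -h /var/lib/vz
# Filesystem        Size  Used Avail Use% Mounted on
# rpool/ROOT/pve-1   xxG   xxG   xxG   x% /
```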

local-zfs is a storage pool, i.e. a disk pool on physical devices. When the system creates a virtual machine, it allocates space for the virtual machine's hard disk from the local-zfs storage pool; the virtual machine's hard disk is effectively block storage inside local-zfs, as shown in Figure 30.


Figure 30. local-zfs stored in rpool

From here we can see that local-zfs also has a storage path: its data appears under /dev/rpool/data, as shown in Figure 31.


Figure 31. Local-zfs storage path
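
Both the dataset hierarchy and the device path can be inspected from the shell. A sketch, with vm-100-disk-0 as a hypothetical VM disk:

```bash
# VM volumes live under the rpool/data dataset
zfs list -r rpool/data

# Each zvol also gets block-device nodes (present once zvols exist)
ls -l /dev/rpool/data/ /dev/zvol/rpool/data/
```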

Do these mapping relationships feel messy? That is understandable: both the Proxmox VE WEB UI and the CLI present this information somewhat incompletely, so many people get confused and have to fill in the gaps with imagination. Below, I have sorted these mapping relationships into a table that I hope will help you learn Proxmox VE, as shown in Figure 32.


Figure 32. Default storage point relationship

Reference: the "pve-admin-guide-7" user manual on the Proxmox VE official website.

Origin: blog.csdn.net/jianghu0755/article/details/129651441