Using ZFS on Linux

System Information

cat /etc/os-release

NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"

Disk Information

Three 1 TB SSDs are used in this article; their disk information is as follows:

Disk /dev/sdb: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk /dev/sdc: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Disk /dev/sdd: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
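The listing above is the style of output produced by fdisk; assuming the disks were enumerated that way, it can be reproduced like this (lsblk gives a more compact overview):

sudo fdisk -l /dev/sdb /dev/sdc /dev/sdd
lsblk -d -o NAME,SIZE,TYPE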

Note ⚠️: This article focuses on basic ZFS deployment and usage. For ZFS concepts, please see the reference links at the end of this article.

Installing the ZFS Service

apt install zfsutils-linux -y

apt install nfs-kernel-server
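Before continuing, it is worth a quick sanity check that the kernel module and userland tools are in place (these verification commands are my addition, not a step from the original):

# confirm the ZFS kernel module is available
modinfo zfs | grep -iw version
# show the installed userland package version
dpkg -s zfsutils-linux | grep '^Version'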

Creating a Storage Pool

In ZFS, a storage pool is roughly the equivalent of a RAID array. Creating and using pools is simple and flexible, and ZFS provides many parameters to choose from.

Creating pools for different usage scenarios

  • (1) RAID0 is achieved by simply creating a plain striped pool: sudo zpool create your-pool /dev/sdb /dev/sdc /dev/sdd

  • (2) RAID1 is implemented with the mirror keyword: sudo zpool create your-pool mirror /dev/sdb /dev/sdc

  • (3) RAID5 functionality is implemented in ZFS as RAIDZ1: sudo zpool create your-pool raidz1 /dev/sdb /dev/sdc /dev/sdd

  • (4) RAID6 functionality is implemented in ZFS as RAIDZ2: sudo zpool create your-pool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

  • (5) RAID10 functionality is implemented by striping two mirror vdevs: sudo zpool create your-pool mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf

Note ⚠️:

  • (1) This article covers basic ZFS deployment and usage; for RAID concepts and background, please refer to the reference links at the end of this article;

  • (2) With RAID1, disk utilization is only 50%;

  • (3) RAID5 requires at least 3 disks;

  • (4) RAID6 is nearly identical to RAID5, but requires at least 4 disks;

  • (5) RAID10 requires at least 4 disks but provides only half of the raw space, i.e., 50% disk utilization; rough usable-capacity arithmetic for all these layouts is sketched below;
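As a rough capacity check for the layouts above (simple arithmetic, added here for illustration): with N disks of size S each, the usable space is approximately:

# RAID0  (stripe)          : N * S         e.g. 3 x 1 TB -> ~3 TB
# RAID1  (mirror)          : S             e.g. 2 x 1 TB -> ~1 TB
# RAIDZ1 (RAID5-like)      : (N - 1) * S   e.g. 3 x 1 TB -> ~2 TB
# RAIDZ2 (RAID6-like)      : (N - 2) * S   e.g. 4 x 1 TB -> ~2 TB
# RAID10 (striped mirrors) : (N * S) / 2   e.g. 4 x 1 TB -> ~2 TB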

Creating a RAIDZ1 Pool in Practice

The main steps are as follows:

(1) Look up the IDs of the raw disks with ll /dev/disk/by-id/. Example output:

wwn-0x5002498e20d23d09 -> ../../sdb

wwn-0x5002498e29d76d78 -> ../../sdc

wwn-0x5002498e27d45d91 -> ../../sdd

(2) Create a RAIDZ1 pool

sudo zpool create -f data_ssd raidz wwn-0x5002498e20d23d09 wwn-0x5002498e29d76d78 wwn-0x5002498e27d45d91

Note ⚠️: running df -h shows that the data_ssd pool has been created and mounted. The relevant line of output is:

data_ssd 1.8T 128K 1.8T 1% /data_ssd

A few points worth noting:

  • The three raw disks total roughly 3 TB, but only about 2 TB is usable. This is because RAIDZ1 (the equivalent of RAID5) reserves one disk's worth of capacity for parity;

  • The mount path /data_ssd does not need to be created in advance; to choose a different path at creation time, see the sketch after this list;
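For the second point: zpool create accepts -m to choose the mount point at creation time. A minimal sketch (the path /mnt/data is a hypothetical example):

# same RAIDZ1 pool, but mounted at /mnt/data instead of /data_ssd
sudo zpool create -f -m /mnt/data data_ssd raidz wwn-0x5002498e20d23d09 wwn-0x5002498e29d76d78 wwn-0x5002498e27d45d91

An existing pool can also be moved later with zfs set mountpoint=/mnt/data data_ssd.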

(3) View pool status

Run sudo zpool status. The output is as follows:

  pool: data_ssd
 state: ONLINE
  scan: none requested
config:

    NAME                        STATE     READ WRITE CKSUM
    data_ssd                    ONLINE       0     0     0
      raidz1-0                  ONLINE       0     0     0
        wwn-0x5002498e20d23d09  ONLINE       0     0     0
        wwn-0x5002498e29d76d78  ONLINE       0     0     0
        wwn-0x5002498e27d45d91  ONLINE       0     0     0

errors: No known data errors
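Besides zpool status, zpool list gives a capacity-oriented summary of the pool; note that for RAIDZ it reports the raw size of all member disks, before parity is subtracted:

sudo zpool list data_ssd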

(4) Enable compression on the pool

zfs set compression=on data_ssd
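compression=on uses the default algorithm; an algorithm can also be named explicitly, lz4 being a common choice (this variant is my addition, not a step from the original):

# select lz4 explicitly, then confirm the setting took effect
sudo zfs set compression=lz4 data_ssd
zfs get compression data_ssd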

(5) Enable NFS sharing on the pool

zfs set sharenfs=on data_ssd

Once sharing is enabled, the ZFS file system can be accessed by remote hosts over NFS; SMB sharing works the same way through the sharesmb property ~
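sharenfs=on exports the file system with default options; the property also accepts standard export options, and a client mounts it like any NFS share. A sketch (the addresses are placeholders):

# restrict the export to one subnet
sudo zfs set sharenfs="rw=@192.168.1.0/24" data_ssd
# verify the export is active on the server
showmount -e localhost
# on a client host, replace 192.168.1.10 with the server's address
sudo mount -t nfs 192.168.1.10:/data_ssd /mnt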

(6) View the properties of the storage pool

Run sudo zfs get all data_ssd. A few lines are listed here for reference:

NAME      PROPERTY              VALUE                  SOURCE
data_ssd  type                  filesystem             -
data_ssd  creation              Thu Aug 15  7:07 2019  -
data_ssd  used                  21.5G                  -
data_ssd  available             1.73T                  -
data_ssd  referenced            30.6K                  -
data_ssd  compressratio         1.00x                  -
data_ssd  mounted               yes                    -
data_ssd  quota                 none                   default
data_ssd  reservation           none                   default
data_ssd  recordsize            128K                   default
data_ssd  mountpoint            /data_ssd              default
data_ssd  sharenfs              on                     local
data_ssd  checksum              on                     default
data_ssd  compression           on                     local

Note ⚠️: we can see that compressratio and sharenfs are now in effect ~
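Instead of dumping everything, zfs get also accepts a comma-separated list of just the properties of interest:

zfs get compression,compressratio,sharenfs data_ssd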

(7) Create a ZFS file system

zfs create data_ssd/test

Note ⚠️: the mount path /data_ssd/test does not need to be created in advance.
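zfs create also accepts -o to set properties at creation time. A sketch (data_ssd/test2 and the 100G quota are hypothetical values):

# create a file system capped at 100 GB with its own compression setting
sudo zfs create -o quota=100G -o compression=lz4 data_ssd/test2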

(8) View ZFS file system information

zfs get all data_ssd/test

(9) Disable compression on the ZFS file system

zfs set compression=off data_ssd/test

(10) View space usage for ZFS pools and file systems

zfs list or zfs list data_ssd/test
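zfs list can also select specific columns with -o, for example:

zfs list -o name,used,available,mountpoint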

(11) Delete a ZFS file system

zfs destroy data_ssd/test
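If the file system still has snapshots or child file systems, destroy will refuse; the -r flag removes them recursively (use with care):

sudo zfs destroy -r data_ssd/test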

(12) Delete the ZFS pool

zpool destroy data_ssd
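If the goal is only to detach the pool without erasing it, for example to move the disks to another machine, export/import is the non-destructive alternative:

# detach the pool, leaving the data intact on the disks
sudo zpool export data_ssd
# re-attach it later
sudo zpool import data_ssd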

Extensions

  • View a disk's serial number in the system, e.g. for disk sdc: hdparm -i /dev/sdc

  • Test disk read/write speed with ZFS compression disabled/enabled:

time dd if=/dev/zero bs=1024000 count=100000 of=100GB.file   <== see the reference links
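One caveat (my note, not from the original): /dev/zero compresses almost perfectly, so with compression enabled this dd run measures the compression path more than the disks. Running the same command with compression toggled makes the difference visible:

# with compression on, the zeros barely touch the disks
sudo zfs set compression=on data_ssd
time dd if=/dev/zero bs=1024000 count=100000 of=/data_ssd/100GB.file
# repeat with compression off for a raw-disk comparison
sudo zfs set compression=off data_ssd
time dd if=/dev/zero bs=1024000 count=100000 of=/data_ssd/100GB-nocomp.file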

Reference Documents

Finally

ZFS is very powerful, with rich features and application scenarios. The operations above are far from exhaustive; I will keep expanding and refining this article over time. If anything here is wrong, corrections are welcome, thank you ~

Also, if any of the referenced documentation infringes on a copyright, please contact me and it will be removed immediately! Thanks to open source; embrace open source ~


Source: blog.51cto.com/wutengfei/2429887