1. Add an 80 GB SCSI hard disk to the host
2. Divide the new disk into three 20 GB partitions
[root@localhost ~]# parted /dev/sdb
(parted) mklabel
New disk label type? gpt
(parted) mkpart
Partition name?  []? sdb1
File system type?  [ext2]? ext4
Start? 1G
End? 20G
(parted) mkpart
Partition name?  []? sdb2
File system type?  [ext2]? ext4
Start? 21G
End? 40G
(parted) mkpart
Partition name?  []? sdb3
File system type?  [ext2]? ext4
Start? 41G
End? 60G
(parted) p
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 85.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name  Flags
 1      1000MB  20.0GB  19.0GB               sdb1
 2      21.0GB  40.0GB  19.0GB               sdb2
 3      41.0GB  60.0GB  19.0GB               sdb3
(parted) q
Information: You may need to update /etc/fstab.
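The interactive session above can also be scripted with parted's -s (script) mode. A minimal sketch; RUN_PARTED is a hypothetical safety switch, not a parted option, added here because the commands are destructive:

```shell
# Scripted equivalent of the interactive parted session above.
# Nothing runs unless RUN_PARTED=yes is set and /dev/sdb really exists.
DISK=/dev/sdb
if [ "${RUN_PARTED:-no}" = "yes" ] && [ -b "$DISK" ]; then
    parted -s "$DISK" mklabel gpt
    parted -s "$DISK" mkpart sdb1 ext4 1G 20G
    parted -s "$DISK" mkpart sdb2 ext4 21G 40G
    parted -s "$DISK" mkpart sdb3 ext4 41G 60G
    parted -s "$DISK" print
fi
```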
3. Turn the three partitions into physical volumes (pvcreate), then scan the system's physical volumes
[root@localhost ~]# pvcreate /dev/sdb[123]
Physical volume "/dev/sdb1" successfully created.
Physical volume "/dev/sdb2" successfully created.
Physical volume "/dev/sdb3" successfully created.
[root@localhost ~]# pvscan
PV /dev/sda2 VG centos lvm2 [<39.00 GiB / 4.00 MiB free]
PV /dev/sdb2 lvm2 [<17.70 GiB]
PV /dev/sdb1 lvm2 [17.69 GiB]
PV /dev/sdb3 lvm2 [17.69 GiB]
Total: 4 [92.08 GiB] / in use: 1 [<39.00 GiB] / in no VG: 3 [53.08 GiB]
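pvscan reports about 17.7 GiB per partition even though parted created 19 GB ones. The gap is purely a unit difference, which a quick check confirms:

```shell
# parted sizes are decimal (1 GB = 10^9 bytes); LVM reports binary GiB (2^30 bytes).
# Each 19 GB partition therefore shows up as roughly 17.7 GiB in pvscan.
BYTES=$((19 * 1000 * 1000 * 1000))
GIB=$(awk -v b="$BYTES" 'BEGIN { printf "%.2f", b / (1024 * 1024 * 1024) }')
echo "$GIB GiB"   # 17.70 GiB
```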
4. Create a volume group named myvg from two of the physical volumes, then view the volume group's size
[root@localhost ~]# vgcreate myvg /dev/sdb[12]
Volume group "myvg" successfully created
[root@localhost ~]# vgdisplay myvg
--- Volume group ---
VG Name myvg
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 35.38 GiB
PE Size 4.00 MiB
Total PE 9058
Alloc PE / Size 0 / 0
Free PE / Size 9058 / 35.38 GiB
VG UUID lqeazi-gvko-Du1i-y0NA-91ci-7824-maQyXe
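The VG Size follows directly from the extent figures vgdisplay prints: 9058 physical extents of 4 MiB each. A quick cross-check:

```shell
# VG Size = Total PE x PE Size: 9058 extents of 4 MiB each.
PE_COUNT=9058
PE_MIB=4
VG_GIB=$(awk -v n="$PE_COUNT" -v s="$PE_MIB" 'BEGIN { printf "%.2f", n * s / 1024 }')
echo "$VG_GIB GiB"   # 35.38 GiB
```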
5. Create a logical volume mylv, 30 GB in size
[root@localhost ~]# lvcreate -L 30G -n mylv myvg
Logical volume "mylv" created.
6. Format the logical volume with the xfs file system, mount it on the /data directory, and create a test file
[root@localhost ~]# mkfs.xfs /dev/myvg/mylv
meta-data=/dev/myvg/mylv         isize=512    agcount=4, agsize=1966080 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=7864320, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=3840, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@localhost ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 37G 3.9G 33G 11% /
devtmpfs devtmpfs 1.2G 0 1.2G 0% /dev
tmpfs tmpfs 1.2G 0 1.2G 0% /dev/shm
tmpfs tmpfs 1.2G 11M 1.2G 1% /run
tmpfs tmpfs 1.2G 0 1.2G 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 166M 849M 17% /boot
tmpfs tmpfs 245M 24K 245M 1% /run/user/0
/dev/sr0 iso9660 4.3G 4.3G 0 100% /run/media/root/CentOS 7 x86_64
/dev/mapper/myvg-mylv xfs 30G 33M 30G 1% /data
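The transcript jumps from mkfs.xfs straight to a df that already shows /dev/mapper/myvg-mylv on /data, so the mount commands were omitted. They were presumably along these lines (a sketch, guarded so it only acts if the volume actually exists):

```shell
# Presumed intermediate steps, not shown in the transcript above.
LV=/dev/myvg/mylv
if [ -b "$LV" ]; then
    mkdir -p /data          # create the mount point
    mount "$LV" /data       # mount the freshly formatted volume
    touch /data/test        # the test file the task asks for
fi
```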
7. Extend the logical volume to 35 GB
[root@localhost ~]# lvextend -L +5G /dev/myvg/mylv
Size of logical volume myvg/mylv changed from 30.00 GiB (7680 extents) to 35.00 GiB (8960 extents).
Logical volume myvg/mylv successfully resized.
[root@localhost ~]# xfs_growfs /dev/myvg/mylv
meta-data=/dev/mapper/myvg-mylv  isize=512    agcount=4, agsize=1966080 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=7864320, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=3840, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 7864320 to 9175040
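The numbers printed by lvextend and xfs_growfs agree with each other: 4 MiB extents and 4 KiB filesystem blocks both work out to the same sizes. A quick cross-check:

```shell
# 4 MiB extents: 30 GiB and 35 GiB match the extent counts lvextend reports.
echo $((30 * 1024 / 4))                 # 7680
echo $((35 * 1024 / 4))                 # 8960
# 4 KiB xfs blocks: the new block count is exactly 35 GiB.
echo $((9175040 * 4096 / 1073741824))   # 35
```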
[root@localhost ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root xfs 37G 3.9G 33G 11% /
devtmpfs devtmpfs 1.2G 0 1.2G 0% /dev
tmpfs tmpfs 1.2G 0 1.2G 0% /dev/shm
tmpfs tmpfs 1.2G 11M 1.2G 1% /run
tmpfs tmpfs 1.2G 0 1.2G 0% /sys/fs/cgroup
/dev/sda1 xfs 1014M 166M 849M 17% /boot
tmpfs tmpfs 245M 24K 245M 1% /run/user/0
/dev/sr0 iso9660 4.3G 4.3G 0 100% /run/media/root/CentOS 7 x86_64
/dev/mapper/myvg-mylv xfs 35G 33M 35G 1% /data
8. Edit /etc/fstab to mount the logical volume with disk quota support options
[root@localhost ~]# vim /etc/fstab
/dev/myvg/mylv /data xfs defaults,usrquota,grpquota 0 0
9. Create disk quotas: for user crushlinux in the /data1 directory, a soft limit of 80 MB and a hard limit of 100 MB on disk usage, plus a soft limit of 80 files and a hard limit of 100 files
[root@localhost ~]# vim /etc/fstab
/dev/sdb3 /data1 ext4 defaults,usrquota,grpquota 0 0
[root@localhost ~]# mount -o remount,usrquota,grpquota /data1
[root@localhost ~]# mount |grep /data1
/dev/sdb3 on /data1 type ext4 (rw,relatime,seclabel,quota,usrquota,grpquota,data=ordered)
[root@localhost ~]# quotacheck -avug
quotacheck: Skipping /dev/mapper/myvg-mylv [/data]
quotacheck: Your kernel probably supports journaled quota but you are not using it. Consider switching to journaled quota to avoid running quotacheck after an unclean shutdown.
quotacheck: Scanning /dev/sdb3 [/data1] done
quotacheck: Cannot stat old user quota file /data1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /data1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old user quota file /data1/aquota.user: No such file or directory. Usage will not be subtracted.
quotacheck: Cannot stat old group quota file /data1/aquota.group: No such file or directory. Usage will not be subtracted.
quotacheck: Checked 3 directories and 0 files
quotacheck: Old file not found.
quotacheck: Old file not found.
[root@localhost ~]# ll /data1/a*
-rw-------. 1 root root 6144 Aug  2 09:48 /data1/aquota.group
-rw-------. 1 root root 6144 Aug  2 09:48 /data1/aquota.user
[root@localhost ~]# quotaon -auvg
/dev/sdb3 [/data1]: group quotas turned on
/dev/sdb3 [/data1]: user quotas turned on
[root@localhost ~]# edquota -u crushlinux
Disk quotas for user crushlinux (uid 1001):
Filesystem                blocks    soft    hard  inodes  soft  hard
/dev/mapper/myvg-mylv          0       0       0       0     0     0
/dev/sdb3                      0    8000   10000       0    80   100
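edquota drops you into an editor; the same limits can be set non-interactively with setquota (argument order: block soft, block hard, inode soft, inode hard). Note that quota block limits are counted in 1 KiB blocks, so the 8000/10000 set here amount to roughly 8 MB / 10 MB; the 80 MB / 100 MB stated in the task would be 81920/102400 blocks. A guarded sketch:

```shell
# Non-interactive equivalent of the edquota session above.
# Guarded: only runs if /data1 is actually mounted on this system.
if mountpoint -q /data1; then
    setquota -u crushlinux 8000 10000 80 100 /data1
fi
# Block limits are 1 KiB blocks, so 80 MB / 100 MB would instead be:
echo $((80 * 1024)) $((100 * 1024))   # 81920 102400
```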
10. Test the quotas with dd and touch in the /data1 directory
[crushlinux@localhost home]$ dd if=/dev/zero of=/data1/ceshi bs=1M count=90
sdb3: warning, user block quota exceeded.
sdb3: write failed, user block limit reached.
dd: error writing '/data1/ceshi': Disk quota exceeded
10+0 records in
9+0 records out
10240000 bytes (10 MB) copied, 0.177268 s, 57.8 MB/s
[crushlinux@localhost home]$ touch /data1/{1..85}.txt
sdb3: warning, user file quota exceeded.
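The failures above match the limits exactly: the hard block limit caps the file size dd can reach, and creating 85 files crosses the 80-file soft limit. A quick check:

```shell
# Hard block limit: 10000 blocks of 1 KiB each.
echo $((10000 * 1024))   # 10240000 bytes -- exactly the size dd reports
```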
11. View quota usage from the user's perspective
[root@localhost home]# quota -uvs crushlinux
Disk quotas for user crushlinux (uid 1001):
Filesystem   space   quota   limit   grace   files   quota   limit   grace
/dev/mapper/myvg-mylv
                 0K      0K      0K               0       0       0
/dev/sdb3   10000K*  8000K  10000K   6days     86*     80     100   6days
12. View quota usage from the file system's perspective
[root@localhost home]# repquota -auvs
*** Report for user quotas on device /dev/mapper/myvg-mylv
Block grace time: 7days; Inode grace time: 7days
Space limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 92160K 0K 0K 4 0 0
*** Status for user quotas on device /dev/mapper/myvg-mylv
Accounting: ON; Enforcement: ON
Inode: #67 (2 blocks, 2 extents)
*** Report for user quotas on device /dev/sdb3
Block grace time: 7days; Inode grace time: 7days
Space limits File limits
User used soft hard grace used soft hard grace
----------------------------------------------------------------------
root -- 20K 0K 0K 2 0 0
crushlinux ++ 10000K 8000K 10000K 6days 86 80 100 6days
Statistics:
Total blocks: 7
Data blocks: 1
Entries: 2
Used average: 2.000000