Ceph distributed storage series (5): How to limit the size of a pool

Unlike a volume, a pool cannot be given a fixed size when it is created, but Ceph provides a quota mechanism that serves the same purpose.

Two kinds of quota:

  • Limit the number of objects in the pool (max_objects)
  • Limit the amount of data stored in the pool (max_bytes)

To put it simply, it’s just a few commands.

View a pool's quota settings
$ ceph osd pool get-quota {pool_name}

Limit the number of objects stored in the pool
$ ceph osd pool set-quota {pool_name} max_objects {number}

Limit the maximum amount of data stored in the pool
$ ceph osd pool set-quota {pool_name} max_bytes {number}
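
One note before testing: max_bytes accepts either a raw byte count or a unit suffix such as M, and setting either quota back to 0 removes the limit (both are demonstrated in the tests below).

$ ceph osd pool set-quota {pool_name} max_bytes 100M
$ ceph osd pool set-quota {pool_name} max_objects 0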

Let’s do a simple test below

1. Limit the number of objects in the pool

Create a pool named test with a pg_num of 8

[root@ceph-node1 ~]# ceph osd pool create test 8
pool 'test' created
[root@ceph-node1 ~]# ceph osd pool application enable test rbd
enabled application 'rbd' on pool 'test'
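
Since the USED column later depends on the replica count, it is worth confirming that the pool uses the default of three replicas (a quick check; the output below assumes the defaults were not changed):

[root@ceph-node1 ~]# ceph osd pool get test size
size: 3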

View cluster status and pool usage

[root@ceph-node1 mnt]# ceph -s
  cluster:
    id:     130b5ac0-938a-4fd2-ba6f-3d37e1a4e908
    health: HEALTH_OK
....
[root@ceph-node1 ~]# ceph df |grep POOLS -A 2
POOLS:
    POOL     ID     PGS     STORED     OBJECTS     USED     %USED     MAX AVAIL
    test      9       8        0 B           0      0 B         0       8.7 GiB

View the current pool's quota

[root@ceph-node1 ~]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: N/A
  max bytes  : N/A

Configure max_objects to limit the number of objects

[root@ceph-node1 ~]# ceph osd pool set-quota test max_objects 10
set-quota max_objects = 10 for pool test
[root@ceph-node1 ~]#
[root@ceph-node1 ~]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: 10 objects
  max bytes  : N/A
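
The quota is also recorded in the pool details, so it can be checked with ceph osd pool ls detail as well; the line for 'test' should now include max_objects 10 (the exact set of fields printed varies between Ceph releases).

[root@ceph-node1 ~]# ceph osd pool ls detail | grep "'test'"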

Create a 10M test file and manually upload it into the test pool as an object

[root@ceph-node1 mnt]# dd if=/dev/zero of=/mnt/file bs=10M count=1
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.0452471 s, 232 MB/s

Import the file into the pool (naming the object object-1)
[root@ceph-node1 mnt]# rados put object-1 file -p test

View the objects in the pool
[root@ceph-node1 mnt]# rados ls -p test
object-1
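
To confirm the object really is the 10M file, rados stat prints its size and modification time (output omitted here since the timestamp will differ on your system):

[root@ceph-node1 mnt]# rados stat object-1 -p test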

Create objects until there are 10 in total

Create the rest in a loop
[root@ceph-node1 mnt]# for i in {2..10}; do rados put object-$i file -p test; done

View all objects
[root@ceph-node1 mnt]# rados ls -p test
object-4
object-10
object-3
object-5
object-7
object-1
object-2
object-8
object-6
object-9
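
A quick count confirms the pool is now exactly at the 10-object quota:

[root@ceph-node1 mnt]# rados ls -p test | wc -l
10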

All ten objects were created successfully. Wait a moment, then check the ceph status.

View ceph status and storage

[root@ceph-node1 mnt]# ceph -s
  cluster:
    id:     130b5ac0-938a-4fd2-ba6f-3d37e1a4e908
    health: HEALTH_WARN
            1 pool(s) full
....
[root@ceph-node1 mnt]# ceph df |grep POOLS -A 2
POOLS:
    POOL     ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
    test      9       8     100 MiB          10     300 MiB      1.12       8.6 GiB

The status now shows the warning 1 pool(s) full, meaning a pool has reached its quota and is treated as full.

Tip:
STORED is the actual size of the data stored in the pool, and USED is the total raw space consumed (the pool was created with the default of three replicas, so 100 MiB × 3 = 300 MiB).
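
If you want to see exactly which pool triggered the warning, ceph health detail names it (the precise wording of the detail message differs between Ceph versions):

[root@ceph-node1 mnt]# ceph health detail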

Now let's try writing a new object, and then try deleting an existing one

[root@ceph-node1 mnt]# rados put object-11 file -p test
2021-01-20 17:05:28.388 7ff1b55399c0  0 client.170820.objecter  FULL, paused modify 0x55f2d92ae380 tid 0

Even deleting an object is blocked
[root@ceph-node1 mnt]# rados rm object-10 -p test
2021-01-20 17:05:40.149 7f43dac589c0  0 client.170835.objecter  FULL, paused modify 0x5624ef387bb0 tid 0

Restoring it is simple: just set max_objects back to 0.

0 is the default value, which means no restrictions

[root@ceph-node1 mnt]# ceph osd pool set-quota test max_objects 0
set-quota max_objects = 0 for pool test
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: N/A
  max bytes  : N/A
[root@ceph-node1 mnt]# ceph -s
  cluster:
    id:     130b5ac0-938a-4fd2-ba6f-3d37e1a4e908
    health: HEALTH_OK

2. Limit the amount of data stored in the pool

This test continues from the environment above

Delete the objects used for testing above

[root@ceph-node1 mnt]# for i in {1..10}; do rados rm object-$i -p test; done;
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# rados ls -p test
[root@ceph-node1 mnt]#

Adjust max_bytes to limit the size of stored data

[root@ceph-node1 mnt]# ceph osd pool set-quota test max_bytes 100M
set-quota max_bytes = 104857600 for pool test
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: N/A
  max bytes  : 100 MiB
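
The same quota can also be set with a raw byte count; 104857600 bytes is exactly the 100M reported above:

[root@ceph-node1 mnt]# ceph osd pool set-quota test max_bytes 104857600
set-quota max_bytes = 104857600 for pool test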

Create a 100M test file and import it into the pool

[root@ceph-node1 mnt]# dd if=/dev/zero of=/mnt/file_100 bs=100M count=1
1+0 records in
1+0 records out
104857600 bytes (105 MB) copied, 1.00625 s, 104 MB/s
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# ll -h file_100
-rw-r--r--. 1 root root 100M Jan 20 17:50 file_100
[root@ceph-node1 mnt]#
[root@ceph-node1 mnt]# rados put object-1 file_100 -p test
[root@ceph-node1 mnt]#

After the command is executed, wait for a while and check the cluster status.

[root@ceph-node1 mnt]# ceph -s
  cluster:
    id:     130b5ac0-938a-4fd2-ba6f-3d37e1a4e908
    health: HEALTH_WARN
            1 pool(s) full
[root@ceph-node1 mnt]# ceph df
POOLS:
    POOL     ID     PGS     STORED      OBJECTS     USED        %USED     MAX AVAIL
    test      9       8     100 MiB           1     300 MiB      1.12       8.7 GiB

If more data is written at this point, the operation is blocked just as before.

[root@ceph-node1 mnt]# rados put object-2 file_100 -p test
2021-01-20 17:54:12.740 7fa9704ce9c0  0 client.173479.objecter  FULL, paused modify 0x55e57f97d380 tid 0

To restore, set max_bytes back to 0, just like with max_objects

[root@ceph-node1 mnt]# ceph osd pool set-quota test max_bytes 0
set-quota max_bytes = 0 for pool test
[root@ceph-node1 mnt]# ceph osd pool get-quota test
quotas for pool 'test':
  max objects: N/A
  max bytes  : N/A

For finer control, you can also set both quotas on the same pool, as in the sketch below.
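
For example, to cap a pool at 1000 objects and 1 GiB at the same time (a sketch with arbitrary values; the G suffix is parsed the same way as the M suffix used earlier):

$ ceph osd pool set-quota test max_objects 1000
$ ceph osd pool set-quota test max_bytes 1G
$ ceph osd pool get-quota test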

End……


Origin blog.csdn.net/weixin_43860781/article/details/112907361