Proxmox VE 6.1 from Zero: Installing and Configuring Ceph

Ceph software installation

pve -> Ceph -> "Install Ceph-nautilus"

Then just follow the wizard step by step.
"Start installation"
Type Y and press Enter.
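The same installation can also be started from a shell on the node. A minimal sketch, assuming the pveceph tool shipped with PVE 6.1 (which pulls in the Nautilus packages); run it on every node that will join the Ceph cluster:

root@pve:~# pveceph install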

A dedicated private network for Ceph can be set in this step.

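From the CLI, the equivalent initialization would look roughly like the sketch below; the 10.10.10.0/24 subnet is only a placeholder for your own Ceph private network, and the monitor-creation step (which the GUI wizard also performs) is included for completeness:

root@pve:~# pveceph init --network 10.10.10.0/24
root@pve:~# pveceph mon create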
The Ceph installation is now finished.

Ceph disk configuration

pve -> Ceph -> OSD -> Create: OSD

Disk: select an unused disk (here, one that was set up as a single-disk hardware RAID 0).
DB Disk: I just use the default configuration; if you have an SSD, you can place the DB on it.
WAL Disk: I just use the default configuration; if you have an SSD, you can place the WAL on it.

Click: Create
After a moment, refresh the page and the new OSD shows up normally.
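OSD creation also works from the shell. A rough sketch, assuming the unused disk shows up as /dev/sdb; the --db_dev option in the second line is only there to illustrate placing the DB on a separate SSD and can simply be left out, as in this article:

root@pve:~# pveceph osd create /dev/sdb
root@pve:~# pveceph osd create /dev/sdc --db_dev /dev/nvme0n1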

Following the same steps, add the remaining disks on all of the pve servers. (The VM test setup is done the same way.)

Below are all the physical machines:

After the OSDs have been created on all three machines, it looks like this:

Create a pool for server use

pve -> Ceph -> Pools -> Create
The pg_num value is quite important; if it is not set appropriately, Ceph will raise a warning.
The official advice is:
fewer than 5 OSDs: set pg_num to 128.
5 to 10 OSDs: set pg_num to 512.
10 to 50 OSDs: set pg_num to 4096. (We have 11 OSDs, so this is the range that applies.)
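For reference, the pool can also be created from the shell. A sketch along these lines, where the pool name matches the one used below, pg_num 128 is only an example value (pick it from the guidance above), and --add_storages registers the pool as a PVE storage just like the GUI checkbox does:

root@pve:~# pveceph pool create Test_pool --size 3 --pg_num 128 --add_storages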

Once it is created, an additional Test_pool storage appears under pve.

Wait a few seconds and Test_pool is ready.

You can now install virtual machines on Test_pool as usual.

Note: Ceph can have multiple pools, somewhat similar to thin pools on traditional storage.
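To see which pools currently exist, the standard Ceph commands work from any node, for example:

root@pve:~# ceph osd lspools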

Ceph Disk Management

Disk offline

If you want to take a disk offline from Ceph, do the following:
First Stop, then Out (similar to stopping the OSD service first, then removing the disk).
After osd.10 is stopped, the status is as follows:

Then set osd.10 to Out; the state is as follows:
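The same Stop/Out sequence can also be run from the shell; a sketch for osd.10 (the OSD id used in this example):

root@pve:~# systemctl stop ceph-osd@10.service
root@pve:~# ceph osd out osd.10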

Disk online

First In, then Start (similar to inserting the disk first, then starting the OSD service).
After setting osd.10 to In, the state is as follows:

Then start osd.10:
After a few seconds, osd.10 returns to its normal state.
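The reverse In/Start sequence from the shell, again for osd.10:

root@pve:~# ceph osd in osd.10
root@pve:~# systemctl start ceph-osd@10.service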

Viewing the Ceph state from the command line

root@pve:~# ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 2.2 TiB 2.1 TiB 88 GiB 99 GiB 4.49
TOTAL 2.2 TiB 2.1 TiB 88 GiB 99 GiB 4.49

POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL
dpool 4 27 GiB 28.39k 87 GiB 4.29 646 GiB
test01 6 0 B 0 0 B 0 646 GiB
cephfs_data 7 0 B 0 0 B 0 646 GiB
cephfs_metadata 8 17 KiB 22 1.5 MiB 0 646 GiB
Test_Pool 9 0 B 0 0 B 0 646 GiB
Test_pool1 10 0 B 0 0 B 0 646 GiB
root@pve:~# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 0.13280 1.00000 136 GiB 9.5 GiB 8.5 GiB 31 KiB 1024 MiB 127 GiB 6.98 1.55 189 up
1 hdd 0.13280 1.00000 136 GiB 8.5 GiB 7.5 GiB 39 KiB 1024 MiB 127 GiB 6.26 1.40 167 up
2 hdd 0.13280 1.00000 136 GiB 6.7 GiB 5.7 GiB 51 KiB 1024 MiB 129 GiB 4.90 1.09 158 up
3 hdd 0.13280 1.00000 136 GiB 8.5 GiB 7.5 GiB 31 KiB 1024 MiB 127 GiB 6.25 1.39 158 up
4 hdd 0.13280 1.00000 136 GiB 8.3 GiB 7.3 GiB 139 KiB 1024 MiB 128 GiB 6.08 1.35 156 up
5 hdd 0.13280 1.00000 136 GiB 7.1 GiB 6.1 GiB 23 KiB 1024 MiB 129 GiB 5.20 1.16 165 up
6 hdd 0.13280 1.00000 136 GiB 9.5 GiB 8.5 GiB 55 KiB 1024 MiB 126 GiB 7.01 1.56 174 up
7 hdd 0.13280 1.00000 136 GiB 8.4 GiB 7.4 GiB 135 KiB 1024 MiB 128 GiB 6.17 1.38 177 up
8 hdd 0.27150 1.00000 278 GiB 6.1 GiB 5.1 GiB 47 KiB 1024 MiB 272 GiB 2.20 0.49 153 up
9 hdd 0.27150 1.00000 278 GiB 9.0 GiB 8.0 GiB 167 KiB 1024 MiB 269 GiB 3.25 0.72 170 up
10 hdd 0.54489 1.00000 558 GiB 17 GiB 16 GiB 167 KiB 1024 MiB 541 GiB 3.08 0.69 349 up
TOTAL 2.2 TiB 99 GiB 88 GiB 891 KiB 11 GiB 2.1 TiB 4.49
MIN/MAX VAR: 0.49/1.56 STDDEV: 1.75
root@pve:~# ceph status
cluster:
id: 973df567-8154-4bf3-9cbd-673210de1745
health: HEALTH_OK

services:
mon: 3 daemons, quorum pve,pve02,pve03 (age 3h)
mgr: pve(active, since 2h)
mds: cephfs:1 {0=pve=up:active} 2 up:standby
osd: 11 osds: 11 up (since 4m), 11 in (since 4m)

data:
pools: 6 pools, 672 pgs
objects: 28.41k objects, 109 GiB
usage: 99 GiB used, 2.1 TiB / 2.2 TiB avail
pgs: 672 active+clean

io:
client: 130 KiB/s rd, 95 KiB/s wr, 1 op/s rd, 8 op/s wr

root@pve:~# ceph osd status

+----+-------+-------+-------+--------+---------+--------+---------+-----------+
| id | host | used | avail | wr ops | wr data | rd ops | rd data | state |
+----+-------+-------+-------+--------+---------+--------+---------+-----------+
| 0 | pve | 9715M | 126G | 1 | 4096 | 1 | 102k | exists,up |
| 1 | pve | 8717M | 127G | 0 | 0 | 0 | 0 | exists,up |
| 2 | pve | 6818M | 129G | 1 | 15.2k | 0 | 3 | exists,up |
| 3 | pve | 8707M | 127G | 1 | 48.0k | 0 | 0 | exists,up |
| 4 | pve02 | 8460M | 127G | 0 | 1638 | 0 | 56.4k | exists,up |
| 5 | pve02 | 7243M | 128G | 2 | 22.4k | 0 | 0 | exists,up |
| 6 | pve02 | 9769M | 126G | 0 | 1638 | 0 | 51.2k | exists,up |
| 7 | pve02 | 8596M | 127G | 1 | 5734 | 0 | 0 | exists,up |
| 8 | pve03 | 6267M | 271G | 0 | 1638 | 0 | 102k | exists,up |
| 9 | pve03 | 9250M | 268G | 0 | 6553 | 0 | 0 | exists,up |
| 10 | pve03 | 17.2G | 540G | 1 | 17.6k | 2 | 0 | exists,up |
+----+-------+-------+-------+--------+---------+--------+---------+-----------+
root@pve:~#

root@pve:~# ceph osd stat
11 osds: 11 up (since 10m), 11 in (since 11m); epoch: e3390
root@pve:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 2.15027 root default
-3 0.53119 host pve
0 hdd 0.13280 osd.0 up 1.00000 1.00000
1 hdd 0.13280 osd.1 up 1.00000 1.00000
2 hdd 0.13280 osd.2 up 1.00000 1.00000
3 hdd 0.13280 osd.3 up 1.00000 1.00000
-5 0.53119 host pve02
4 hdd 0.13280 osd.4 up 1.00000 1.00000
5 hdd 0.13280 osd.5 up 1.00000 1.00000
6 hdd 0.13280 osd.6 up 1.00000 1.00000
7 hdd 0.13280 osd.7 up 1.00000 1.00000
-7 1.08789 host pve03
8 hdd 0.27150 osd.8 up 1.00000 1.00000
9 hdd 0.27150 osd.9 up 1.00000 1.00000
10 hdd 0.54489 osd.10 up 1.00000 1.00000
-9 0 host pve04
root@pve:~#


Source: blog.51cto.com/5404628/2479393