How to deploy a Ceph cluster using RPM packages and ceph-deploy

Environment preparation

1. Create a regular user on every node that will run a Ceph daemon. ceph-deploy installs packages on the nodes as this user, so the user needs passwordless sudo. This step can be skipped if you deploy as root.
To grant the user full sudo rights, add the following to /etc/sudoers.d/ceph:

echo "ceph ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph

2. Configure the management host so that it can reach every node over SSH without a password, for example:
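
A minimal sketch of the passwordless SSH setup, run on the management host as the deploy user (the hostnames below are the ones used later in this guide; adjust them to your environment):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # generate a key pair without a passphrase
for node in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    ssh-copy-id ceph@${node}                  # install the public key for the ceph user on each node
done
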
3. Configure the Ceph yum repository (ceph.repo). Here the 163 (NetEase) mirror is used to speed up installation:

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1
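
The repo file must exist on every node before the packages are installed. A quick way to distribute it (a sketch, assuming root SSH access and the node names used later; adjust to your environment):

for node in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    scp /etc/yum.repos.d/ceph.repo root@${node}:/etc/yum.repos.d/ceph.repo
done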

Deploy the Ceph software and initialize the cluster

4. Install ceph-deploy; it only needs to be installed on the management node:

sudo yum update && sudo yum install ceph-deploy
Install the Ceph packages and their dependencies on the cluster nodes:
sudo ceph-deploy install qd01-stop-cloud001 qd01-stop-cloud002 qd01-stop-cloud003

Note: Alternatively, you can install the Ceph packages directly with yum on every node. If you do that, the ceph-deploy install step above is not needed:

sudo yum -y install ceph ceph-common rbd-fuse ceph-release python-ceph-compat  python-rbd librbd1-devel ceph-radosgw
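
Either way, it is worth confirming that every node ended up with the same release before continuing, for example:

for node in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    ssh ${node} "ceph --version"    # all nodes should report the same mimic (13.2.x) build
done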

5. Create a cluster

sudo ceph-deploy new qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003

Modify the configuration file ceph.conf

[global]
fsid = ec7ee19a-f7c6-4ed0-9307-f48af473352c
mon_initial_members = qd01-stop-k8s-node001, qd01-stop-k8s-node002, qd01-stop-k8s-node003
mon_host = 10.26.22.105,10.26.22.80,10.26.22.85
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
cluster_network = 10.0.0.0/0
public_network = 10.0.0.0/0

filestore_xattr_use_omap = true
osd_pool_default_size = 3
osd_pool_default_min_size = 1
osd_pool_default_pg_num = 520
osd_pool_default_pgp_num = 520
osd_recovery_op_priority= 10
osd_client_op_priority = 100
osd_op_threads = 20
osd_recovery_max_active = 2
osd_max_backfills = 2
osd_scrub_load_threshold = 1

osd_deep_scrub_interval = 604800000
osd_deep_scrub_stride = 4096

[client]
rbd_cache = true
rbd_cache_size = 134217728
rbd_cache_max_dirty = 125829120

[mon]
mon_allow_pool_delete = true

Note: if you later add a mon on a host that was not listed in the ceph-deploy new command, public_network must be set in ceph.conf.
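
If you change ceph.conf after the cluster already exists, push the updated file from the working directory to the nodes (otherwise the daemons keep using their local copy) and restart the affected daemons:

ceph-deploy --overwrite-conf config push qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003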

6. Create and initialize mon

ceph-deploy mon create-initial
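
Once the monitors have formed a quorum, ceph-deploy gathers the keyrings into the working directory. Distributing the admin keyring and ceph.conf to the nodes lets you run ceph CLI commands there without passing the keyring explicitly:

ceph-deploy admin qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003
sudo chmod +r /etc/ceph/ceph.client.admin.keyring    # run on each node so the deploy user can read the keyring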

7. Create the OSDs. Only the commands for one host are shown here; for the other hosts, replace the hostname and repeat (or use a loop, as sketched after the create commands below). First zap (initialize) the disks:

ceph-deploy  disk zap qd01-stop-k8s-node001 /dev/sdb
ceph-deploy  disk zap qd01-stop-k8s-node001 /dev/sdc
ceph-deploy  disk zap qd01-stop-k8s-node001 /dev/sdd

Create and activate

ceph-deploy osd create   --data  /dev/sdb  qd01-stop-k8s-node001
ceph-deploy osd create   --data  /dev/sdc  qd01-stop-k8s-node001
ceph-deploy osd create   --data  /dev/sdd  qd01-stop-k8s-node001
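
For all hosts and disks the same two commands can be run in a loop instead of being repeated by hand (a sketch; adjust the host and device lists to your hardware):

for host in qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003; do
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        ceph-deploy disk zap ${host} ${dev}             # wipe the disk
        ceph-deploy osd create --data ${dev} ${host}    # create a BlueStore OSD on it
    done
done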

8. Create the manager (mgr) daemons

ceph-deploy mgr create  qd01-stop-k8s-node001 qd01-stop-k8s-node002 qd01-stop-k8s-node003

Verify the status of the ceph cluster

9. View the cluster status

[root@qd01-stop-k8s-node001 ~]# ceph -s
  cluster:
    id:     ec7ee19a-f7c6-4ed0-9307-f48af473352c
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum qd01-stop-k8s-node002,qd01-stop-k8s-node003,qd01-stop-k8s-node001
    mgr: qd01-stop-k8s-node001(active), standbys: qd01-stop-k8s-node002, qd01-stop-k8s-node003
    osd: 24 osds: 24 up, 24 in

  data:
    pools:   1 pools, 256 pgs
    objects: 5  objects, 325 B
    usage:   24 GiB used, 44 TiB / 44 TiB avail
    pgs:     256 active+clean

10. Enable the mgr dashboard

ceph mgr module enable dashboard
ceph dashboard create-self-signed-cert
ceph dashboard set-login-credentials admin admin
ceph mgr services

ceph config-key put mgr/dashboard/server_addr 0.0.0.0   # bind address
ceph config-key put mgr/dashboard/server_port 7000      # listen port
systemctl restart ceph-mgr@qd01-stop-k8s-node001.service
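
After the restart, ceph mgr services prints the URL served by the active mgr; a quick reachability check (the URL below assumes the active mgr host and the port 7000 configured above):

ceph mgr services
curl -k https://qd01-stop-k8s-node001:7000/    # -k because the dashboard certificate is self-signed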

11. Create a pool

Create the pool:
ceph osd pool create  k8s 256 256
Allow rbd to use the pool:
ceph osd pool application enable k8s rbd --yes-i-really-mean-it
List the pools:
ceph osd pool ls
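
A few read-only checks to confirm the pool was created with the intended parameters and that the capacity is visible:

ceph osd pool get k8s pg_num    # should report 256
ceph osd pool get k8s size      # replica count, 3 from osd_pool_default_size
ceph df                         # per-pool usage and available capacity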

12. Test

[root@qd01-stop-k8s-node001 ~]# rbd   create docker_test --size 4096 -p k8s
[root@qd01-stop-k8s-node001 ~]# rbd info docker_test -p k8s
rbd image 'docker_test':
        size 4 GiB in 1024 objects
        order 22 (4 MiB objects)
        id: 11ed6b8b4567
        block_name_prefix: rbd_data.11ed6b8b4567
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Wed Nov 11 17:19:38 2020
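
To actually use the image from a host, it can be mapped with the kernel RBD client. Older kernels do not support all of the features listed above, so a common extra step is to disable them first (a sketch; the device name and mount point are examples):

rbd feature disable k8s/docker_test object-map fast-diff deep-flatten    # keep only features the kernel client supports
sudo rbd map k8s/docker_test        # typically returns /dev/rbd0
sudo mkfs.ext4 /dev/rbd0
sudo mount /dev/rbd0 /mnt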

Uninstall

13. Remove a node: purge its data and uninstall the Ceph packages

ceph-deploy purgedata  qd01-stop-k8s-node008
ceph-deploy purge qd01-stop-k8s-node008
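
If the node still hosts OSDs, take them out of the cluster first so the data can rebalance before purging. A sketch for a single OSD (osd.23 is a placeholder; repeat for each OSD id on the node):

ceph osd out 23                                               # stop placing data on it and let recovery drain it
ssh qd01-stop-k8s-node008 "sudo systemctl stop ceph-osd@23"
ceph osd crush remove osd.23                                  # remove it from the CRUSH map
ceph auth del osd.23                                          # delete its cephx key
ceph osd rm 23                                                # remove it from the OSD map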

Common Ceph cluster operation commands

**Health checks**
ceph -s --conf /etc/ceph/ceph.conf --name client.admin --keyring /etc/ceph/ceph.client.admin.keyring
ceph health
ceph quorum_status --format json-pretty
ceph osd dump
ceph osd stat
ceph mon dump
ceph mon stat
ceph mds dump
ceph mds stat
ceph pg dump
ceph pg stat

**osd/pool**
ceph osd tree
ceph osd pool ls detail
ceph osd pool set rbd crush_ruleset 1
ceph osd pool create sata-pool 256 rule-sata
ceph osd pool create ssd-pool 256 rule-ssd
ceph osd pool set data min_size 2

**Configuration**
ceph daemon osd.0 config show (run on the OSD node)
ceph daemon osd.0 config set mon_allow_pool_delete true (run on the OSD node; lost after the daemon restarts)
ceph tell osd.0 config set mon_allow_pool_delete false (run on any node; lost after the daemon restarts)
ceph config set osd.0 mon_allow_pool_delete true (13.x and later only; run on any node; survives restarts; the option must not also be set in the config file, otherwise the mon ignores this setting)

**Logs**
ceph log last 100
**map**
ceph osd map <pool> <object>
ceph pg dump
ceph pg map x.yz
ceph pg x.yz query

**Authentication**
ceph auth get client.admin --name mon. --keyring /var/lib/ceph/mon/ceph-$hostname/keyring
ceph auth get osd.0
ceph auth get mon.
ceph auth ls

**CRUSH**
ceph osd crush add-bucket root-sata root
ceph osd crush add-bucket ceph-1-sata host
ceph osd crush add-bucket ceph-2-sata host
ceph osd crush move ceph-1-sata root=root-sata
ceph osd crush move ceph-2-sata root=root-sata
ceph osd crush add osd.0 2 host=ceph-1-sata
ceph osd crush add osd.1 2 host=ceph-1-sata
ceph osd crush add osd.2 2 host=ceph-2-sata
ceph osd crush add osd.3 2 host=ceph-2-sata

ceph osd crush add-bucket root-ssd root
ceph osd crush add-bucket ceph-1-ssd host
ceph osd crush add-bucket ceph-2-ssd host
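
The rule-sata and rule-ssd names used when creating the pools above are CRUSH rules, which have to exist before the pools are created. Since Luminous they can be created directly against the roots defined above (a sketch):

ceph osd crush rule create-replicated rule-sata root-sata host
ceph osd crush rule create-replicated rule-ssd root-ssd host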

ceph osd getcrushmap -o /tmp/crush
crushtool -d /tmp/crush -o /tmp/crush.txt
edit /tmp/crush.txt by hand (see the example rule below)
crushtool -c /tmp/crush.txt -o /tmp/crush.bin
ceph osd setcrushmap -i /tmp/crush.bin
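
The "edit /tmp/crush.txt" step means changing the decompiled map by hand. A replicated rule in the text format looks roughly like this (an illustrative snippet; ids and bucket names depend on your map):

rule rule-sata {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take root-sata
        step chooseleaf firstn 0 type host
        step emit
}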
