Building a Ceph cluster on a single physical machine

1. Add the epel-release repository

yum install --nogpgcheck -y epel-release

rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7
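
Optionally, verify that the EPEL repository is now visible to yum (a plain yum query, added here only as a sanity check):

yum repolist enabled | grep -i epel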

2. Add the Ceph repository

vi /etc/yum.repos.d/ceph.repo

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
 
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
 
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
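
After saving the repo file, it may help to rebuild the yum metadata cache so the new Ceph repositories are picked up (standard yum commands, not part of the original steps):

yum clean all
yum makecache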

3. Prepare for the Ceph installation

Update the host's packages: yum update -y

Install ceph-deploy: yum install ceph-deploy -y

Install the NTP service and related prerequisites: yum install ntp ntpdate ntp-doc openssh-server yum-plugin-priorities -y
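
Installing the packages alone is not enough on CentOS 7; the time daemon still has to be enabled and started, roughly as below (systemd commands assumed; the ntpq query just confirms peers are being polled):

systemctl enable ntpd
systemctl start ntpd
ntpq -p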

Modify the /etc/hosts file to add an IP-to-hostname mapping, for example: 192.168.1.111 node1
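
After editing, a quick check that the mapping is in place and that the name resolves (hostname and address taken from the example above):

grep node1 /etc/hosts
ping -c 1 node1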

Create a directory to hold the Ceph deployment files and enter it: mkdir my-cluster ; cd my-cluster

Create a new cluster with ceph-deploy: ceph-deploy new node1 (hostname at the end)

Modify the ceph.conf configuration file and add the following

osd pool default size = 3 #Create 3 replicas
public_network = 192.168.1.0/24 #Public network
cluster_network = 192.168.1.0/24 #Cluster network
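
For reference, after these edits the [global] section of ceph.conf might look roughly like the sketch below. The fsid, mon_initial_members and mon_host values are generated by ceph-deploy new and will differ per deployment; the values here only reuse the example hostname, address and fsid from this article:

[global]
fsid = f453a207-a05c-475b-971d-91ff6c1f6f48
mon_initial_members = node1
mon_host = 192.168.1.111
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3
public_network = 192.168.1.0/24
cluster_network = 192.168.1.0/24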

4. Download and install the Ceph packages with ceph-deploy: ceph-deploy install node1
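
Once the install finishes, a quick check on node1 that the binaries landed (plain version query, added as a sanity check):

ceph --version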

5. Create three equal-sized partitions, each larger than 10 GB, for example with fdisk /dev/sdb (a non-interactive parted sketch follows below)
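
If you prefer something non-interactive to fdisk, a rough parted sketch that splits /dev/sdb into three roughly equal partitions could look like this (device name and percentage boundaries are assumptions; adjust to your disk):

parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary xfs 0% 33%
parted -s /dev/sdb mkpart primary xfs 33% 66%
parted -s /dev/sdb mkpart primary xfs 66% 100%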

ceph-deploy mon create-initial
ceph-deploy admin node1
chmod +r /etc/ceph/ceph.client.admin.keyring
ceph-disk prepare --cluster ceph --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb1
The commands for the remaining two partitions are identical; only the /dev/sdb* partition changes (a sketch for /dev/sdb2 and /dev/sdb3 follows after the activate command below). The cluster UUID passed to --cluster-uuid is the string shown after "cluster" on the first line of ceph -s output, and it also appears as fsid in the configuration file.
ceph-disk activate /dev/sdb1
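
For completeness, a sketch of the same prepare/activate sequence for the other two partitions, assuming they are /dev/sdb2 and /dev/sdb3 and reusing the cluster UUID above:

ceph-disk prepare --cluster ceph --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb2
ceph-disk activate /dev/sdb2
ceph-disk prepare --cluster ceph --cluster-uuid f453a207-a05c-475b-971d-91ff6c1f6f48 --fs-type xfs /dev/sdb3
ceph-disk activate /dev/sdb3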

Because all OSDs live on a single host, the default CRUSH rule (which separates replicas by host) would never let the PGs reach active+clean, so the rule has to place replicas across OSDs instead. Export and decompile the CRUSH map, then edit it:

ceph osd getcrushmap -o a.map
crushtool -d a.map -o a
vi a
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type osd #default is host, modified to osd
        step emit
}
crushtool -c a -o b.map
ceph osd setcrushmap -i b.map
ceph osd tree
ceph -s
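
To confirm the rule change and replica count took effect, the CRUSH rule and the default pool can be inspected (standard ceph CLI calls; the pool name rbd is the jewel default and an assumption here):

ceph osd crush rule dump replicated_ruleset
ceph osd pool get rbd size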
