Manual Deployment of Ceph Storage

This is an original article, first published on the 51CTO blog; please credit the source when reposting.

This document describes the manual deployment process for Ceph storage. The deployment described here runs Ceph with authentication disabled (cephx set to none) and is intended for internal environments.

I. Deployment environment

1. Hardware environment:

ceph01: 10.10.10.11
ceph02: 10.10.10.12
ceph03: 10.10.10.13

2. Software Environment

Operating system: CentOS 7.6
Ceph storage: Jewel (10.2.10)

II. Basic environment preparation

1. Install the CentOS 7.6 operating system

2. Configure the hostname, network, and hosts file, disable SELinux, and turn off the firewall (a sketch follows below).
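
A minimal sketch of this preparation, run on each node (shown here for ceph01; change the hostname on each node accordingly, and use the addresses from section I):

# hostnamectl set-hostname ceph01
# echo '10.10.10.11 ceph01' >> /etc/hosts
# echo '10.10.10.12 ceph02' >> /etc/hosts
# echo '10.10.10.13 ceph03' >> /etc/hosts
# setenforce 0
# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# systemctl stop firewalld
# systemctl disable firewalld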

III. Install Ceph

1. Install the Ceph yum repository

# yum install centos-release-ceph-jewel

2. Install Ceph

# yum install ceph
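
The repository and the ceph package need to be installed on every node that will run MON or OSD daemons. A convenience sketch, assuming passwordless root SSH to the nodes (otherwise simply run the two yum commands on each node):

# for h in ceph01 ceph02 ceph03; do ssh $h "yum install -y centos-release-ceph-jewel && yum install -y ceph"; done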

IV. Initialize the first Ceph MON node

1. Generate the Ceph cluster fsid

# uuidgen

2. Edit the Ceph configuration file; its location and contents are as follows:

# vim /etc/ceph/ceph.conf

[global]

auth cluster required = none
auth service required = none
auth client required = none

fsid = a2efc66d-fd3f-464f-8098-c43d63b6f989

log file = /var/log/ceph/$name.log
pid file = /var/run/ceph/$name.pid

osd pool default size = 3
osd pool default min size = 2
osd pool default pg num = 128
osd pool default pgp num = 128

# public network = 10.10.20.0/24
# cluster network = 10.10.30.0/24

mon osd full ratio = .85
mon osd nearfull ratio = .70

[mon]
mon data = /ceph/mondata/mon$host

[mon.ceph01]
host = ceph01
mon addr = 10.10.10.11:5000

[osd]
osd data = /data/osd$id
osd journal = /ceph/journal/osd$id/journal
osd journal size = 10240
osd mkfs type = xfs
osd mkfs options xfs = -f
osd mount options xfs = rw,noatime

Note:
(1) The fsid must be replaced with the id generated in step 1 above.
(2) The default port of the Ceph MON service is 6789; it has been changed to 5000 here.
(3) Note that the MON and OSD data storage directories have also been changed from their defaults.

3. Generate monitor map:

# monmaptool --create --add ceph01 10.10.10.11:5000 --fsid a2efc66d-fd3f-464f-8098-c43d63b6f989 /tmp/monmap

4. Create mon data storage directory:

# mkdir -p /ceph/mondata

5. Generate the files required by the first monitor daemon:

# ceph-mon --mkfs -i ceph01 --monmap /tmp/monmap

6. Change the owner of the /ceph directory to the ceph user:

# chown -R ceph.ceph /ceph

7. Start the Ceph MON daemon and enable it at boot:

# systemctl start ceph-mon@ceph01
# systemctl enable ceph-mon@ceph01
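
At this point the first monitor should be running; an optional, read-only check:

# ceph mon stat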

V. Add a Ceph MON node

1. Modify the Ceph configuration file, add the new MON node's information as shown below, and synchronize the configuration file to the other nodes in the Ceph cluster (a sync sketch follows the snippet).

# vim /etc/ceph/ceph.conf

[mon.ceph02]
host = ceph02
mon addr = 10.10.10.12:5000
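
A minimal sketch of synchronizing the configuration file, assuming root SSH access between the nodes:

# scp /etc/ceph/ceph.conf ceph02:/etc/ceph/ceph.conf
# scp /etc/ceph/ceph.conf ceph03:/etc/ceph/ceph.conf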

2. Get ceph cluster monmap:

# ceph mon getmap -o /tmp/monmap

3. Create mon data storage directory:

# mkdir -p /ceph/mondata

4. Generate the files required by the second monitor daemon:

# ceph-mon --mkfs -i ceph02 --monmap /tmp/monmap

5. Add the new MON node to the cluster:

# ceph mon add ceph02 10.10.10.12:5000

6. Change the owner of the /ceph directory to the ceph user:

# chown -R ceph.ceph /ceph

7. Start the Ceph MON daemon and enable it at boot:

# systemctl start ceph-mon@ceph02
# systemctl enable ceph-mon@ceph02

VI. Add Ceph OSD nodes

OSD nodes store the Ceph cluster's data. It is recommended to run one OSD daemon per drive. Each OSD daemon manages both journal data and object data; in this document the two are placed on different partitions of the same disk (a partitioning sketch follows below).
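
For reference, a minimal partitioning sketch, assuming /dev/sdb is an empty disk and a 10 GiB journal partition to match the osd journal size = 10240 setting above; adjust sizes to your hardware:

# parted -s /dev/sdb mklabel gpt
# parted -s /dev/sdb mkpart primary 1MiB 10GiB
# parted -s /dev/sdb mkpart primary 10GiB 100%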

1. Generate an OSD ID:

# ceph osd create

Note: Normally the first OSD gets ID 0; the following example adds osd.0.
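
The command prints the ID it allocated. As an alternative to reading it off the terminal, the ID can be captured in a variable if you plan to script the remaining steps (purely optional):

# OSD_ID=$(ceph osd create)
# echo $OSD_ID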

2. Edit the Ceph configuration file, add the new OSD's information as shown below, and synchronize the configuration file to the other nodes in the Ceph cluster:

[osd.0]
host = ceph01
devs = /dev/sdb1

Note: sdb1 here is the journal data storage partition.

3. Create the OSD data storage directories:

# mkdir -p /var/lib/ceph/osd/ceph-0
# mkdir -p /ceph/journal/osd0
# mkdir -p /data/osd0

4. Format the disk partitions used for OSD storage:

# mkfs.xfs -f /dev/sdb1
# mkfs.xfs -f /dev/sdb2

Note: sdb1 is the journal data storage partition and sdb2 is the object data storage partition.

5. Mount the OSD storage partitions and add them to /etc/fstab so they are mounted automatically at boot:

# mount /dev/sdb1 /ceph/journal/osd0
# mount /dev/sdb2 /data/osd0
# echo '/dev/sdb1 /ceph/journal/osd0 xfs defaults 0 0' >> /etc/fstab
# echo '/dev/sdb2 /data/osd0 xfs defaults 0 0' >> /etc/fstab
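
Optionally, confirm that both partitions are mounted where expected:

# df -h /ceph/journal/osd0 /data/osd0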

6. Generate the OSD service files:

# ceph-osd -i 0 --mkfs --mkkey
# ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /data/osd0/keyring

7. Add the OSD to the Ceph cluster:

# ceph osd crush add osd.0 1.0 root=default

Note: 1.0 is the CRUSH weight of this OSD for data placement. The more disk space an OSD has, the higher its weight should be; adjust this value according to the actual disk capacity.
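
A common convention (an assumption here, not something this procedure requires) is to set the weight to the disk's capacity in TiB, for example roughly 1.82 for a 2 TB disk:

# ceph osd crush add osd.0 1.82 root=default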

8. Change the owner of the ceph directories to the ceph user:

# chown -R ceph.ceph /ceph
# chown -R ceph.ceph /data

9. Start the Ceph OSD daemon and enable it at boot:

# systemctl enable ceph-osd@0
# systemctl start ceph-osd@0

10. Repeat the above steps to add more OSDs on the other storage nodes (a condensed sketch follows below).
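
A condensed sketch for the next OSD, assuming ceph02's /dev/sdb is partitioned the same way, an [osd.1] section (host = ceph02, devs = /dev/sdb1) has been added to ceph.conf and synchronized, and ceph osd create returned 1:

# mkdir -p /var/lib/ceph/osd/ceph-1 /ceph/journal/osd1 /data/osd1
# mkfs.xfs -f /dev/sdb1
# mkfs.xfs -f /dev/sdb2
# mount /dev/sdb1 /ceph/journal/osd1
# mount /dev/sdb2 /data/osd1
# echo '/dev/sdb1 /ceph/journal/osd1 xfs defaults 0 0' >> /etc/fstab
# echo '/dev/sdb2 /data/osd1 xfs defaults 0 0' >> /etc/fstab
# ceph-osd -i 1 --mkfs --mkkey
# ceph auth add osd.1 osd 'allow *' mon 'allow rwx' -i /data/osd1/keyring
# ceph osd crush add osd.1 1.0 root=default
# chown -R ceph.ceph /ceph /data
# systemctl enable ceph-osd@1
# systemctl start ceph-osd@1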

11. Check the Ceph storage status:

# ceph -s
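
A few additional read-only commands that are useful for verifying the result:

# ceph osd tree
# ceph mon stat
# ceph df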

I am responsible for cloud-computing product development at Chengdu Zhengtian Technology Co., Ltd. Zhengtian Technology specializes in providing IaaS- and PaaS-layer cloud computing products and services for businesses and individuals. Official website: www.tjiyun.com. If you have any questions, feel free to contact us; WeChat ID: ztkj_tjiyun.
