Building Ceph distributed storage

Ceph has developed rapidly in recent years, and many companies have put it into production.

Ceph is decentralized storage: there is no distinction between management nodes and compute nodes.
Files in Ceph are split into chunks; each chunk is stored as an object, and objects are distributed across different servers. When a node fails, data is migrated automatically and replicas are re-created. Metadata servers and storage nodes can be added dynamically, so capacity can be expanded on the fly.
A Ceph cluster is made up of monitor nodes (mon, at least one), object storage daemons (osd, at least two), and metadata servers (mds).
The osd is responsible for storing data and for handling replication, recovery, backfill and rebalancing. It heartbeats the other osd daemons and reports monitoring information to the mon. When a pool is configured with two replicas, at least two osd daemons are needed to reach the active+clean state; by default there are three replicas.
The mon maintains the various maps of cluster state, including the monitor map and the osd map.
The mds stores metadata and is only used with the Ceph file system (CephFS).
Storage flow: file data is split into objects (Object); each object is placed into a pool; a pool consists of a number of placement groups (pg); each pg maps onto several osds; each osd is backed by a hard disk.
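As a quick way to see this mapping in practice (a sketch, assuming the cluster built below is already running and the default rbd pool exists; "test-object" is just an example name), Ceph can report which pg and osds a given object name maps to:

[root@node1 ceph]# ceph osd map rbd test-object   ## prints the pool, the pg id, and the osd set the object would map to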
Environment
node1 172.16.22.201 management node + data node
node2 172.16.22.202 data node
node3 172.16.22.203 data node

Add the entries above to the /etc/hosts file on every node.
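For reference, a minimal sketch of what those /etc/hosts entries would look like (using the addresses above):

172.16.22.201 node1
172.16.22.202 node2
172.16.22.203 node3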

The operating system for this walkthrough is CentOS 7 and the Ceph release is Kraken (the "k" version).

Ceph deployment
1. Turn off the firewall and SELinux on every node
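On CentOS 7 this is typically done with the commands below (a sketch; adjust to your own security policy):

systemctl stop firewalld && systemctl disable firewalld
setenforce 0                                                   ## disable SELinux for the running system
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   ## make the change permanent across reboots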

2. Copy the management node's public key to the other nodes to enable passwordless login

ssh-keygen -t rsa
ssh-copy-id root@node1 .....
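To push the key to every node in one pass, a loop such as the following can be used (assuming the hostnames above resolve via /etc/hosts):

for n in node1 node2 node3; do ssh-copy-id root@$n; done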
3. Configure the required yum repository on each node

[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-kraken/el7/$basearch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-kraken/el7/noarch
enabled=1
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-kraken/el7/SRPMS
enabled=0
priority=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
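Assuming the file above is saved as /etc/yum.repos.d/ceph.repo on node1, it can simply be copied to the other nodes instead of being retyped, for example:

scp /etc/yum.repos.d/ceph.repo node2:/etc/yum.repos.d/
scp /etc/yum.repos.d/ceph.repo node3:/etc/yum.repos.d/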
4. Install the ceph-deploy tool on the management node

[root@node1 yum.repos.d]# yum -y install ceph-deploy
5. Create a working directory on the management node

[root@node1 ~]# mkdir -p /etc/ceph
6. Create two mons
[root@node1 ~]# cd /etc/ceph
[root@node1 ceph]# ls
[root@node1 ceph]# ceph-deploy new node1 node2   ## this creates a mon on node1 and on node2
[root@node1 ceph]# ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring
7. By default at least 1 mon and 2 osds are required

Add our service network to the configuration file ceph.conf as public network = 172.16.22.0/24; the result looks like this:

[root@node1 ceph]# vim ceph.conf
[global]
fsid = 2e6519d9-b733-446f-8a14-8622796f83ef
mon_initial_members = node1, node2
mon_host = 172.16.22.201,172.16.22.202
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = 172.16.22.0/24
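Optionally (this line is not part of the original configuration), the default replica count can be lowered so that a two-osd cluster can reach active+clean instead of showing the undersized warning seen later:

osd pool default size = 2    ## optional: the default is 3 replicas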
8. Install Ceph on the cluster nodes
[root@node1 ceph]# ceph-deploy install node1 node2
9. Initialize the mons and collect all keys

[root@node1 ceph]# ceph-deploy mon create-initial

[root@node1 ceph]# ls
ceph.bootstrap-mds.keyring ceph.bootstrap-osd.keyring ceph.client.admin.keyring ceph-deploy-ceph.log rbdmap
ceph.bootstrap-mgr.keyring ceph.bootstrap-rgw.keyring ceph.conf ceph.mon.keyring
10. View the cluster users that were created

[root@node1 ceph]# ceph auth list
installed auth entries:
client.admin
key: AQBavtVb2irGJBAAsbJcna7p5fdAXykjVbxznA==
caps: [mds] allow *
caps: [mgr] allow *
caps: [mon] allow *
caps: [osd] allow *
client.bootstrap-mds
key: AQBavtVbbNgXOxAAkvfj6L49OUfFX5XWd651AQ==
caps: [mon] allow profile bootstrap-mds
client.bootstrap-mgr
key: AQBbvtVbwiRJExAAdsh1uG+nL8l3UECzLT4+vw==
caps: [mon] allow profile bootstrap-mgr
client.bootstrap-osd
key: AQBbvtVbUV0NJxAAJAQ/yBs0c37C7ShBahwzsw==
caps: [mon] allow profile bootstrap-osd
client.bootstrap-rgw
key: AQBbvtVb/h/pOhAAmwK9r8DeqlOpQHxz9F/9eA==
caps: [mon] allow profile bootstrap-rgw
mgr.node1
key: AQBXvtVbeW/zKBAAfntYBheS7AkCwimr77PqEQ==
caps: [mon] allow *
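If only a single key needs to be checked, for example the admin credentials, it can also be fetched on its own (just an illustration):

[root@node1 ceph]# ceph auth get client.admin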
11. Create the osds

There are two ways to create an osd:

a. use a raw disk as the storage space;

b. use a directory or partition on an existing file system as the storage space. Officially it is recommended to give each osd and its journal a dedicated hard disk or partition.

1) Using a disk/partition

[root@node1 ceph]# ceph-deploy disk zap node1:/dev/sdb node2:/dev/sdb   ## the zap command wipes the partition table and disk contents
[root@node1 ceph]# ceph-deploy osd prepare node1:/dev/sdb node2:/dev/sdb
[root@node1 ceph]# ceph-deploy osd activate node1:/dev/sdb node2:/dev/sdb
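As an optional sanity check, the disks ceph-deploy sees on each node can be listed to confirm the device names and partition state:

[root@node1 ceph]# ceph-deploy disk list node1 node2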
2) Using a directory

[root@node1 ceph]# ssh node1 "mkdir /data/osd0; chown -R ceph:ceph /data/osd0"
[root@node1 ceph]# ssh node2 "mkdir /data/osd0; chown -R ceph:ceph /data/osd0"
[root@node1 ceph]# ceph-deploy osd prepare node1:/data/osd0 node2:/data/osd0
[root@node1 ceph]# ceph-deploy osd activate node1:/data/osd0 node2:/data/osd0
12. Use ceph-deploy to push the configuration file and admin key to all nodes

[root@node1 ceph]# ceph-deploy admin node1 node2
13. Add read permission to the keyring on each node

chmod +r /etc/ceph/ceph.client.admin.keyring
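Since the keyring exists on every node, one way to apply the change remotely from node1 (relying on the passwordless ssh set up earlier) is:

for n in node1 node2; do ssh $n chmod +r /etc/ceph/ceph.client.admin.keyring; done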
14. Check the cluster health
[root@node1 ceph]# ceph -s
cluster 2e6519d9-b733-446f-8a14-8622796f83ef
health HEALTH_WARN   ## this warning can be ignored
64 pgs degraded
64 pgs stuck unclean
64 pgs undersized
monmap e2: 1 mons at {node1=172.16.22.201:6789/0}
election epoch 4, quorum 0 node1
mgr active: node1
osdmap e9: 2 osds: 2 up, 2 in   ## check here that the number of osds is as expected
flags sortbitwise,require_jewel_osds,require_kraken_osds
pgmap v21: 64 pgs, 1 pools, 0 bytes data, 0 objects
22761 MB used, 15722 MB / 38484 MB avail
64 active+undersized+degraded

[root@node1 ceph]# ceph health
HEALTH_WARN 64 pgs degraded; 64 pgs stuck unclean; 64 pgs undersized
[root@node1 ceph]# ceph osd tree
ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.03677 root default
-2 0.01839     host node1
 0 0.01839         osd.0       up  1.00000          1.00000
-3 0.01839     host node2
 1 0.01839         osd.1       up  1.00000          1.00000
[root@node1 ceph]# ceph df
GLOBAL:
    SIZE       AVAIL      RAW USED     %RAW USED
    38484M     15719M       22764M         59.15
POOLS:
    NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
    rbd      0         0         0         4570M           0
With that we have finished building a two-node Ceph cluster with one mon and two osds.
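If you would rather clear the HEALTH_WARN than ignore it, one option (not a step from this walkthrough) is to shrink the replica count of the default rbd pool to match the two osds:

[root@node1 ceph]# ceph osd pool set rbd size 2        ## 2 replicas instead of the default 3
[root@node1 ceph]# ceph osd pool set rbd min_size 1    ## allow I/O with a single replica available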
Next we cover expanding the mons and osds.
Expanding the Ceph cluster
Add an osd on node3
1. Most of the commands below are executed on node1 (we made node1 the management node earlier)

# switch to the /etc/ceph directory on node1
[root@node1 ~]# cd /etc/ceph/
# install the ceph software on node3; note that this command is run on node1
[root@node1 ceph]# ceph-deploy install node3
# these two commands are run on node3
[root@node3 ceph]# mkdir -p /data/osd0
[root@node3 ceph]# chown -R ceph:ceph /data/osd0
# create the osd on node3
[root@node1 ceph]# ceph-deploy osd prepare node3:/data/osd0
[root@node1 ceph]# ceph-deploy osd activate node3:/data/osd0
# verify that node3 has been added to the cluster
[root@node1 ceph]# ceph -s
cluster 2e6519d9-b733-446f-8a14-8622796f83ef
health HEALTH_OK
monmap e2: 1 mons at {node1=172.16.22.201:6789/0}
election epoch 4, quorum 0 node1
mgr active: node1
osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds,require_kraken_osds
pgmap v7372: 64 pgs, 1 pools, 0 bytes data, 0 objects
35875 MB used, 21850 MB / 57726 MB avail
64 active+clean
[root@node1 ceph]#
[root@node1 ceph]#
[root@node1 ceph]# ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.05516 root default
-2 0.01839 host node1
0 0.01839 osd.0 up 1.00000 1.00000
-3 0.01839 host node2
1 0.01839 osd.1 up 1.00000 1.00000
-4 0.01839 host node3
2 0.01839 osd.2 up 1.00000 1.00000


Add a new mon on node3
Executed on node1:

[root@node1 ceph]# ceph-deploy mon add node3
[root@node1 ceph]# ceph -s
cluster 2e6519d9-b733-446f-8a14-8622796f83ef
health HEALTH_OK
monmap e4: 3 mons at {node1=172.16.22.201:6789/0,node2=172.16.22.202:6789/0,node3=172.16.22.203:6789/0}
election epoch 8, quorum 0,1,2 node1,node2,node3
mgr active: node1 standbys: node3, node2
osdmap e14: 3 osds: 3 up, 3 in
flags sortbitwise,require_jewel_osds,require_kraken_osds
pgmap v7517: 64 pgs, 1 pools, 0 bytes data, 0 objects
35890 MB used, 21835 MB / 57726 MB avail
64 active+clean
[root@node1 ceph]# ceph mon stat
e4: 3 mons at {node1=172.16.22.201:6789/0,node2=172.16.22.202:6789/0,node3=172.16.22.203:6789/0}, election epoch 10, quorum 0,1,2 node1,node2,node3
Add the metadata (mds) role on node1
The mds is only needed when using the Ceph file system (CephFS); since we are using block devices here, it is not actually required.

[root@node1 ceph]# ceph-deploy mds create node1
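To confirm the daemon came up, the mds state can be checked afterwards (a quick optional verification):

[root@node1 ceph]# ceph mds stat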
