Ceph distributed storage

https://blog.csdn.net/richardlygo/article/details/82430218

I. Ceph overview:

Overview: Ceph is a new generation of free-software distributed file system, designed and developed on the basis of Sage Weil's doctoral dissertation at the University of California, Santa Cruz. Its design goals are good scalability (PB level and above), high performance, and high reliability. The name Ceph comes from the mascot of UCSC (Ceph's birthplace), "Sammy", a banana slug, a shell-less mollusk related to the cephalopods; the many tentacles of a cephalopod are a metaphor for a highly parallel distributed file system. The design follows three principles: separation of data and metadata, dynamic distributed metadata management, and a unified, reliable distributed object storage mechanism.

II. Basic architecture:

1. Ceph is a highly available, easy-to-manage, open-source distributed storage system that can provide object storage, block storage, and file storage services in a single system. Ceph's core is the RADOS storage system, on top of which sit the object storage, block storage, and file system interfaces;

 

2. Storage types:

Block storage:

Block storage is what a typical DAS or SAN provides, and what OpenStack Cinder provides, for example iSCSI storage;

Object storage:

The concept of object storage emerged relatively late. The storage standards organization SNIA gave a definition as early as 2004, but early use was mostly confined to very large systems, so it was not widely known and related products stayed lukewarm. Only with the broad adoption of cloud computing and big data did it slowly enter public view. The block and file storage described above are basically used over a private local area network, whereas object storage plays to its strengths on the Internet and other public networks, mainly to serve massive data volumes and massive concurrent access. Applications built on object storage are mainly Internet applications (which also applies to cloud computing, since the applications most likely to migrate to the cloud are Internet-based ones that existed before the term "cloud" appeared). Essentially every mature public cloud, domestic or foreign, offers an object storage product;

This interface is typically presented as a QEMU driver or a kernel module, and it requires implementing the Linux block device interface or the QEMU block driver interface. Examples include Sheepdog, AWS EBS, the cloud disks of public clouds such as QingCloud and Alibaba Cloud, and Ceph's RBD (RBD is Ceph's block-storage-oriented interface);
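As a hedged illustration of how such a block interface is used in practice (not part of the case study below; it assumes a pool named rbd already exists and the client holds admin credentials), an RBD image could be created and mapped roughly like this:

rbd create disk01 --size 4096        ## create a 4 GB image named disk01 in the default pool
rbd ls -l                            ## list the images in the pool
sudo rbd map disk01                  ## map it through the kernel RBD module, e.g. as /dev/rbd0
sudo mkfs.xfs /dev/rbd0              ## put a local filesystem on the block device
sudo mount /dev/rbd0 /mnt            ## and use it like any other disk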

 

 

File system storage:

It is of the same type as a conventional file system such as ext4, but the difference is that the distributed back end provides the ability to parallelize access. Examples include Ceph's CephFS (CephFS is Ceph's file-storage-oriented interface), but GlusterFS, HDFS, and other file storage with non-POSIX interfaces are sometimes also put in this category. NFS and NAS of course also belong to file system storage;

 

Summary: a comparison of the three storage types (presented as a table in the original post);

3. Ceph basic framework (presented as an architecture diagram in the original post);

III. Architectural components in detail:

RADOS: the foundation on which all the other client interfaces are deployed. It consists of the following components:
  OSD: Object Storage Device, the entity that provides the data storage resources;
  Monitor: maintains heartbeat information for every node in the Ceph cluster and with it the global state of the whole cluster;
  MDS: the Ceph Metadata Server, the file-system metadata service node. MDS supports distributed deployment across multiple machines for high availability.
A typical RADOS deployment consists of a small number of Monitors and a large number of OSD storage devices, and on a heterogeneous, dynamically scalable cluster it can provide a stable, high-performance storage service behind a single logical object storage interface.

 

Ceph client interfaces (Clients): above the underlying RADOS, the Ceph architecture provides LIBRADOS, RADOSGW, RBD, and Ceph FS, collectively called the Ceph client interfaces. In a nutshell, RADOSGW, RBD, and Ceph FS are built on the multi-language programming interface that LIBRADOS provides, so the relationship between them is a layered, stepwise one.
1. RADOSGW: the Ceph object storage gateway, a RESTful interface built on librados that gives clients an object storage interface. The Ceph API currently supports two interfaces:
S3-compatible: an interface compatible with most of the Amazon S3 RESTful API.
Swift-compatible: an interface compatible with most of the OpenStack Swift API. Ceph object storage uses the gateway daemon radosgw; the structure of radosgw is shown in the figure in the original post.
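As a hedged sketch (not part of the case study below, and only meaningful once a radosgw daemon is running), an S3-style user could be created on the gateway and its keys handed to any S3 client; the uid and display name here are made up for illustration:

radosgw-admin user create --uid=demo --display-name="Demo User"    ## prints an access_key and secret_key for the new user
## point any S3 client (s3cmd, aws-cli, boto, ...) at the radosgw endpoint with those keys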


2. RBD: a block is a sequence of bytes (for example, a 512-byte block of data). Block-based storage interfaces are the most common way to store data on media such as hard disks, floppy disks, tapes, and even conventional 9-track tape. The ubiquity of the block device interface makes a virtual block device an ideal choice for interacting with a mass-data storage system such as Ceph. In a Ceph cluster, Ceph block devices support thin provisioning, resizing, and storing data striped across the cluster. Ceph block devices can take advantage of RADOS capabilities such as snapshots, replication, and data consistency. Ceph's RADOS block device (RBD) interacts with RADOS through the rbd kernel module or the librbd library. The RBD structure is shown in the figure in the original post.
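The snapshot and clone capabilities mentioned above map to plain rbd subcommands; a hedged example (the image and snapshot names are arbitrary):

rbd snap create rbd/disk01@snap1               ## take a point-in-time snapshot of the image
rbd snap protect rbd/disk01@snap1              ## protect the snapshot so it can be cloned
rbd clone rbd/disk01@snap1 rbd/disk01-clone    ## create a copy-on-write clone backed by the snapshot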

3. Ceph FS: the Ceph file system (CephFS) is a POSIX-compliant file system that uses a Ceph storage cluster to store its data. The CephFS structure is shown in the figure in the original post.

 

Further reading: https://www.sohu.com/a/144775333_151779

 

IV. Ceph data storage process:

When the Ceph storage cluster receives a file from a client, it cuts the file into one or more objects; these objects are then grouped according to certain policies and stored on the OSD nodes of the cluster. The storage process is illustrated in the figure in the original post.


As the figure shows, computing the placement of an object takes two stages:
1. Mapping the object to a PG. A PG (Placement Group) is a logical collection of objects and the basic unit by which the system distributes data to OSD nodes; objects in the same PG are distributed to the same set of OSD nodes (one primary OSD node plus several backup OSD nodes). The PG of an object is obtained by hashing the object ID and combining the result with some other correction parameters.
2. Mapping the PG to OSDs. Based on the PG ID and the current state of the system, RADOS uses the corresponding hash algorithm (CRUSH) to distribute each PG to the appropriate OSDs in the cluster.
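On a running cluster this two-stage mapping can be inspected directly; a hedged example (the pool and object names are arbitrary, and the object does not even have to exist yet):

ceph osd map rbd myobject    ## prints the PG that 'myobject' hashes to and the set of OSDs that PG currently maps to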

 

V. Ceph advantages:
1. Ceph's core, RADOS, usually consists of a small number of Monitor processes responsible for cluster management and a large number of OSD processes responsible for data storage. It is a distributed architecture with no central node, and data blocks are stored as multiple replicas, so it has good scalability and high availability.
2. Ceph provides a variety of client interfaces, including an object storage interface, a block storage interface, and a file system interface, which gives it broad applicability; and because clients exchange data directly with the OSD devices that store it, data access performance is greatly improved.
3. As a distributed file system, Ceph adds replication and fault tolerance while maintaining POSIX compatibility. Since the end of March 2010 (starting from kernel version 2.6.34), Ceph can be found in the Linux kernel: ceph.ko has been merged as one of Linux's alternative file systems. Ceph is not just a file system, but also an object storage environment with enterprise-grade features.

 

VI. Case: building Ceph distributed storage

Case environment:

System                    IP address          Host name (login user)    Roles
CentOS 7.4 64Bit 1708     192.168.100.101     dlp (dhhy)                admin-node
CentOS 7.4 64Bit 1708     192.168.100.102     node1 (dhhy)              mon-node, osd0-node, mds-node
CentOS 7.4 64Bit 1708     192.168.100.103     node2 (dhhy)              mon-node, osd1-node
CentOS 7.4 64Bit 1708     192.168.100.104     ceph-client (root)        ceph-client

 

Case steps:

1. Configure the basic environment;
2. Configure the NTP time service;
3. Install the Ceph packages on the dlp node, node1, node2, and the client node;
4. On the dlp management node, register the storage nodes and install/record the node information;
5. Configure the Ceph mon monitoring processes;
6. Configure the Ceph osd storage processes;
7. Verify the Ceph cluster status information;
8. Configure the Ceph mds metadata process;
9. Configure the Ceph client;
10. Test storage from the Ceph client;
11. Troubleshooting notes.
 

 

Configure the basic environment:
[root@dlp ~]# useradd dhhy

[root@dlp ~]# yum -y remove epel-release

[root@dlp ~]# echo "dhhy" |passwd --stdin dhhy

[root@dlp ~]# cat <<END >>/etc/hosts

192.168.100.101 dlp

192.168.100.102 node1

192.168.100.103 node2

192.168.100.104 ceph-client

END

[root@dlp ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy

[root@dlp ~]# chmod 0440 /etc/sudoers.d/dhhy
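As a hedged sanity check (not part of the original steps), passwordless sudo and the new hosts entries can be verified before moving on:

[root@dlp ~]# su - dhhy -c 'sudo whoami'    ## should print "root" without asking for a password
[root@dlp ~]# ping -c 1 node1               ## the /etc/hosts entries should resolve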

 

[root@node1 ~]# useradd dhhy

[root@node1 ~]# yum -y remove epel-release

[root@node1 ~]# echo "dhhy" |passwd --stdin dhhy

[root@node1 ~]# cat <<END >>/etc/hosts

192.168.100.101 dlp

192.168.100.102 node1

192.168.100.103 node2

192.168.100.104 ceph-client

END

[root@node1 ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy

[root@node1 ~]# chmod 0440 /etc/sudoers.d/dhhy

 

[root@node2 ~]# useradd dhhy

[root@node2 ~]# yum -y remove epel-release

[root@node2 ~]# echo "dhhy" |passwd --stdin dhhy

[root@node2 ~]# cat <<END >>/etc/hosts

192.168.100.101 dlp

192.168.100.102 node1

192.168.100.103 node2

192.168.100.104 ceph-client

END

[root@node2 ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy

[root@node2 ~]# chmod 0440 /etc/sudoers.d/dhhy

 

[root@ceph-client ~]# useradd dhhy

[root@ceph-client ~]# echo "dhhy" |passwd --stdin dhhy

[root@ceph-client ~]# cat <<END >>/etc/hosts

192.168.100.101 dlp

192.168.100.102 node1

192.168.100.103 node2

192.168.100.104 ceph-client

END

[root@ceph-client ~]# echo "dhhy ALL = (root) NOPASSWD:ALL" >> /etc/sudoers.d/dhhy

[root@ceph-client ~]# chmod 0440 /etc/sudoers.d/dhhy

 

Configure the NTP time service:
[root@dlp ~]# yum -y install ntp ntpdate

[root@dlp ~]# sed -i '/^server/s/^/#/g' /etc/ntp.conf

[root@dlp ~]# sed -i '25aserver 127.127.1.0\nfudge 127.127.1.0 stratum 8' /etc/ntp.conf

[root@dlp ~]# systemctl start ntpd

[root@dlp ~]# systemctl enable ntpd

[root@dlp ~]# netstat -utpln
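A hedged check (not in the original) that the local NTP server is up and serving the LOCAL(0) reference clock configured above:

[root@dlp ~]# ntpq -p    ## 127.127.1.0 should appear in the peer list, eventually marked with '*' once selected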

 

[root@node1 ~]# yum -y install ntpdate

[root@node1 ~]# /usr/sbin/ntpdate 192.168.100.101

[root@node1 ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >>/etc/rc.local

[root@node1 ~]# chmod +x /etc/rc.local

 

[root@node2 ~]# yum -y install ntpdate

[root@node2 ~]# /usr/sbin/ntpdate 192.168.100.101

[root@node2 ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >>/etc/rc.local

[root@node2 ~]# chmod +x /etc/rc.local

 

[root@ceph-client ~]# yum -y install ntpdate

[root@ceph-client ~]# /usr/sbin/ntpdate 192.168.100.101

[root@ceph-client ~]# echo "/usr/sbin/ntpdate 192.168.100.101" >>/etc/rc.local

[root@ceph-client ~]# chmod +x /etc/rc.local

 

Install the Ceph packages on the dlp node, node1, node2, and the client node:
[root@dlp ~]# yum -y install yum-utils

[root@dlp ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/

[root@dlp ~]# yum -y install epel-release --nogpgcheck

[root@dlp ~]# cat <<END >>/etc/yum.repos.d/ceph.repo

[Ceph]

name=Ceph packages for \$basearch

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

 

[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

 

[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

END

[root@dlp ~]# ls /etc/yum.repos.d/    ## make sure the default official CentOS repos, the epel repo, and the NetEase (163) ceph repo are all present before installing;

bak                      CentOS-fasttrack.repo  ceph.repo
CentOS-Base.repo         CentOS-Media.repo      dl.fedoraproject.org_pub_epel_7_x86_64_.repo
CentOS-CR.repo           CentOS-Sources.repo    epel.repo
CentOS-Debuginfo.repo    CentOS-Vault.repo      epel-testing.repo

[root@dlp ~]# su - dhhy

[dhhy@dlp ~]$ mkdir ceph-cluster    ## create the ceph working directory under the home directory

[dhhy@dlp ~]$ cd ceph-cluster

[dhhy@dlp ceph-cluster]$ sudo yum -y install ceph-deploy    ## install the ceph deployment tool

[dhhy@dlp ceph-cluster]$ sudo yum -y install ceph --nogpgcheck    ## install the main ceph packages
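A hedged verification (not in the original) that the repositories and packages landed as expected:

[dhhy@dlp ceph-cluster]$ yum repolist enabled | grep -i ceph    ## the jewel repos from mirrors.163.com should be listed
[dhhy@dlp ceph-cluster]$ ceph --version                         ## should report a 10.2.x (jewel) build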

 

[root@node1 ~]# yum -y install yum-utils

[root@node1 ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/

[root@node1 ~]# yum -y install epel-release --nogpgcheck

[root@node1 ~]# cat <<END >>/etc/yum.repos.d/ceph.repo

[Ceph]

name=Ceph packages for \$basearch

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

 

[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

 

[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

END

[root@node1 ~]# ls /etc/yum.repos.d/    ## make sure the default official CentOS repos, the epel repo, and the NetEase (163) ceph repo are all present before installing;

bak                      CentOS-fasttrack.repo  ceph.repo
CentOS-Base.repo         CentOS-Media.repo      dl.fedoraproject.org_pub_epel_7_x86_64_.repo
CentOS-CR.repo           CentOS-Sources.repo    epel.repo
CentOS-Debuginfo.repo    CentOS-Vault.repo      epel-testing.repo

[root@node1 ~]# su - dhhy

[dhhy@node1 ~]$ mkdir ceph-cluster

[dhhy@node1 ~]$ cd ceph-cluster

[dhhy@node1 ceph-cluster]$ sudo yum -y install ceph-deploy

[dhhy@node1 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck

[dhhy@node1 ceph-cluster]$ sudo yum -y install deltarpm

 

[root@node2 ~]# yum -y install yum-utils

[root@node2 ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/

[root@node2 ~]# yum -y install epel-release --nogpgcheck

[root@node2 ~]# cat <<END >>/etc/yum.repos.d/ceph.repo

[Ceph]

name=Ceph packages for \$basearch

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

 

[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

 

[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

END

[root@node2 ~]# ls /etc/yum.repos.d/    ## make sure the default official CentOS repos, the epel repo, and the NetEase (163) ceph repo are all present before installing;

bak                      CentOS-fasttrack.repo  ceph.repo
CentOS-Base.repo         CentOS-Media.repo      dl.fedoraproject.org_pub_epel_7_x86_64_.repo
CentOS-CR.repo           CentOS-Sources.repo    epel.repo
CentOS-Debuginfo.repo    CentOS-Vault.repo      epel-testing.repo

[root@node2 ~]# su - dhhy

[dhhy@node2 ~]$ mkdir ceph-cluster

[dhhy@node2 ~]$ cd ceph-cluster

[dhhy@node2 ceph-cluster]$ sudo yum -y install ceph-deploy

[dhhy@node2 ceph-cluster]$ sudo yum -y install ceph --nogpgcheck

[dhhy@node2 ceph-cluster]$ sudo yum -y install deltarpm

 

[root@ceph-client ~]# yum -y install yum-utils

[root@ceph-client ~]# yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/

[root@ceph-client ~]# yum -y install epel-release --nogpgcheck

[root@ceph-client ~]# cat <<END >>/etc/yum.repos.d/ceph.repo

[Ceph]

name=Ceph packages for \$basearch

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/\$basearch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

 

[Ceph-noarch]

name=Ceph noarch packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

 

[ceph-source]

name=Ceph source packages

baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS

enabled=1

gpgcheck=0

type=rpm-md

gpgkey=https://mirrors.163.com/ceph/keys/release.asc

priority=1

END

[root@ceph-client ~]# ls /etc/yum.repos.d/    ## make sure the default official CentOS repos, the epel repo, and the NetEase (163) ceph repo are all present before installing;

bak                      CentOS-fasttrack.repo  ceph.repo
CentOS-Base.repo         CentOS-Media.repo      dl.fedoraproject.org_pub_epel_7_x86_64_.repo
CentOS-CR.repo           CentOS-Sources.repo    epel.repo
CentOS-Debuginfo.repo    CentOS-Vault.repo      epel-testing.repo

[root@ceph-client ~]# yum -y install yum-plugin-priorities

[root@ceph-client ~]# yum -y install ceph ceph-radosgw --nogpgcheck

 

On the dlp management node, register the storage nodes and set up passwordless access:
[dhhy@dlp ceph-cluster]$ pwd    ## the current directory must be the ceph-cluster working directory

/home/dhhy/ceph-cluster

[dhhy@dlp ceph-cluster]$ ssh-keygen -t rsa    ## the management node manages the remote nodes, so create a key pair and copy the public key to each node

[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@dlp

[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@node1

[dhhy@dlp ceph-cluster]$ ssh-copy-id dhhy@node2

[dhhy@dlp ceph-cluster]$ ssh-copy-id root@ceph-client

[dhhy@dlp ceph-cluster]$ cat <<END >>/home/dhhy/.ssh/config

Host dlp

   Hostname dlp

   User dhhy

Host node1

   Hostname node1

   User dhhy

Host node2

   Hostname node2

   User dhhy

END

[dhhy@dlp ceph-cluster]$ chmod 644 /home/dhhy/.ssh/config

[dhhy@dlp ceph-cluster]$ ceph-deploy new node1 node2    ## initialize the cluster with node1 and node2 as monitor nodes

[dhhy@dlp ceph-cluster]$ cat <<END >>/home/dhhy/ceph-cluster/ceph.conf

osd pool default size = 2

END
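Optionally (a hedged addition, not in the original walkthrough), the public network can be pinned in the same file so the daemons bind to the expected interface:

[dhhy@dlp ceph-cluster]$ cat <<END >>/home/dhhy/ceph-cluster/ceph.conf
public network = 192.168.100.0/24
END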

[dhhy@dlp ceph-cluster]$ ceph-deploy install node1 node2    ## install ceph on the nodes

Configure the Ceph mon monitoring processes:
[dhhy@dlp ceph-cluster]$ ceph-deploy mon create-initial    ## create and initialize the mon nodes

Note: the node configuration files live in the /etc/ceph/ directory on each node; the configuration from the dlp management node is synced there automatically;
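A hedged way (not shown in the original) to confirm the monitors came up is to check the daemon and its listening port on one of the nodes; the unit name assumes the systemd units that the jewel packages install on CentOS 7:

[dhhy@dlp ceph-cluster]$ ssh dhhy@node1 'sudo systemctl status ceph-mon@node1'    ## the mon daemon should be active (running)
[dhhy@dlp ceph-cluster]$ ssh dhhy@node1 'netstat -utpln | grep 6789'              ## the monitor listens on TCP port 6789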

 

Configure the Ceph osd storage processes:
Configure osd0 on storage node node1:

[dhhy@dlp ceph-cluster]$ ssh dhhy@node1    ## log in to the node to create the directory where the osd will store its data

[dhhy@node1 ~]$ sudo fdisk /dev/sdb

(inside fdisk: n, p, then press Enter to accept the defaults, and finally w to write the partition table)

[dhhy@node1 ~]$ sudo partx -a /dev/sdb

[dhhy@node1 ~]$ sudo mkfs -t xfs /dev/sdb1

[dhhy@node1 ~]$ sudo mkdir /var/local/osd0

[dhhy@node1 ~]$ sudo vi /etc/fstab

/dev/sdb1 /var/local/osd0 xfs defaults 0 0

:wq

[dhhy@node1 ~]$ sudo mount -a

[dhhy@node1 ~]$ sudo chmod 777 /var/local/osd0

[dhhy@node1 ~]$ sudo chown ceph:ceph /var/local/osd0/

[dhhy@node1 ~]$ ls -ld /var/local/osd0/

[dhhy@node1 ~]$ df -hT

[dhhy@node1 ~]$ exit

 

Configure osd1 on storage node node2:

[dhhy@dlp ceph-cluster]$ ssh dhhy@node2

[dhhy@node2 ~]$ sudo fdisk /dev/sdb

(inside fdisk: n, p, then press Enter to accept the defaults, and finally w to write the partition table)

[dhhy@node2 ~]$ sudo partx -a /dev/sdb

[dhhy@node2 ~]$ sudo mkfs -t xfs /dev/sdb1

[dhhy@node2 ~]$ sudo mkdir /var/local/osd1

[dhhy@node2 ~]$ sudo vi /etc/fstab

/dev/sdb1 /var/local/osd1 xfs defaults 0 0

:wq

[dhhy@node2 ~]$ sudo mount -a

[dhhy@node2 ~]$ sudo chmod 777 /var/local/osd1

[dhhy@node2 ~]$ sudo chown ceph:ceph /var/local/osd1/

[dhhy@node2 ~]$ ls -ld /var/local/osd1/

[dhhy@node2 ~]$ df -hT

[dhhy@node2 ~]$ exit

 

Register the osd nodes from the dlp management node:

[dhhy@dlp ceph-cluster]$ ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1    ## prepare the osds, specifying the node and the data directory on each node

[dhhy@dlp ceph-cluster]$ chmod +r /home/dhhy/ceph-cluster/ceph.client.admin.keyring

[dhhy@dlp ceph-cluster]$ ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1

## activate the osd nodes

[dhhy@dlp ceph-cluster]$ ceph-deploy admin node1 node2    ## copy the admin keyring and configuration from the management node to the nodes

[dhhy@dlp ceph-cluster]$ sudo cp /home/dhhy/ceph-cluster/ceph.client.admin.keyring /etc/ceph/

[dhhy@dlp ceph-cluster]$ sudo cp /home/dhhy/ceph-cluster/ceph.conf /etc/ceph/

[dhhy@dlp ceph-cluster]$ ls /etc/ceph/

ceph.client.admin.keyring  ceph.conf  rbdmap

[dhhy@dlp ceph-cluster]$ ceph quorum_status --format json-pretty    ## view the Ceph cluster quorum details

{

    "election_epoch": 4,

    "quorum": [

        0,

        1

    ],

    "quorum_names": [

        "node1",

        "node2"

    ],

    "quorum_leader_name": "node1",

    "monmap": {

        "epoch": 1,

        "fsid": "dc679c6e-29f5-4188-8b60-e9eada80d677",

        "modified": "2018-06-02 23:54:34.033254",

        "created": "2018-06-02 23:54:34.033254",

        "mons": [

            {

                "rank": 0,

                "name": "node1",

                "addr": "192.168.100.102:6789\/0"

            },

            {

                "rank": 1,

                "name": "node2",

                "addr": "192.168.100.103:6789\/0"

            }

        ]

    }

}

 

Verify the Ceph cluster status information:
[dhhy@dlp ceph-cluster]$ ceph health

HEALTH_OK

[dhhy@dlp ceph-cluster]$ ceph -s    ## view the Ceph cluster status

    cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2

     health HEALTH_OK

     monmap e1: 2 mons at {node1=192.168.100.102:6789/0,node2=192.168.100.103:6789/0}

            election epoch 6, quorum 0,1 node1,node2

     osdmap e10: 2 osds: 2 up, 2 in

            flags sortbitwise,require_jewel_osds

      pgmap v20: 64 pgs, 1 pools, 0 bytes data, 0 objects

            10305 MB used, 30632 MB / 40938 MB avail    ## used, remaining / total capacity

                  64 active+clean

[dhhy@dlp ceph-cluster]$ ceph osd tree

ID WEIGHT  TYPE NAME      UP/DOWN REWEIGHT PRIMARY-AFFINITY

-1 0.03897 root default                                     

-2 0.01949     host node1                                   

 0 0.01949         osd.0       up  1.00000          1.00000

-3 0.01949     host node2                                   

 1 0.01949         osd.1       up  1.00000          1.00000

 

[dhhy@dlp ceph-cluster]$ ssh dhhy@node1    ## verify node1's listening ports, its configuration file, and its disk usage

[dhhy@node1 ~]$ df -hT |grep sdb1

/dev/sdb1                   xfs        20G  5.1G   15G   26% /var/local/osd0           

[dhhy@node1 ~]$ du -sh /var/local/osd0/

5.1G /var/local/osd0/

[dhhy@node1 ~]$ ls /var/local/osd0/

activate.monmap  active  ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  systemd  type  whoami

[dhhy@node1 ~]$ ls /etc/ceph/

ceph.client.admin.keyring  ceph.conf  rbdmap  tmppVBe_2

[dhhy@node1 ~]$ cat /etc/ceph/ceph.conf

[global]

fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb

mon_initial_members = node1, node2

mon_host = 192.168.100.102,192.168.100.103

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd pool default size = 2

[dhhy@node1 ~]$ exit

 

[dhhy@dlp ceph-cluster]$ ssh dhhy@node2    ## verify node2's listening ports, its configuration file, and its disk usage

[dhhy@node2 ~]$ df -hT |grep sdb1

/dev/sdb1                   xfs        20G  5.1G   15G   26% /var/local/osd1

[dhhy@node2 ~]$ du -sh /var/local/osd1/

5.1G /var/local/osd1/

[dhhy@node2 ~]$ ls /var/local/osd1/

activate.monmap  active  ceph_fsid  current  fsid  journal  keyring  magic  ready  store_version  superblock  systemd  type  whoami

[dhhy@node2 ~]$ ls /etc/ceph/

ceph.client.admin.keyring  ceph.conf  rbdmap  tmpmB_BTa

[dhhy@node2 ~]$ cat /etc/ceph/ceph.conf

[global]

fsid = 0fcdfa46-c8b7-43fc-8105-1733bce3bfeb

mon_initial_members = node1, node2

mon_host = 192.168.100.102,192.168.100.103

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd pool default size = 2

[dhhy@node2 ~]$ exit

 

Configure the Ceph mds metadata process:
[dhhy@dlp ceph-cluster]$ ceph-deploy mds create node1

[dhhy@dlp ceph-cluster]$ ssh dhhy@node1

[dhhy@node1 ~]$ netstat -utpln |grep 68

(No info could be read for "-p": geteuid()=1000 but you should be root.)

tcp        0      0 0.0.0.0:6800            0.0.0.0:*               LISTEN      -                   

tcp        0      0 0.0.0.0:6801            0.0.0.0:*               LISTEN      -                   

tcp        0      0 0.0.0.0:6802            0.0.0.0:*               LISTEN      -                   

tcp        0      0 0.0.0.0:6803            0.0.0.0:*               LISTEN      -                   

tcp        0      0 0.0.0.0:6804            0.0.0.0:*               LISTEN      -                   

tcp        0      0 192.168.100.102:6789    0.0.0.0:*               LISTEN      -

[dhhy@node1 ~]$ exit
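A hedged check (not in the original) that the metadata server registered with the cluster; before a file system exists it is normally reported as standby:

[dhhy@dlp ceph-cluster]$ ceph mds stat    ## should show the mds daemon on node1 (up:standby until a filesystem is created)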

 

Configure the Ceph client:
[dhhy@dlp ceph-cluster]$ ceph-deploy install ceph-client

[dhhy@dlp ceph-cluster]$ ceph-deploy admin ceph-client

[dhhy@dlp ceph-cluster]$ ssh root@ceph-client

[root@ceph-client ~]# chmod +r /etc/ceph/ceph.client.admin.keyring

[root@ceph-client ~]# exit

[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_data 128    ## create the data pool

pool 'cephfs_data' created

[dhhy@dlp ceph-cluster]$ ceph osd pool create cephfs_metadata 128    ## create the metadata pool

pool 'cephfs_metadata' created

[dhhy@dlp ceph-cluster]$ ceph fs new cephfs cephfs_data cephfs_metadata    ## create the file system (note that ceph fs new takes the metadata pool first, then the data pool)

new fs with metadata pool 1 and data pool 2

[dhhy@dlp ceph-cluster]$ ceph fs ls    ## view the file system

name: cephfs, metadata pool: cephfs_data, data pools: [cephfs_metadata ]

[dhhy@dlp ceph-cluster]$ ceph -s

    cluster 24fb6518-8539-4058-9c8e-d64e43b8f2e2

     health HEALTH_WARN

            clock skew detected on mon.node2

            too many PGs per OSD (320 > max 300)

            Monitor clock skew detected

     monmap e1: 2 mons at {node1=192.168.100.102:6789/0,node2=192.168.100.103:6789/0}

            election epoch 6, quorum 0,1 node1,node2

      fsmap e5: 1/1/1 up {0=node1=up:active}

     osdmap e17: 2 osds: 2 up, 2 in

            flags sortbitwise,require_jewel_osds

      pgmap v54: 320 pgs, 3 pools, 4678 bytes data, 24 objects

            10309 MB used, 30628 MB / 40938 MB avail

                 320 active+clean

 

Test storage from the Ceph client:
[dhhy@dlp ceph-cluster]$ ssh root@ceph-client

[root@ceph-client ~]# mkdir /mnt/ceph

[root@ceph-client ~]# grep key /etc/ceph/ceph.client.admin.keyring |awk '{print $3}' >>/etc/ceph/admin.secret

[root@ceph-client ~]# cat /etc/ceph/admin.secret

AQCd/x9bsMqKFBAAZRNXpU5QstsPlfe1/FvPtQ==

[root@ceph-client ~]# mount -t ceph 192.168.100.102:6789:/  /mnt/ceph/ -o name=admin,secretfile=/etc/ceph/admin.secret

[root@ceph-client ~]# df -hT |grep ceph

192.168.100.102:6789:/      ceph       40G   11G   30G   26% /mnt/ceph

[root@ceph-client ~]# dd if=/dev/zero of=/mnt/ceph/1.file bs=1G count=1

1+0 records in

1+0 records out

1073741824 bytes (1.1 GB) copied, 14.2938 s, 75.1 MB/s

[root@ceph-client ~]# ls /mnt/ceph/

1.file
[root@ceph-client ~]# df -hT |grep ceph

192.168.100.102:6789:/      ceph       40G   13G   28G   33% /mnt/ceph

 

[root@ceph-client ~]# mkdir /mnt/ceph1

[root@ceph-client ~]# mount -t ceph 192.168.100.103:6789:/  /mnt/ceph1/ -o name=admin,secretfile=/etc/ceph/admin.secret

[root@ceph-client ~]# df -hT |grep ceph

192.168.100.102:6789:/      ceph       40G   15G   26G   36% /mnt/ceph

192.168.100.103:6789:/      ceph       40G   15G   26G   36% /mnt/ceph1

[root@ceph-client ~]# ls /mnt/ceph1/

1.file  2.file
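To make the CephFS mount survive a reboot, a hedged /etc/fstab entry could be added on the client (same options as the manual mount above; _netdev delays the mount until the network is up):

192.168.100.102:6789:/  /mnt/ceph  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev,noatime  0  0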

 

Troubleshooting notes:
1. If a problem occurs during configuration and a new cluster has to be created or Ceph reinstalled, the existing Ceph cluster data must be removed first with the following commands;

[dhhy@dlp ceph-cluster]$ ceph-deploy purge node1 node2

[dhhy@dlp ceph-cluster]$ ceph-deploy purgedata node1 node2

[dhhy@dlp ceph-cluster]$ ceph-deploy forgetkeys && rm ceph.*

 

2. When installing Ceph from the dlp node onto the other nodes and the client, yum may time out, mostly because of network issues; simply re-run the install command a few times;

 

3. When running ceph-deploy commands on the dlp management node to configure the other nodes, the current directory must be /home/dhhy/ceph-cluster/, otherwise the ceph.conf configuration file cannot be found;

 

4. On the osd nodes, the /var/local/osd*/ data directories must have permissions 777, and their owner and group must be ceph;

 

5. The following problem may appear when installing Ceph on the dlp management node (the error is shown as a screenshot in the original post):

Solution:

1. Re-install the epel-release package with yum on node1 or node2;

2. If that still does not solve it, download the package and install it locally (the command is shown in the original post);

6. If the master configuration file /home/dhhy/ceph-cluster/ceph.conf is changed on the dlp management node, it must be pushed out to the configuration files on the nodes; the command is as follows:
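A hedged sketch of the push (the original shows it only as a screenshot); ceph-deploy can push the working-directory ceph.conf to the nodes:

[dhhy@dlp ceph-cluster]$ ceph-deploy --overwrite-conf config push node1 node2    ## overwrite /etc/ceph/ceph.conf on the nodes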

 

 

After a node receives the new configuration file, its daemons need to be restarted:
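A hedged example of the restart (the unit names assume the systemd targets that the jewel packages install on CentOS 7):

[dhhy@node1 ~]$ sudo systemctl restart ceph-mon.target    ## restart the monitor daemon(s) on the node
[dhhy@node1 ~]$ sudo systemctl restart ceph-osd.target    ## restart the osd daemon(s) on the node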

 

 

 

7. When viewing the cluster status on the dlp management node, a warning like the one shown below may appear; the cause is inconsistent time between the nodes;

 

 

Workaround: restart the ntpd service on the dlp node and re-synchronize the time on the other nodes, as shown below:
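A hedged sketch of the resynchronization (the original shows it only as screenshots):

[root@dlp ~]# systemctl restart ntpd                   ## restart the time service on the dlp node
[root@node1 ~]# /usr/sbin/ntpdate 192.168.100.101      ## re-sync each node against dlp
[root@node2 ~]# /usr/sbin/ntpdate 192.168.100.101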

 

 


----------------
Disclaimer: this article was originally written by the CSDN blogger "Richardlygo" under the CC 4.0 BY-SA license; when reproducing it, please attach the original source link and this statement.
Original link: https://blog.csdn.net/richardlygo/article/details/82430218


 
