Deploying a single-node Ceph environment on Linux

 

Ceph is in full swing, and many companies use it as their storage system. For day-to-day study it is rarely practical to install a multi-node Ceph cluster, so this article describes how to deploy a single-node Ceph system. The back-end storage engine installed here is BlueStore, which offers better performance and is a mature engine.

Environment Description

  1. Single node: either a VMware or VirtualBox virtual machine
  2. Operating system: CentOS 7.4 or Ubuntu 18.04
  3. Deployment tool: ceph-deploy

Installing ceph-deploy

We cover the two more popular distributions, CentOS and Ubuntu. Other distributions are not covered in this article for now.

CentOS 7.4

Prepare a machine running CentOS 7.4 (either a physical machine or a virtual machine) and configure passwordless SSH login to it.
Note that ceph-deploy version 2.0 is required here; do not use the distribution's default version (which may be 1.x). The default version is too old and may not support the BlueStore features.
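For reference, passwordless SSH can be set up roughly as follows; the hostname zhangsn used throughout this article is just an example, and logging in as root is an assumption, so substitute your own node name and user.

ssh-keygen -t rsa            # accept the defaults, empty passphrase
ssh-copy-id root@zhangsn     # zhangsn/root are this article's example node and user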

yum install https://download.ceph.com/rpm-luminous/el7/noarch/ceph-deploy-2.0.0-0.noarch.rpm

Ubuntu 18.04

On Ubuntu, add the Ceph installation source first: if ceph-deploy is installed from the default sources, the version is too low and the subsequent steps will run into problems.

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

In the command above, {ceph-stable-release} stands for the Ceph release to install and must be replaced when actually running it: to install the L release, replace it with the string luminous; to install the M release, replace it with the string mimic. The following is a concrete example.

echo deb https://download.ceph.com/debian-luminous/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

After the installation source has been added, update the apt cache, and then ceph-deploy can be installed using the following commands:

sudo apt update
sudo apt -y install ceph-deploy
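On either distribution, it is worth confirming that a 2.x version was installed before continuing:

ceph-deploy --version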

Installing the Ceph packages

Creating a new cluster
After ceph-deploy is installed, it can be used to install the Ceph software itself (the remaining steps are the same regardless of distribution). First, a Ceph cluster must be created. Creating a cluster is also done with the ceph-deploy command; in essence it creates a configuration file and key files.

mkdir myceph
cd myceph
ceph-deploy new zhangsn
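After the new subcommand finishes, the working directory should contain roughly the following files (typical of a ceph-deploy 2.0 run):

$ ls
ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring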

Because ours is a single-node Ceph cluster, the number of replicas needs to be set to 1, which keeps things simple. The way to do this is to add the following to ceph.conf.

[global]
osd pool default size = 1
osd pool default min size = 1
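After the edit, ceph.conf should look roughly like the sketch below. The fsid and the monitor address are generated for your environment, so the values shown here are placeholders.

[global]
fsid = <generated-uuid>
mon_initial_members = zhangsn
mon_host = <node-ip>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 1
osd pool default min size = 1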

Installing the Ceph software
After the configuration file has been modified, we can install the Ceph packages. To install the L release, for example, execute the following command.

ceph-deploy install --release luminous zhangsn

Initializing the Monitor
The status and configuration information of the entire Ceph cluster is managed by the Monitor cluster, so the Monitor service needs to be started first. Execute the following commands to do so:

ceph-deploy mon create-initial
ceph-deploy admin zhangsn
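The admin subcommand pushes the cluster configuration and the admin keyring to the node, which is what allows the ceph CLI to talk to the cluster. You can confirm the files are in place (paths are the defaults):

$ ls /etc/ceph
# expect at least: ceph.client.admin.keyring  ceph.conf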

After the Ceph Monitor node has started, the status of the cluster can be viewed with the ceph -s command.

Figure 1: Ceph cluster status

Deploying ceph-mgr
ceph-mgr is a component that must be deployed; it can be deployed with the following command.

ceph-deploy mgr create zhangsn
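To confirm that the manager daemon came up, you can check its systemd unit (assuming a systemd-based distribution and the default ceph-mgr@<name> unit naming):

sudo systemctl status ceph-mgr@zhangsn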

Deploying the OSDs
We know that OSDs are the nodes where Ceph stores data. Having deployed the Monitor node, we next deploy the OSDs. To facilitate our later study of BlueStore's structure and principles, we create two types of OSD: one based on three logical volumes (to simulate different types of storage media), and one based on a single raw device.
First, create the OSD with three logical volumes: db, wal, and data. Execute the following commands to create the volume group and the logical volumes.

$ pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.
$ vgcreate  ceph-pool /dev/sdb
  Volume group "ceph-pool" successfully created
$ lvcreate -n osd0.wal -L 1G ceph-pool
  Logical volume "osd0.wal" created.
$ lvcreate -n osd0.db -L 1G ceph-pool
  Logical volume "osd0.db" created.
$ lvcreate -n osd0 -l 100%FREE ceph-pool
  Logical volume "osd0" created.
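The resulting volumes can be double-checked with lvs (names follow the commands above):

$ sudo lvs ceph-pool
# expect three logical volumes: osd0, osd0.db, osd0.wal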

After the logical volumes have been created, we can create the OSD.

ceph-deploy osd create \
    --data ceph-pool/osd0 \
    --block-db ceph-pool/osd0.db \
    --block-wal ceph-pool/osd0.wal \
    --bluestore zhangsn

If all goes well, once the above steps complete, the smallest possible cluster has been created. Its status can be checked with ceph -s.

Figure 2: Ceph cluster status

Next we create an OSD that runs on an entire raw disk. It is also created with ceph-deploy, using the following command:

ceph-deploy osd create --bluestore zhangsn --data /dev/sdc
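At this point both OSDs should be registered in the CRUSH map; a quick way to list them is:

ceph osd tree
# should show osd.0 and osd.1 under host zhangsn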

At this point we have completed the entire installation process; the cluster status can be checked with the command shown earlier, so it is not repeated here. BlueStore's design supports multiple partitions mainly so that different storage media can carry different content within the same OSD. For example, metadata can be placed on a higher-performance SSD.
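You can see this layout on the first OSD we created: the BlueStore data directory contains symlinks pointing at the three logical volumes. The OSD id 0 below is an assumption; use the id reported for your OSD.

$ ls -l /var/lib/ceph/osd/ceph-0/block*
# block     -> data volume (ceph-pool/osd0)
# block.db  -> RocksDB metadata volume (ceph-pool/osd0.db)
# block.wal -> write-ahead log volume (ceph-pool/osd0.wal)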
Today we deployed a basic environment; subsequent study, such as analysis of BlueStore's principles and code, will build on it.

Origin: blog.csdn.net/shuningzhang/article/details/90746650