CephFS installation process

I hit one pitfall after another until the installation finally completed. The pitfall notes are shared at the end; first, here is a record of the process that runs through successfully.

The overall sequence of steps for CephFS is: create the Ceph cluster, install Ceph, add OSDs, add an MDS (needed only for the fs filesystem; object storage and block storage do not need it), then create the fs.

Preparation

1. Two physical machines: CentOS 7.6, kernel 3.10

2. Ceph version 10 (Jewel)

3. Make sure the machines can reach the external network

Process

Part One: Preflight

1. Add the package repository. Create a YUM repo file with a text editor at /etc/yum.repos.d/ceph.repo:

sudo vim /etc/yum.repos.d/ceph.repo

2. Paste the following into it:

[ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=0
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1

3. Update the software and install ceph-deploy

yum update
yum install ceph-deploy

4. Install NTP

yum install ntp ntpdate ntp-doc

Be sure to start the NTP service on each Ceph node, and have all nodes use the same NTP server.
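
On CentOS 7 the ntp package provides an ntpd service; a minimal way to enable it at boot and start it right away:

sudo systemctl enable ntpd
sudo systemctl start ntpd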

5. Install an SSH server

Install an SSH server on each Ceph node (if it is not already installed):

yum install openssh-server

Ensure that the SSH server is running on all Ceph nodes.
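
A quick check, plus a sketch of enabling and starting it on CentOS 7 if it is not running:

sudo systemctl status sshd
sudo systemctl enable sshd
sudo systemctl start sshd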

6. Create a ceph-node user on all nodes

useradd -d /home/ceph-node -m ceph-node
passwd ceph-node

Remember this password; it is the ceph-node user's password.

7. Make sure the newly created user has sudo privileges on each Ceph node.

echo "ceph-node ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph-node
sudo chmod 0440 /etc/sudoers.d/ceph-node

8. Allow passwordless SSH login

Generate an SSH key pair, but do not use sudo or the root user. When prompted to "Enter passphrase", just press Enter to leave it empty:

[root@localhost ceph-node]# su ceph-node
[ceph-node@localhost ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph-node/.ssh/id_rsa): 
Created directory '/home/ceph-node/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/ceph-node/.ssh/id_rsa.
Your public key has been saved in /home/ceph-node/.ssh/id_rsa.pub.

Copy the public key to each Ceph node:

ssh-copy-id ceph-node@node1

(Recommended practice) Modify the ~/.ssh/config file on the ceph-deploy admin node so that ceph-deploy can log in to the Ceph nodes as the user you created, without having to specify --username {username} every time you run ceph-deploy. This also simplifies ssh and scp usage. Replace {username} with the user name you created.

Host node1
   Hostname node1
   User ceph-node
Host node2
   Hostname node2
   User ceph-node

9. Enable networking on boot

Ceph OSDs peer with each other and report their status to Monitors over the network. If networking is off by default, the Ceph cluster cannot come online at start until you turn the network on.

Some distributions (such as CentOS) disable network interfaces by default. Make sure the interface is activated when the system starts, so that the Ceph daemons can communicate over the network. For example, on Red Hat and CentOS, go to the /etc/sysconfig/network-scripts directory and make sure ONBOOT is set to yes in the ifcfg-{iface} file.
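
As a sketch, assuming the interface is named eth0 (substitute your actual interface name):

grep ONBOOT /etc/sysconfig/network-scripts/ifcfg-eth0
sudo sed -i 's/^ONBOOT=no/ONBOOT=yes/' /etc/sysconfig/network-scripts/ifcfg-eth0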

10. Open the required ports

Ceph Monitors communicate on port 6789 by default.

On RHEL 7 with firewalld, open port 6789, used by Ceph Monitors, and the 6800:7300 port range, used by OSDs, in the public zone, and configure the rules as permanent so they still apply after a reboot. E.g.:

sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
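
The command above covers the monitor port; a matching sketch for the OSD port range, followed by a reload so the permanent rules take effect:

sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload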

SELinux

Set SELinux to permissive mode:

sudo setenforce 0

To make the SELinux configuration permanent (if SELinux is indeed the root of the problem), modify its configuration file /etc/selinux/config.
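
A sketch of making the change persistent by editing that file in place:

# switch SELinux from enforcing to permissive across reboots
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config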

Part Two: Create a cluster

ceph-deploy writes its output files to the current directory, so make sure you are in this directory when running ceph-deploy.

1. Create a cluster

mkdir my-cluster
cd my-cluster
ceph-deploy new node1 node2

Checking the ceph-deploy output in the current directory with ls and cat, you should see a Ceph configuration file, a monitor secret keyring, and a log file.
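
A typical listing looks something like this (exact file names can differ between ceph-deploy versions):

ls
# ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring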

2. Change the default number of replicas in the Ceph configuration file from 3 to 2 so that the cluster can reach the active + clean state with just two OSDs. Add the following line under the [global] section:

osd pool default size = 2

3. Install Ceph

ceph-deploy install node1 node2

Or run yum install ceph on each node.

4. Deploy the initial monitor(s) and gather all the keys:

ceph-deploy mon create-initial

After the operation completes, the following keyrings should appear in the current directory:

  • {cluster-name}.client.admin.keyring

  • {cluster-name}.bootstrap-osd.keyring

  • {cluster-name}.bootstrap-mds.keyring

  • {cluster-name}.bootstrap-rgw.keyring

Note: the bootstrap-rgw keyring is created only when installing Hammer or a later release.

5. Add two OSDs
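
This step is left "to be continued" below; as a reference sketch, the upstream Jewel quick start prepares and activates directory-backed OSDs roughly like this (the node names and directory paths are assumptions, and on Jewel the OSD directories may additionally need to be chowned to the ceph user):

# create a backing directory for each OSD on its storage node
ssh node1 "sudo mkdir /var/local/osd0"
ssh node2 "sudo mkdir /var/local/osd1"

# from the admin node, prepare and then activate the OSDs
ceph-deploy osd prepare node1:/var/local/osd0 node2:/var/local/osd1
ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1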

Writing this much for now, before sleep. To be continued.

 
