Ceph Deployment (I): Ceph Environment Preparation
- Environment preparation before building Ceph
- 1. Plan the IP address and hostname of each of the six deployment hosts
- 2. Configure local DNS resolution on the physical host
- 3. Pre-save the SSH host key of each remote server so ssh does not prompt "yes" (physical host operation)
- 4. Set up passwordless SSH login (physical host operation)
- 5. Mount rhcs2.0-rhosp-20161113-x86_64.iso into a shared directory as a yum mirror (HTTP in this example) (physical host operation)
- 6. Configure the yum repository on the physical host (physical host operation)
- 7. Copy the yum configuration from the physical host to the six other hosts (physical host operation)
- Experimental topology
- Ceph node configuration
- 1. Configure local DNS resolution for every host on node1
- 2. Save the host key of every node on node1
- 3. Generate an SSH key pair on node1
- 4. Set up passwordless login from node1 to every host, including node1 itself (node1 operation)
- 5. Configure node6 as the time server (node6 operation)
- 6. Configure node1-5 as NTP clients of node6
- 7. Add three 10 GB disks to each of node1-3 (physical host operation)
Environment preparation before building Ceph
1. Prepare one physical host and six deployment hosts: configure IP addresses and hostnames, and turn off the firewall and SELinux (freshly installed hosts or virtual machines are best).
2. Prepare the rhcs2.0-rhosp-20161113-x86_64.iso image file (obtain it yourself).
1. Plan the IP address and hostname of each of the six deployment hosts
IP address | Hostname |
---|---|
192.168.4.1 | node1 |
192.168.4.2 | node2 |
192.168.4.3 | node3 |
192.168.4.4 | node4 |
192.168.4.5 | node5 |
192.168.4.6 | node6 |
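For reference, a static address from the table can be set on RHEL 7 through an ifcfg file. This is only a sketch for node1: the interface name eth0 is an assumption and may differ on your hosts.

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 on node1 (interface name assumed)
TYPE=Ethernet
BOOTPROTO=none
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.4.1
PREFIX=24
```

Apply it with systemctl restart network, and set the hostname with hostnamectl set-hostname node1.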
2. Configure local DNS resolution on the physical host
Configuration file: /etc/hosts
# Quick configuration with a for loop (run the following in a terminal)
for i in {1..6}
do
echo -e "192.168.4.$i\tnode$i.da.cn\tnode$i" >>/etc/hosts
# \t is a tab; echo needs the -e option for the escape to take effect
done
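If you want to preview the generated entries before touching /etc/hosts, the same loop can be redirected into a scratch file first (a sketch; the temporary file path is arbitrary):

```shell
# Write the six entries to a temp file and inspect them before
# appending to the real /etc/hosts.
out=$(mktemp)
for i in {1..6}
do
echo -e "192.168.4.$i\tnode$i.da.cn\tnode$i" >> "$out"
done
cat "$out"
```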
3. Pre-save the SSH host key of each remote server on this machine so ssh does not prompt "yes" (physical host operation)
Host keys are saved in: /root/.ssh/known_hosts
# ssh-keyscan scans and prints the host keys
ssh-keyscan node{1..6} > /root/.ssh/known_hosts
# node{1..6} expands to node1 ... node6, like running the command in a for loop
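An alternative to pre-scanning keys is to tell the ssh client to accept unknown host keys automatically. Note this needs OpenSSH 7.6 or newer (an assumption about your client; RHEL 7 ships an older version) and is weaker security than pre-scanning:

```
# /root/.ssh/config -- accept new host keys without prompting
# (OpenSSH >= 7.6 only)
Host node*
    StrictHostKeyChecking accept-new
```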
4. Set up passwordless SSH login (physical host operation)
# Push the SSH key to every node with a loop
for i in {1..6}
do
ssh-copy-id node$i
done
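ssh-copy-id assumes a key pair already exists under /root/.ssh; if it does not, generate one non-interactively first, with the same command the node1 section uses later. The sketch below writes to a throwaway directory so it is safe to run anywhere; on the real host the target would be /root/.ssh/id_rsa:

```shell
# Generate an RSA key pair non-interactively (-N '' = empty passphrase).
# A throwaway directory is used here for illustration only.
keydir=$(mktemp -d)
ssh-keygen -q -f "$keydir/id_rsa" -N ''
ls "$keydir"
```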
5. Mount rhcs2.0-rhosp-20161113-x86_64.iso into a shared directory as a yum mirror (HTTP in this example) (physical host operation)
mkdir /var/www/html/rhcs
mount -o loop /root/rhcs2.0-rhosp-20161113-x86_64.iso /var/www/html/rhcs
# -o loop mounts the ISO file through a loop device; note that mount -a does
# not make the mount persistent by itself, it only mounts what /etc/fstab lists
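To have the ISO remount automatically at boot, an /etc/fstab entry along these lines can be added (a sketch; after editing, mount -a applies it):

```
# /etc/fstab entry for the ISO loop mount
/root/rhcs2.0-rhosp-20161113-x86_64.iso  /var/www/html/rhcs  iso9660  defaults,loop  0 0
```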
6. Configure the yum repository on the physical host (physical host operation)
The rhcs2.0-rhosp-20161113-x86_64.iso image contains three repositories: "mon", "osd", and "tools".
vim /etc/yum.repos.d/da.repo
# beginning of the configuration file
[rhel7]
name=rhel7
baseurl=http://192.168.4.254:83/rhel7 # path to the custom yum repository share
enabled=1
gpgcheck=0
[mon]
name=mon
baseurl=http://192.168.4.254:83/rhcs/mon # custom share path; see the directory created in step 5
enabled=1
gpgcheck=0
[osd]
name=osd
baseurl=http://192.168.4.254:83/rhcs/osd # custom share path; see the directory created in step 5
enabled=1
gpgcheck=0
[tools]
name=tools
baseurl=http://192.168.4.254:83/rhcs/tools # custom share path; see the directory created in step 5
enabled=1
gpgcheck=0
# end of the configuration file; save and quit with :wq
7. Copy the yum configuration from the physical host to the six other hosts (physical host operation)
# Distribute the repo file with a for loop
for i in {1..6}
do
scp /etc/yum.repos.d/da.repo node$i:/etc/yum.repos.d/
done
Experimental topology
(Rough diagram of the experimental topology; the image is not reproduced here.)
Ceph node configuration
In this example, node1 serves as the management node.
1. Configure local DNS resolution for every host on node1
Configuration file: /etc/hosts
# Quick configuration with a for loop (run the following in a terminal)
for i in {1..6}
do
echo -e "192.168.4.$i\tnode$i.da.cn\tnode$i" >>/etc/hosts
# \t is a tab; echo needs the -e option for the escape to take effect
done
1.1 Copy node1's local DNS resolution file to the remaining five hosts
# Distribute with a for loop
for i in node{2..6}
do
scp /etc/hosts $i:/etc/
done
2. Save the host key of every node on node1
Host keys are saved in: /root/.ssh/known_hosts
# ssh-keyscan scans and prints the host keys
ssh-keyscan node{1..6} > /root/.ssh/known_hosts
# node{1..6} expands to node1 ... node6, like running the command in a for loop
3. Generate an SSH key pair on node1
ssh-keygen -f /root/.ssh/id_rsa -N ''
# generate the key pair non-interactively (-N '' sets an empty passphrase)
4. Set up passwordless login from node1 to every host, including node1 itself (node1 operation)
# Push the SSH key to every node with a loop
for i in node{1..6}
do
ssh-copy-id $i
done
5. Configure node6 as the time server (node6 operation); the other nodes will be its clients
5.1 Install the package
yum -y install chrony
5.2 Edit the chrony configuration file
vim /etc/chrony.conf
server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# keep only server 0 and comment out the other three
# then add the following two lines below:
allow 192.168.4.0/24 # allow the 192.168.4.0/24 network to synchronize time from this host
local stratum 10 # 10 is an arbitrary value: this server advertises itself as stratum 10
# save and quit
5.3 Start the service
systemctl restart chronyd
6. Configure node1-5 as NTP clients, with node6 as the server (node1-5 operation)
6.1 Edit the configuration file
vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
# comment out all four default servers
# then add the following line below:
server 192.168.4.6 iburst # point the client at the NTP server (node6)
# save and quit
6.2 Start the service
systemctl restart chronyd
6.3 Check the time (synchronize against node6)
ntpdate 192.168.4.6
# ntpdate comes from the ntp package; with chrony installed, chronyc sources also shows sync status
7. Add three 10 GB disks to each of node1-3 (physical host operation)
# Note: the disks can be added either through the graphical tool or from the command line; this example uses virtual machines and adds them from the command line
The disks can be attached while the virtual machines are running, without shutting them down.
cd /var/lib/libvirt/images
qemu-img create -f qcow2 node1-vdb.vol 10G
qemu-img create -f qcow2 node1-vdc.vol 10G
qemu-img create -f qcow2 node1-vdd.vol 10G
qemu-img create -f qcow2 node2-vdb.vol 10G
qemu-img create -f qcow2 node2-vdc.vol 10G
qemu-img create -f qcow2 node2-vdd.vol 10G
qemu-img create -f qcow2 node3-vdb.vol 10G
qemu-img create -f qcow2 node3-vdc.vol 10G
qemu-img create -f qcow2 node3-vdd.vol 10G
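Creating the qcow2 files does not by itself attach them to the running guests. One way to hot-attach each volume, assuming the VMs are managed by libvirt, is virsh attach-device with a small disk XML file. A sketch for node1's first disk; the target device names vdb/vdc/vdd match the volume names above:

```
<!-- node1-vdb.xml, attached with:
     virsh attach-device node1 node1-vdb.xml --live --config -->
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/node1-vdb.vol'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Repeat with dev='vdc' and dev='vdd' (and the matching .vol paths) for the other volumes, and likewise for node2 and node3.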