**Deploying a Three-Node Ceph Cluster on CentOS 7 with VMware**


1. Environment Setup
[Installing CentOS 7 in VMware] see https://blog.csdn.net/hui_2016/article/details/68927487
[Cloning multiple CentOS 7 VMs in VMware] see https://www.cnblogs.com/Lynette/p/9470800.html
[Renaming the NIC from eno16777736 to eth0 on CentOS 7] see https://www.linuxidc.com/Linux/2015-09/123396.htm
[Adding a virtual disk to a VMware VM] see https://jingyan.baidu.com/article/63f236284305310208ab3d85.html

Each of the three CentOS 7 VMs needs an extra virtual disk (10 GB or larger); it will be used for the OSDs later.
After adding the disk, partition it:
Run: fdisk /dev/sdb
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-102, default 1): 1
Last cylinder, +cylinders or +size{K,M,G} (1-102, default 102): 102
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): FD
Changed system type of partition 1 to fd (Linux raid autodetect)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
[Note: wherever a number is requested above, entering the default value shown at the prompt is fine.]

2. Ceph Deployment Steps
Note: in the steps below, to save a read-only file from vim, use: :w !sudo tee %
Install one CentOS 7 VM in VMware, then clone it twice, giving three CentOS 7 VMs, i.e. three nodes.
Run ifconfig on each VM to find its IP address.

My three nodes are:
192.168.23.131 node 1 (mon, admin, ceph-deploy)
192.168.23.133 node 2 (osd)
192.168.23.134 node 3 (osd)
[1] Disable SELinux on all three nodes
Run: vi /etc/selinux/config
Set SELINUX=disabled
Run: reboot
Disable the firewall: sudo systemctl stop firewalld (also run sudo systemctl disable firewalld so it stays off after reboots)
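Step [1] can be scripted with sed instead of editing the file by hand. The sketch below dry-runs against a local sample copy of the SELinux config; on a real node, CONF would be /etc/selinux/config and the sed would run under sudo:

```shell
# Disable SELinux by editing its config non-interactively.
# CONF points at a sample file for a safe dry run; on the node it would be
# /etc/selinux/config (run the sed with sudo there).
CONF=selinux-config-sample
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > "$CONF"   # sample contents

sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$CONF"
grep '^SELINUX=' "$CONF"

# On the real node, also keep firewalld off across reboots:
#   sudo systemctl stop firewalld
#   sudo systemctl disable firewalld
```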
[2] On each of the three nodes, create a user with root privileges
The user created here is cent, with password cent.

sudo useradd -d /home/cent -m cent
sudo passwd cent
(enter the password: cent)
echo "cent ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cent
sudo chmod 0440 /etc/sudoers.d/cent
su cent (switch to the cent user; ceph-deploy must not be run as root or via sudo)
[All subsequent steps are run as cent.]
sudo visudo (change Defaults requiretty to Defaults:cent !requiretty; the line is a bit further down in the file)
sudo hostname node1 (node2 and node3 on the other two VMs; sudo hostnamectl set-hostname node1 makes the name survive reboots)
sudo yum install ntp ntpdate ntp-doc
sudo yum install openssh-server

[3] Edit the hosts file on node1

vim /etc/hosts
Add (using the node IPs listed earlier):
192.168.23.131 node1
192.168.23.133 node2
192.168.23.134 node3
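The same entries can be appended with a heredoc instead of opening vim. HOSTS targets a scratch file here for a safe dry run; on node1 it would be /etc/hosts (append there with sudo tee -a):

```shell
# Append the node entries (the IPs from the node list earlier) in one go.
# HOSTS is a scratch file for a dry run; on node1 it would be /etc/hosts.
HOSTS=hosts-sample
cat >> "$HOSTS" <<'EOF'
192.168.23.131 node1
192.168.23.133 node2
192.168.23.134 node3
EOF
cat "$HOSTS"
```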

[4] Configure the yum repository on node1

cd /etc/yum.repos.d
vi ceph.repo
	Write the following content:
	[Ceph] 
	name=Ceph packages for $basearch 
	baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch 
	enabled=1 
	gpgcheck=0 
	type=rpm-md 
	gpgkey=https://mirrors.163.com/ceph/keys/release.asc 
	priority=1
	[Ceph-noarch]
	name=Ceph noarch packages 
	baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch 
	enabled=1 
	gpgcheck=0 
	type=rpm-md 
	gpgkey=https://mirrors.163.com/ceph/keys/release.asc 
	priority=1 
	[ceph-source] 
	name=Ceph source packages 
	baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS 
	enabled=1 
	gpgcheck=0 
	type=rpm-md 
	gpgkey=https://mirrors.163.com/ceph/keys/release.asc 
	priority=1
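The repo file above can be written in one command with a heredoc. REPO targets a scratch path for a dry run; on node1 it would be /etc/yum.repos.d/ceph.repo (written via sudo tee). The heredoc delimiter is quoted so $basearch stays literal for yum to expand:

```shell
# Write the Jewel repo file from step [4] in one shot.
# REPO is a scratch path here; on node1: /etc/yum.repos.d/ceph.repo.
REPO=ceph.repo.sample
cat > "$REPO" <<'EOF'
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=0
type=rpm-md
gpgkey=https://mirrors.163.com/ceph/keys/release.asc
priority=1
EOF
```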

[5] Install ceph-deploy on node1

sudo yum install yum-plugin-priorities 
sudo yum install ceph-deploy

[6] Configure SSH on node1

ssh-keygen (press Enter at every prompt to accept the defaults)
ssh-copy-id cent@node1
ssh-copy-id cent@node2
ssh-copy-id cent@node3
vim ~/.ssh/config (create the config file with the following content)
		Host node1
		Hostname node1 
		User cent 	
		Host node2 
		Hostname node2 
		User cent 
		Host node3 
		Hostname node3 
		User cent
cd ~/.ssh
chmod 600 config (restrict the file's permissions; ssh refuses a config that others can write)
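Since the three Host stanzas only differ in the node name, the config can be generated with a loop. CFG targets a scratch file for a dry run; on node1 it would be ~/.ssh/config:

```shell
# Generate the three Host stanzas from step [6] with a loop.
# CFG is a scratch file here; on node1 it would be ~/.ssh/config.
CFG=ssh-config-sample
: > "$CFG"
for n in node1 node2 node3; do
  printf 'Host %s\n\tHostname %s\n\tUser cent\n' "$n" "$n" >> "$CFG"
done
chmod 600 "$CFG"      # ssh rejects configs that others can write
cat "$CFG"
```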

[7] Create the cluster on node1

su cent
cd ~
mkdir ceph-cluster
cd ceph-cluster
ceph-deploy new node1 (on success this generates ceph.conf)
vim ceph.conf (append at the end of the [global] section)
	       osd pool default size = 2
(With only two OSD nodes, the default replica count of 3 could never be satisfied, so it is lowered to 2.)
From the ceph-cluster directory, run:
		ceph-deploy install node1 node2 node3
Once the install finishes, run on node1:
		ceph-deploy mon create-initial
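The ceph.conf edit in this step is a plain append, since [global] is the only section that ceph-deploy new writes. A dry-run sketch against a sample file with placeholder values (on node1 the real file is ~/ceph-cluster/ceph.conf):

```shell
# Append the replica-count override to the [global] section.
# CONF is a sample file with placeholder contents for a safe dry run;
# the fsid below is a dummy value, not a real cluster id.
CONF=ceph.conf.sample
printf '[global]\nfsid = 00000000-0000-0000-0000-000000000000\nmon_initial_members = node1\n' > "$CONF"

echo 'osd pool default size = 2' >> "$CONF"
cat "$CONF"
```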

[8] Add and activate the OSDs

ssh node2 
sudo chmod -R 777 /dev/sdb
sudo chmod -R 777 /dev/sdb1
exit
ssh node3 	
sudo chmod -R 777 /dev/sdb
sudo chmod -R 777 /dev/sdb1
exit
ceph-deploy osd prepare node2:/dev/sdb node3:/dev/sdb 
ceph-deploy osd activate node2:/dev/sdb1 node3:/dev/sdb1
[If these two commands fail, try:
		ceph-deploy --overwrite-conf osd prepare node2:/dev/sdb node3:/dev/sdb --zap-disk
		ceph-deploy --overwrite-conf osd activate node2:/dev/sdb1 node3:/dev/sdb1]
[If the firewall was not disabled earlier, OSD activation will fail here. If activation fails, try disabling the firewall.]
ceph-deploy admin node1 node2 node3
sudo chmod +r /etc/ceph/ceph.client.admin.keyring

[9] Check the cluster status

ceph -s

If it reports HEALTH_OK, the deployment succeeded.
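For scripted checks, a small helper can parse the status output for HEALTH_OK. It is exercised below on canned sample text so it runs without a live cluster; on node1 you would pipe the real command into it:

```shell
# Succeed only when the status output reports HEALTH_OK.
# On node1: ceph -s | is_healthy && echo 'cluster healthy'
is_healthy() {
  grep -q 'HEALTH_OK'
}

# Canned, abbreviated stand-in for `ceph -s` output (not from a real cluster)
sample='     health HEALTH_OK
     monmap e1: 1 mons at {node1}'
if printf '%s\n' "$sample" | is_healthy; then
  echo 'cluster healthy'
fi
```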

Reposted from blog.csdn.net/liguihong123/article/details/85173783