I. Roles in the Cluster Suite
luci:
- Used to configure and manage the cluster; listens on port 8084.
ricci:
- Installed on the back end of every node; luci manages the cluster by communicating with the ricci agent on each node; ricci listens on port 11111.
fence:
- When a fault leaves a host in an abnormal state, the standby machine first calls the fence device to reboot the faulty host or isolate it from the network. When the fence operation executes successfully, it reports back to the standby machine; on receiving confirmation that fencing succeeded, the standby takes over the failed host's services and resources. Through the fence device, the abnormal node is forced to release the resources it held, ensuring that a service and its resources always run on exactly one node and effectively preventing split brain.
II. Building a simulated RHCS cluster
1. Setting up the experimental environment
- For speed and convenience, this experiment uses an rhel6.5 encapsulated master image, from which three virtual machines plus snapshots are created. For the detailed configuration, see the earlier post on the basic concepts and implementation of varnish.
| Host (IP) | Services |
|---|---|
| server1 (172.25.254.1) | ricci, luci, httpd |
| server2 (172.25.254.2) | ricci, httpd |
| foudation77 (172.25.254.77) | fence |
2. Building the RHCS environment
server1:
- (1) Configure the advanced yum source, used to install the software needed while building the environment.
Edit the repo file so it reads as follows:
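On RHEL 6.5, the HighAvailability and LoadBalancer packages live in separate directories on the install media, so the base repo section alone is not enough. A sketch of what the repo file might look like, assuming the install tree is served over HTTP from the physical machine (adjust the baseurl to your own source):

```
[Server]
name=rhel6.5 Server
baseurl=http://172.25.254.77/rhel6.5
gpgcheck=0

[HighAvailability]
name=rhel6.5 HighAvailability
baseurl=http://172.25.254.77/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=rhel6.5 LoadBalancer
baseurl=http://172.25.254.77/rhel6.5/LoadBalancer
gpgcheck=0
```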
- (2) Install the RHCS web-based graphical management tools
luci: the graphical web interface for managing the cluster
ricci: the agent on each node that luci communicates with
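Installation on server1 is then a single yum command (luci is only needed on the management node, while ricci must be on every cluster node):

```
[root@server1 ~]# yum install -y ricci luci
```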
- (3) Install the httpd service
- (4) RHCS control on Linux is performed through the ricci user, so a password must be set for the ricci user
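For example, using `passwd --stdin` (the password `westos` here is only a placeholder; choose your own):

```
[root@server1 ~]# echo westos | passwd --stdin ricci
```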
- (5) Start the related services and enable them to start at boot
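On RHEL 6 that amounts to the following (luci prints the https://server1:8084 access URL when it starts):

```
[root@server1 ~]# /etc/init.d/ricci start
[root@server1 ~]# /etc/init.d/luci start
[root@server1 ~]# chkconfig ricci on
[root@server1 ~]# chkconfig luci on
```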
server2:
- (1) Configure the advanced yum source (same as on server1)
- (2) Install ricci
- (3) Set the ricci user's password
- (4) Install the httpd service
- (5) Start the services and enable them to start at boot
3. Adding the cluster nodes server1 and server2
- (1) Access the luci graphical management interface and manually accept the certificate
- (2) Log in as the superuser
- (3) Cluster server1 and server2 together
Click "Create Cluster" to enter the waiting page; at this point server1 and server2 will reboot, so reconnect to them from the physical machine.
Note: if luci and ricci were not set to start at boot, you must start the services again after the virtual machines reboot during the wait, or the addition of server1 and server2 as cluster nodes will not complete.
- (4) Viewing the cluster information on server1 and server2
[root@server1 ~]# chkconfig --list ## list the services that start at boot
[root@server1 ~]# cat /etc/cluster/cluster.conf ## the file now contains the server1/server2 cluster definition
[root@server1 ~]# clustat ## also shows the cluster status
III. Configuring FENCE
1. Configuring fence on the physical machine
- (1) Install fence (after the yum source [rhel 7.3] has been set up)
yum search fence # list the available fence packages
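On a RHEL 7.3 physical machine, the pieces needed for this setup are typically the daemon itself, the multicast listener, and the libvirt backend (package names can vary slightly between releases):

```
[root@foudation77 ~]# yum install -y fence-virtd fence-virtd-multicast fence-virtd-libvirt
```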
- (2) Generate the fence key file
Because the stock configuration does not ship with a fence key file, you must generate one yourself and then write the fence configuration file.
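The key is just 128 random bytes in /etc/cluster/fence_xvm.key, which can be generated with dd:

```
[root@foudation77 ~]# mkdir -p /etc/cluster
[root@foudation77 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
```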
- (3) Write the fence configuration file
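fence_virtd has an interactive configuration mode; running it and accepting the defaults gives a working multicast + libvirt setup, except that the interface must be the bridge your virtual machines are attached to (br0 here is an assumption for this environment):

```
[root@foudation77 ~]# fence_virtd -c
# accept the defaults:
#   Listener module:      multicast
#   Multicast IP address: 225.0.0.12
#   Multicast IP port:    1229
#   Key file:             /etc/cluster/fence_xvm.key
#   Backend module:       libvirt
# change:
#   Interface:            br0   (the bridge the VMs use)
```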
- (4) Send the generated key file to the nodes server1 and server2, ensuring both nodes use the same key
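For example, with scp (the /etc/cluster directory already exists on the nodes once the cluster software is installed):

```
[root@foudation77 ~]# scp /etc/cluster/fence_xvm.key root@172.25.254.1:/etc/cluster/
[root@foudation77 ~]# scp /etc/cluster/fence_xvm.key root@172.25.254.2:/etc/cluster/
```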
- (5) Start the fence service
fence listens on port 1229
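On RHEL 7.3 the service is managed by systemd, and the open port can be confirmed with netstat:

```
[root@foudation77 ~]# systemctl restart fence_virtd
[root@foudation77 ~]# systemctl enable fence_virtd
[root@foudation77 ~]# netstat -anulp | grep 1229
```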
2. Adding the fence device
- (1) In the luci web interface, add the fence device that will manage the nodes
- (2) Select the multicast-mode fence
- (3) Bind the cluster nodes (server1 and server2)
a. server1:
b. The operation on server2 is the same as for server1
After configuration, the display is as follows:
- (4) Check whether the binding succeeded
3. Testing the fence device
On server2:
[root@server2 ~]# fence_node server1
This uses fence to kill node server1; if server1 is powered off and reboots, the test succeeded.
IV. Configuring the high-availability service (httpd)
- Add a failover domain
- Add server1 and server2 to the failover domain; when one of them fails, the service lands on the node with the higher priority.
Note: the smaller the number, the higher the priority
- Add the resources needed by the webfail failover domain
- Add the VIP, i.e. the IP the cluster exposes to the outside
- Click Resources again, then click Add to add the script needed to start the httpd service
- Add a service group to the cluster, and add to it the resources created in the previous steps
- Create a service group: the collection of resources the service will use
- Add resources to the service group: click the name of the newly created service group, apache, and an "Add Resource" option appears below it; add the resources from the previous step
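Once these web-interface steps are done, /etc/cluster/cluster.conf on the nodes should contain a resource-manager section roughly like the fragment below (the domain name webfail and service name apache come from this walkthrough; the VIP 172.25.254.100 is an assumed example value):

```
<rm>
    <failoverdomains>
        <failoverdomain name="webfail" ordered="1" restricted="1">
            <failoverdomainnode name="server1" priority="1"/>
            <failoverdomainnode name="server2" priority="2"/>
        </failoverdomain>
    </failoverdomains>
    <resources>
        <ip address="172.25.254.100" sleeptime="10"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service domain="webfail" name="apache" recovery="relocate">
        <ip ref="172.25.254.100"/>
        <script ref="httpd"/>
    </service>
</rm>
```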
- Install the httpd service on server1 and server2 and edit the page content so testing is easy
- server1:
- server2:
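A minimal version of that step writes each node's hostname into its test page (do not start httpd by hand; the cluster's script resource starts and stops it):

```
[root@server1 ~]# yum install -y httpd
[root@server1 ~]# echo server1 > /var/www/html/index.html

[root@server2 ~]# yum install -y httpd
[root@server2 ~]# echo server2 > /var/www/html/index.html
```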
- Testing
- Refresh the web page: it is served from server1, because server1 has the higher priority
- On server1 you can see the VIP (the virtual server IP)
- Access both nodes from the physical machine
- Access the VIP from the physical machine
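Assuming the VIP added earlier is 172.25.254.100, the check from the physical machine is a simple curl; the page content shows which node is serving:

```
[root@foudation77 ~]# curl 172.25.254.100
server1
```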
The VIP page comes from server1, because the service runs there and server1 has the higher priority.
- Test high availability (HA)
- On server1, manually crash the server
After entering the command echo c > /proc/sysrq-trigger, the display is as follows:
- Accessing from the physical machine again now shows the content from server2, and the VIP automatically floats over to server2
- After server1 reboots, the service is found to have cut back to server1 and the VIP drifts back; accessing from the physical machine again shows the service running on server1 once more.
- Stopping httpd on server1 shows the service automatically moving over to server2.