RHCS (Red Hat Cluster Suite) High-Availability Clustering

1. RHCS cluster basics

(1) What is RHCS?
RHCS is short for Red Hat Cluster Suite, an economical, inexpensive cluster tool set that provides high availability, high reliability, load balancing, and shared storage. It integrates the three major cluster architectures into one system and can provide a safe, stable operating environment for web applications, databases, and other applications.
More precisely, RHCS is a full-featured application clustering solution that provides an effective cluster architecture all the way from front-end application access to back-end data storage. It not only ensures that front-end applications provide lasting, stable service, but also guarantees the security of the back-end data store.
RHCS provides a framework for three kinds of clusters: high-availability clusters, load-balancing clusters, and storage clusters.

(2) The three core functions provided by RHCS

High availability is a core function of RHCS. When an application fails, or the system hardware or network fails, the high-availability service management component of RHCS can switch the application automatically and quickly from one node to another. This node failover is transparent to clients, so the application keeps providing service continuously and without interruption. This is how RHCS implements high-availability clustering.
RHCS provides load balancing through LVS (Linux Virtual Server). LVS is an open-source, powerful IP-based load-balancing technology consisting of a load balancer and service nodes. Through the load-scheduling function of LVS, client requests can be distributed evenly across the service nodes, and a variety of load-distribution policies can be defined: when a request arrives, the scheduling algorithm decides which service node in the cluster should handle it, and that node then responds to the client. LVS also provides service-node failover: when a service node cannot provide service, LVS automatically shields the failed node, removes it from the cluster, and smoothly transfers new requests to the remaining healthy nodes; when the failed node recovers, LVS automatically adds it back to the cluster. This whole series of switching operations is transparent to users, so the failover capability guarantees uninterrupted, stable service.
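As an illustration only (not part of the original setup), the virtual-service and real-server concepts above can be sketched with the `ipvsadm` tool; the VIP 172.25.33.100 and the real-server addresses here are hypothetical:

```shell
# Create a virtual service on the VIP, port 80, with round-robin (rr) scheduling
ipvsadm -A -t 172.25.33.100:80 -s rr

# Add two real (service) nodes behind the virtual service, direct-routing mode (-g)
ipvsadm -a -t 172.25.33.100:80 -r 172.25.33.1:80 -g
ipvsadm -a -t 172.25.33.100:80 -r 172.25.33.2:80 -g

# List the current IPVS table to verify the configuration
ipvsadm -Ln
```

A failed real server can be removed with `ipvsadm -d`; in practice keepalived or the cluster manager automates this health-check-and-remove cycle.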
RHCS provides storage clustering through the GFS file system. GFS stands for Global File System; it allows multiple services to read and write a single shared file system simultaneously. A storage cluster shares data through this shared file system, eliminating the trouble of synchronizing data between applications. GFS is a distributed file system that uses a lock-management mechanism to coordinate and manage the read and write operations of multiple service nodes on the same file system.
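For illustration, creating and mounting a GFS2 file system on shared storage typically looks like the following sketch; the logical-volume path and the journal count are assumptions (one journal per cluster node):

```shell
# Create a GFS2 file system: -p lock_dlm selects the DLM lock manager,
# -t <clustername>:<fsname> ties it to the cluster, -j 2 makes two journals
# (one per cluster node). /dev/clustervg/gfs2lv is a hypothetical shared LV.
mkfs.gfs2 -p lock_dlm -t westos_ha:gfs2data -j 2 /dev/clustervg/gfs2lv

# Mount it on every node that should share the data
mount /dev/clustervg/gfs2lv /mnt/gfs2
```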

(3) The basic RHCS toolkit

An RHCS cluster is a collection of tools, mainly comprising the following major components:
Cluster infrastructure
This is the basic toolkit of an RHCS cluster, providing the basic functions that let the cluster nodes work together. It includes the distributed cluster manager (CMAN), membership management, the lock manager (DLM), cluster configuration management (CCS), and the fence device (FENCE).
High-availability service manager
Provides service monitoring and service-node failover: when a service node fails, the service is transferred to another, healthy node.
Cluster configuration management tools
The latest versions of RHCS use LUCI to configure and manage the cluster. LUCI is a web-based cluster configuration interface; with luci you can easily build a powerful cluster system.
Linux Virtual Server
LVS is an open-source load-balancing software. With LVS, client requests can be distributed to the service nodes reasonably, dynamically, and intelligently according to specified load-distribution policies and algorithms.
In addition to the core components above, an RHCS cluster can be supplemented with the following optional components.
Red Hat GFS (Global File System)
GFS is a cluster file system developed by Red Hat; the latest version is GFS2. GFS allows multiple services to read and write a single disk partition simultaneously, enabling centralized data management and eliminating the trouble of data synchronization and copying. GFS cannot exist in isolation: mounting it requires the support of the underlying RHCS components.
Cluster Logical Volume Manager
CLVM, the Cluster Logical Volume Manager, is an extension of LVM that allows machines in the cluster to use LVM to manage shared storage.
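As an illustrative sketch (device paths hypothetical), enabling clustered LVM on each node typically involves:

```shell
# Switch LVM to clustered locking (rewrites /etc/lvm/lvm.conf)
lvmconf --enable-cluster

# Start the clustered LVM daemon so metadata changes are coordinated across nodes
/etc/init.d/clvmd start
chkconfig clvmd on

# A volume group created with the clustered flag (-cy) is shared by all nodes
vgcreate -cy clustervg /dev/sdb1
```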
iSCSI
iSCSI is a protocol for transmitting data blocks over Ethernet and the Internet; it is a new storage technique based on the IP-storage model. In RHCS, shared storage can be exported and allocated over iSCSI.
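For illustration, a client (initiator) typically discovers and logs in to an iSCSI target as follows; the target IP and IQN here are hypothetical:

```shell
# Discover targets exported by the storage server
iscsiadm -m discovery -t st -p 172.25.33.250

# Log in to a discovered target so its LUN appears as a local block device
iscsiadm -m node -T iqn.2020-02.com.example:storage.disk1 -p 172.25.33.250 -l

# The new device (e.g. /dev/sdb) can then be partitioned or used by CLVM/GFS2
```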
Global Network Block Device
The global network block device, GNBD for short, is a supplementary component of GFS used to allocate and manage shared storage in RHCS. GNBD is divided into client and server: the GNBD server can export multiple block devices or GNBD files, and the GNBD client imports these exported block devices or files, after which they can be used as local block devices. Development of GNBD has now stopped, so it is used less and less.

2. Basic RHCS cluster setup

(1) Required environment:
Install Red Hat Enterprise Linux 6.5 in a virtual machine and package it as a master image; generate two virtual machines from the master disk, detailed below:

Hostname 	IP 				Purpose
server1 	172.25.33.1 	both the management node and a cluster node (to reduce the number of running VMs)
server2 	172.25.33.2 	cluster node

(1) Operations required on server1:

[root@server1 ~]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
rhel-source.repo
[root@server1 yum.repos.d]# vim rhel-source.repo
[root@server1 yum.repos.d]# scp rhel-source.repo server2:/etc/yum.repos.d
[root@server1 yum.repos.d]# yum install ricci luci -y
[root@server1 yum.repos.d]# id ricci    ## the system has created a ricci user
[root@server1 yum.repos.d]# passwd ricci    ## give this user a password; it is needed when logging in from the graphical web interface. server1 and server2 should preferably use the same password
[root@server1 yum.repos.d]# /etc/init.d/ricci start    ## start the service
[root@server1 yum.repos.d]# chkconfig ricci on    ## enable the service at boot (web-based management reboots the nodes; without this the service would not come back up)
[root@server1 ~]# /etc/init.d/luci start    ## start the management service and enable it at boot
[root@server1 ~]# chkconfig luci on
[root@server1 ~]# chkconfig --list    ## check the status of init-script services; a service counts as enabled at boot if its runlevels are not all off
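The `vim rhel-source.repo` step above adds the extra repositories that ship ricci, luci, and the cluster packages on the RHEL 6.5 media. A sketch of what the additions might look like; the baseurl is a hypothetical local HTTP install source, so adjust it to your own environment:

```shell
# Append the HighAvailability and LoadBalancer sections to the repo file;
# http://172.25.33.250/rhel6.5 is a hypothetical install source
cat >> /etc/yum.repos.d/rhel-source.repo <<'EOF'

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.33.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.33.250/rhel6.5/LoadBalancer
gpgcheck=0
EOF

yum clean all && yum repolist    # refresh the cache and confirm the new repos
```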

(2) Operations required on server2:

[root@server2 ~]# yum install -y ricci
[root@server2 ~]# passwd ricci    ## preferably the same password as on server1
[root@server2 ~]# /etc/init.d/ricci start    ## start ricci and enable it at boot, as on server1
[root@server2 ~]# chkconfig ricci on

(3) Open the graphical management interface:
In a browser, open https://172.25.33.1:8084 to enter the high-availability graphical management interface; after accepting the certificate, log in as the root user.
Click Create to create a cluster named westos_ha.
Run clustat to view the cluster status.
The initial RHCS cluster has been set up successfully!

3. Adding the Fence component to the RHCS cluster

(1) Introduction to the Fence component:

A fence manages nodes: when a node fails, the fence can force it to reboot. The core of fence technology is guaranteeing that a high-availability cluster keeps running correctly under extreme failure conditions. While an HA cluster is running, a node may be detected as malfunctioning; for example, the heartbeat link between the two HA servers may suddenly fail. Ordinary HA technology would then misjudge the link failure as a server crash, leading both servers to fight over the resources. To solve this, the cluster must actively detect the problem and remove the faulty node from the cluster to keep the cluster running stably; fence technology effectively implements exactly this function.
A fence device also prevents a cluster resource (for example, a file system) from being held by multiple nodes at the same time, protecting the safety and consistency of the shared data.
In RHCS, servers in the cluster contending with each other for resources makes the client experience unstable; this is the split-brain problem. Fencing solves split-brain: in effect, each server in the cluster can cut off the other's power, preventing the cluster members from fighting over resources.

(2) Adding the Fence component: another host is needed to act as the fence manager; in this experiment the physical host is used as the fence manager.

<1> Step 1:
In the browser, add a Fence Device of type Fence virt (Multicast Mode), with the name vmfence.
<2> Step 2: operations on the fence manager

yum install -y fence-virtd.x86_64 fence-virtd-multicast.x86_64 fence-virtd-libvirt.x86_64
fence_virtd -c    # configure the fence information interactively; the interface must be set to br0, press Enter to accept the defaults for everything else
mkdir /etc/cluster
cd /etc/cluster/
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
scp fence_xvm.key [email protected]:/etc/cluster/
scp fence_xvm.key [email protected]:/etc/cluster/
systemctl start fence_virtd.service

<3> Step 3:
Start ricci and luci on server1 and ricci on server2, then open the graphical interface. In the cluster westos_ha, click server1 and choose Add Fence Method to Node, with method name vmfence-1; then click Add Fence Instance, where the Domain is that node's UUID.
Do the same for server2.
<4> Test:

# On server1 and server2
vim cluster.conf    # the fence entries should now have been added

# On server1
fence_node server2    # server2 should then reboot
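For reference, the fence-related entries in /etc/cluster/cluster.conf should look roughly like the sketch below; the config_version and the domain UUIDs are placeholders based on the standard cluster.conf schema, not the author's actual file:

```shell
# Illustrative excerpt of /etc/cluster/cluster.conf after adding the fence method
cat <<'EOF'
<cluster config_version="3" name="westos_ha">
  <clusternodes>
    <clusternode name="server1" nodeid="1">
      <fence>
        <method name="vmfence-1">
          <device domain="UUID-OF-SERVER1-VM" name="vmfence"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="server2" nodeid="2">
      <fence>
        <method name="vmfence-1">
          <device domain="UUID-OF-SERVER2-VM" name="vmfence"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_xvm" name="vmfence"/>
  </fencedevices>
</cluster>
EOF
```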


4. Implementing RHCS cluster failover (using the httpd service as an example): adding resources and a resource group

(1) Define the failover domain:
(2) Add resources:
(3) Add a resource group:
(4) On server1, install httpd but do not start it. When the resource group is enabled in the graphical interface, the service is started automatically; because server1 has the higher priority, httpd runs on server1, which also acquires the VIP.
(5) On server2, install httpd as well. When the httpd service on server1 is stopped, the service is transferred to server2, and the VIP moves to server2 as well.
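The httpd failover described above can also be driven by hand with the cluster service admin tool; `apache` here is a placeholder for whatever name the resource group was given in luci:

```shell
# Show which node currently runs the service group and holds the VIP
clustat

# Relocate the service group to server2 by hand
clusvcadm -r apache -m server2

# Disable and re-enable the service group (e.g. for maintenance)
clusvcadm -d apache
clusvcadm -e apache
```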
When an application fails, or the system hardware or network fails, the high-availability service management component of RHCS switches the application automatically and quickly from one node to another. The node failover is transparent to clients, ensuring that the application provides service continuously and without interruption; this is the high-availability function that the RHCS cluster implements.


Origin blog.csdn.net/yrx420909/article/details/104444260