Building a Highly Available Cluster with RHCS and nginx

Environment preparation

server1 acts as the primary node
server2 acts as the standby node
The physical host acts as the fence device

server1: 172.25.24.1

server2: 172.25.24.2

VIP: 172.25.24.100

Install luci and ricci on server1 (server1 serves both as a cluster node and as the management host); install ricci on server2.
luci is the software used to manage the cluster nodes.

1. Set up SSH mutual trust between the nodes

2. Set the hostnames to server1 and server2. The two cluster nodes must have different hostnames, otherwise they will fail to come back up during the reboot that follows cluster creation.

3. Synchronize time with NTP (a short sketch of these three steps follows below)
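A minimal sketch of the three preparation steps (the NTP source address is an assumption; use whatever your environment provides):

# 1. SSH trust, run on server1 and repeated in the other direction from server2
[root@server1 ~]# ssh-keygen
[root@server1 ~]# ssh-copy-id root@172.25.24.2

# 2. Hostname, on each node with its own name
[root@server1 ~]# hostname server1
[root@server1 ~]# sed -i 's/^HOSTNAME=.*/HOSTNAME=server1/' /etc/sysconfig/network

# 3. Time synchronization (172.25.24.250 assumed to be the NTP source)
[root@server1 ~]# ntpdate 172.25.24.250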

Now let's start the installation for this lab:

1. Both node VMs need the High Availability yum repositories; configure them as follows (a quick verification follows the repo definitions):
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.24.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[HighAvailability]
name=HighAvailability
baseurl=http://172.25.24.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.24.250/rhel6.5/LoadBalancer
gpgcheck=0

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.24.250/rhel6.5/ScalableFileSystem
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.24.250/rhel6.5/ResilientStorage
gpgcheck=0
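After saving this on both nodes (for example as /etc/yum.repos.d/rhel-source.repo; the file name is only an assumption), confirm the repositories resolve:

[root@server1 ~]# yum clean all
[root@server1 ~]# yum repolist    ## HighAvailability, LoadBalancer, ScalableFileSystem and ResilientStorage should be listed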
2. Server1 (node 1):
[root@server1 ~]# yum install ricci luci -y
[root@server1 ~]# chkconfig luci on    ## start at boot
[root@server1 ~]# chkconfig ricci on   ## start at boot
[root@server1 ~]# passwd ricci     ## set a password for the ricci user on both nodes; it may differ from root's password
[root@server1 ~]# /etc/init.d/ricci start ## start the ricci service
[root@server1 ~]# /etc/init.d/luci start   ## start the luci management service
3. server2 (node 2):
[root@server2 ~]# yum install ricci -y  ## install the ricci service
[root@server2 ~]# chkconfig ricci on   ## start at boot
[root@server2 ~]# /etc/init.d/ricci start  ## start the ricci service
[root@server2 ~]# passwd ricci        ## set the ricci user's password
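A quick check that both daemons came up (ricci listens on TCP 11111, luci on 8084):

[root@server1 ~]# netstat -antlp | grep -E '11111|8084'
[root@server2 ~]# netstat -antlp | grep 11111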

4. Log in through the web browser; luci listens on port 8084: https://172.25.24.1:8084

(screenshot: luci login page)
Log in on this page with a normal system account.

After logging in, click Manage Clusters -> Create to create a cluster and add the cluster nodes.

(screenshot)

Click Create Cluster. Clicking Create Cluster triggers the following actions:

a. If you selected Download Packages, the cluster software packages are downloaded onto the nodes.

b. The cluster software is installed on the nodes (or it is verified that the correct packages are already installed).

c. The cluster configuration file is updated and propagated to every node in the cluster.

d. The added nodes join the cluster. A message indicates that the cluster is being created; once the cluster is ready, the display shows the status of the newly created cluster.

Now click Create Cluster and the following appears; once the nodes have finished rebooting, they have been added.

(screenshots)
Note: in the normal state, the node names under Nodes and the Cluster Name are shown in green; if something is wrong, they turn red.

Test on either node VM:

(screenshot)
The nodes were added successfully.
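You can also verify the membership from the command line on either node (a quick check):

[root@server1 ~]# clustat            ## both nodes should be listed as Online
[root@server1 ~]# cman_tool status   ## quorum and membership details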

5. Add a fence device to the cluster

If one node in the cluster loses communication, the remaining nodes must be able to cut the failed node off from the shared resources it was using (shared storage, for example). The failed node cannot do this itself, because at that point it may no longer be responding (it may be hung), so an external mechanism is required. This mechanism is called fencing, carried out through a fence agent.

Without a fence device there is no way to know whether a disconnected node has actually released the resources it was using. If no fence agent (or device) is configured, the system may wrongly assume the node has released its resources, which can cause data corruption and data loss. Without fencing, data integrity cannot be guaranteed and the cluster configuration is not supported.

While a fencing action is in progress, no other cluster operations are allowed, including failing over services and acquiring new locks on GFS or GFS2 file systems. The cluster cannot return to normal operation until the fencing action completes, or until the fenced node has rebooted and rejoined the cluster.

A fence agent (or device) is an external mechanism the cluster can use to block a misbehaving node's access to shared storage (or to power-cycle that node).

[root@xiaoqin Desktop]# yum install fence-virtd-multicast fence-virtd fence-virtd-libvirt -y   ## fence-virtd-multicast provides the multicast listener, fence-virtd is the fencing daemon itself, fence-virtd-libvirt is the libvirt backend that maps fencing requests onto libvirt domains
## 1. Configure fence_virtd; prompts left blank keep their defaults, only the values typed below are changed
[root@xiaoqin Desktop]# fence_virtd -c  ## run the interactive configuration
#Module search path [/usr/lib64/fence-virt]: 

Available backends:
    libvirt 0.1
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

#Listener module [multicast]: 

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

#Multicast IP Address [225.0.0.12]: 

Using ipv4 as family.

#Multicast IP Port [1229]: 

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

#Interface [virbr0]: br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

#Key File [/etc/cluster/fence_xvm.key]: 

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

#Backend module [libvirt]: 

Configuration complete.
## the following configuration file is generated
=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y


## create the directory that will hold the key; it does not exist the first time, so create it by hand
[root@xiaoqin ~]# mkdir /etc/cluster
[root@xiaoqin ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000123739 s, 1.0 MB/s
[root@xiaoqin ~]# cd /etc/cluster/
[root@xiaoqin cluster]# ls
fence_xvm.key
[root@xiaoqin cluster]# file fence_xvm.key 
fence_xvm.key: data
[root@xiaoqin cluster]# netstat -anulp | grep fence_virtd
[root@xiaoqin cluster]# systemctl start fence_virtd.service  ## start the fence_virtd service
[root@xiaoqin cluster]# netstat -anulp | grep fence_virtd
udp        0      0 0.0.0.0:1229            0.0.0.0:*                           6910/fence_virtd    
Note: fence_xvm.key authenticates fencing requests against the cluster nodes; fencing of a node only works if the node also holds this key.
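The key therefore has to be copied to /etc/cluster/ on both nodes as well; a minimal sketch of distributing it from the host (the directory may need to be created on the nodes first):

[root@xiaoqin cluster]# ssh root@172.25.24.1 mkdir -p /etc/cluster
[root@xiaoqin cluster]# ssh root@172.25.24.2 mkdir -p /etc/cluster
[root@xiaoqin cluster]# scp /etc/cluster/fence_xvm.key root@172.25.24.1:/etc/cluster/
[root@xiaoqin cluster]# scp /etc/cluster/fence_xvm.key root@172.25.24.2:/etc/cluster/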

(screenshots)

The fence preparation is now done; next, create the fence device in the browser.

(screenshots)

Check on server1 or server2 whether the fence device was added successfully:

[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="2" name="xiaoqin">
    <clusternodes>

        <clusternode name="server1" nodeid="1"/>
        <clusternode name="server2" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="westos1"/>  ## this is the fence device just added
    </fencedevices>
</cluster>
Then add the fence method to each node; the steps on server1 and server2 are the same.

(screenshots)
What if the cluster node names do not match the hostnames of the real machines? In this lab they happen to match.
The virtual machine name is the libvirt domain name, while the cluster uses the hostname; you can use the VM's UUID in the fence instance to map each cluster node to the corresponding machine.

The steps on server2 are the same as on server1.

Check the configuration again:

[root@server1 cluster]# cat cluster.conf 
<?xml version="1.0"?>
<cluster config_version="6" name="xiaoqin">  ## cluster name
    <clusternodes>
        <clusternode name="server1" nodeid="1">    ## server1 is node 1
            <fence>
                <method name="fence1">   
                    <device domain="dc2c88a2-32f8-462d-bd6e-140801bb8d45" name="westos1"/>   ## the fence instance is tied to the VM by its UUID, which identifies it unambiguously
                </method>
            </fence>
        </clusternode>
        <clusternode name="server2" nodeid="2">
            <fence>
                <method name="fence2">
                    <device domain="dc2c88a2-32f8-462d-bd6e-140801bb8d45" name="westos1"/>
                </method>
            </fence>
        </clusternode>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
    <fencedevices>
        <fencedevice agent="fence_xvm" name="westos1"/>
    </fencedevices>
</cluster>
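Before relying on it, it is worth testing fencing from a node; a sketch (fence_xvm talks to fence_virtd on the host over multicast, so the key and the network path must be in place):

[root@server1 ~]# fence_xvm -o list      ## should list the virtual machines known to fence_virtd
[root@server1 ~]# fence_node server2     ## fences (power-cycles) server2 through the configured agent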

Create a Failover Domain

A failover domain restricts service and resource failover to a specified set of nodes. The steps below create one failover domain.

Click Failover Domains -> Add

(screenshot)
Prioritized: whether member priorities are used within this failover domain; enable it here.

Restricted: whether failover is restricted to the members of this domain; enable it here.

No Failback: whether to disable failback for this domain. With failback left enabled (box unchecked), when the primary node fails the standby automatically takes over its services and resources, and when the primary recovers, the services and resources automatically move back to it.

Then, under Member, tick the nodes that belong to this domain; here that is server1 and server2. Under Priority, set server1 to 1 and server2 to 2.

Note that the node with priority 1 has the highest priority; the larger the number, the lower the priority.

When everything is set, click Submit to create the failover domain.

(screenshot)

The cluster nodes, the fence device, and the failover domain are now in place; next, add Resources.

We use a web service as the example.

Click Resources -> Add and add an IP Address resource, the VIP (172.25.24.100).

(screenshot)
Monitor Link: monitor the link state
Disable Updates to Static Routes: whether to stop updating static routes
Number of Seconds to Sleep After Removing an IP Address: how long to wait after the IP address is removed

Click Resources -> Add again and add a Script resource.

Note: nginx built from source does not ship an init script, so you have to write one yourself; it can be adapted from a script in the source package.
In RHCS, Script-type resources are init scripts placed under /etc/init.d/.
(screenshot)

After adding the VIP and the script, click Resources again to confirm both are listed.

(screenshot)

At this point the nginx init script has to exist on both server1 and server2:

[root@server1 init.d]# vim nginx 
[root@server1 init.d]# /etc/init.d/nginx status
nginx is stopped
[root@server1 init.d]# cat nginx 
#!/bin/bash
# it is v.0.0.2 version.
# chkconfig: - 85 15
# description: Nginx is a high-performance web and proxy server.
#              It has a lot of features, but it's not for everyone.
# processname: nginx
# pidfile: /var/run/nginx.pid
# config: /usr/local/nginx/conf/nginx.conf

nginxd=/usr/local/nginx/sbin/nginx
nginx_config=/usr/local/nginx/conf/nginx.conf
nginx_pid=/var/run/nginx.pid

RETVAL=0
prog="nginx"

# Source function library.
. /etc/rc.d/init.d/functions

# Source networking configuration.
. /etc/sysconfig/network

# Check that networking is up.
[ ${NETWORKING} = "no" ] && exit 0

[ -x $nginxd ] || exit 0

# Start nginx daemons functions.
start() {
    if [ -e $nginx_pid ]; then
        echo "nginx already running...."
        exit 1
    fi
    echo -n $"Starting $prog: "
    daemon $nginxd -c ${nginx_config}
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch /var/lock/subsys/nginx
    return $RETVAL
}

# Stop nginx daemons functions.
stop() {
    echo -n $"Stopping $prog: "
    killproc $nginxd
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f /var/lock/subsys/nginx /var/run/nginx.pid
}

# reload nginx service functions.
reload() {
    echo -n $"Reloading $prog: "
    #kill -HUP `cat ${nginx_pid}`
    killproc $nginxd -HUP
    RETVAL=$?
    echo
}

# See how we were called.
case "$1" in
start)
        start
        ;;
stop)
        stop
        ;;
reload)
        reload
        ;;
restart)
        stop
        start
        ;;
status)
        status $prog
        RETVAL=$?
        ;;
*)
        echo $"Usage: $prog {start|stop|restart|reload|status|help}"
        exit 1
esac

exit $RETVAL
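The script must be executable and present (identical) on both nodes; a minimal sketch:

[root@server1 init.d]# chmod +x /etc/init.d/nginx
[root@server1 init.d]# /etc/init.d/nginx start && /etc/init.d/nginx stop   ## sanity-check start/stop by hand first
[root@server1 init.d]# scp /etc/init.d/nginx server2:/etc/init.d/
[root@server2 ~]# chmod +x /etc/init.d/nginx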

Select Service Groups and add a group

OK, the resources are in place. Next, define a service group (a resource only runs as part of a group); resources defined inside a group are private to that group.

(screenshot)
Automatically Start This Service: start the service automatically when the cluster starts
Run Exclusive: run exclusively (the service will not share a node with other services)
Failover Domain: the failover domain (choose the one created earlier, or leave it unset)
Recovery Policy: what to do when the service fails

After creating the group, add the resources to it in order; the order in which you add them is the order in which they are started (here the VIP first, then the nginx script).

(screenshots)
Maximum Number of Failures: maximum number of failures allowed
Failure Expire Time: time after which a failure is forgotten
Maximum Number of Restarts: maximum number of restarts
Restart Expire Time (seconds): time after which a restart is forgotten

(screenshot)
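Once the service group is submitted, it can also be managed from the command line with clusvcadm; a sketch (assuming the service group was named nginx):

[root@server1 ~]# clustat                          ## show cluster and service status
[root@server1 ~]# clusvcadm -e nginx               ## enable (start) the nginx service group
[root@server1 ~]# clusvcadm -r nginx -m server2    ## relocate it to server2
[root@server1 ~]# clusvcadm -d nginx               ## disable (stop) it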

Test whether the high-availability cluster works:

[root@server1 html]# pwd
/usr/local/nginx/html
[root@server1 html]# cat westos.html 
<h1>server1</h1>
[root@server1 html]# 


[root@server2 html]# pwd 
/usr/local/nginx/html
[root@server2 html]# cat westos.html 
<h1>server2</h1>
[root@server2 html]# 

(screenshot)

[root@server1 html]# /etc/init.d/nginx stop
Stopping nginx:                                            [  OK  ]
[root@server1 html]# 

After stopping nginx, refresh the page in the browser; the cluster detects the failure and the site stays reachable (nginx is restarted or relocated according to the recovery policy).
(screenshots)
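You can confirm where the service (and the VIP) is now running; a quick check (assuming the VIP is bound on eth0):

[root@server2 ~]# clustat | grep nginx                     ## the Owner column shows the node currently running the service
[root@server2 ~]# ip addr show eth0 | grep 172.25.24.100   ## the VIP follows the service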

Extending the HA cluster

1. Load balancing behind the HA cluster

Name resolution on the physical machine:
172.25.24.100 www.westos.org
server1 and server2 (the cluster nodes) both need the nginx change below; server3 and server4 are the backend web servers.

[root@server1 conf]# vim nginx.conf

(screenshot)

http {
    upstream westos {
        server 172.25.24.3:80;      # add this upstream block inside the http {} section
        server 172.25.24.4:80;
    }
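The upstream block alone does nothing until a server block proxies to it; a minimal sketch of the relevant part of nginx.conf (the server_name is taken from the hosts entry above, the rest is an assumption about this setup):

http {
    upstream westos {
        server 172.25.24.3:80;
        server 172.25.24.4:80;
    }

    server {
        listen 80;
        server_name www.westos.org;

        location / {
            proxy_pass http://westos;     # forward requests to the upstream group
        }
    }
}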

Restart nginx, then test from the physical machine:

[root@foundation24 mnt]# curl www.westos.org
www.westos.org  --server4
[root@foundation24 mnt]# curl www.westos.org
www.westos.org      -server3
[root@foundation24 mnt]# curl www.westos.org
www.westos.org  --server4
[root@foundation24 mnt]# curl www.westos.org
www.westos.org      -server3
[root@foundation24 mnt]# curl www.westos.org
www.westos.org  --server4
[root@foundation24 mnt]# curl www.westos.org
www.westos.org      -server3
[root@foundation24 mnt]# 

2. VIP and shared storage

(screenshots)

server1 and server2 form the cluster; to give them synchronized shared storage, add another VM, server3, and attach an extra virtual disk to it.
server3 acts as the storage (iSCSI target) server.
[root@server3 html]# fdisk -l
Disk /dev/vdb: 8589 MB, 8589934592 bytes  ## this is the newly added disk
[root@server3 ~]# yum install scsi-* -y    ## install the iSCSI target packages (scsi-target-utils)

[root@server3 ~]# cd /etc/tgt
[root@server3 tgt]# vim targets.conf

(screenshot)

<target iqn.2018-08.com.example:server.target1>
        backing-store /dev/vdb
        initiator-address 172.25.24.1
        initiator-address 172.25.24.2
</target>

[root@server3 tgt]# /etc/init.d/tgtd start
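A quick way to confirm the target is exported as intended (tgt-admin ships with scsi-target-utils):

[root@server3 tgt]# tgt-admin --show     ## should list target1 with /dev/vdb as a LUN and the two initiator ACLs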
On both server1 and server2:
yum install -y iscsi-*
iscsiadm -m discovery -t st -p 172.25.24.3
iscsiadm -m node -l
[root@server2 ~]# iscsiadm -m discovery -t st -p 172.25.24.3
Starting iscsid:                                           [  OK  ]
172.25.24.3:3260,1 iqn.2018-08.com.example:server.target1
[root@server2 ~]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2018-08.com.example:server.target1, portal: 172.25.24.3,3260] (multiple)
Login to [iface: default, target: iqn.2018-08.com.example:server.target1, portal: 172.25.24.3,3260] successful.

[root@server2 ~]# cat /proc/partitions 
major minor  #blocks  name

 252        0   20971520 vda   
 252        1     512000 vda1
 252        2   20458496 vda2
 253        0   19439616 dm-0
 253        1    1015808 dm-1
   8        0    8388608 sda   ## this is the shared disk exported by server3
The following only has to be done on one of the nodes; the result is visible on the other:
[root@server1 ~]# pvcreate /dev/sda   ## create a PV
  Physical volume "/dev/sda" successfully created
[root@server1 ~]# vgcreate clustervg /dev/sda    ## create a (clustered) VG
  Clustered volume group "clustervg" successfully created
[root@server1 ~]# lvcreate -L 4G -n demo clustervg   ## carve out a 4G LV named demo
  Logical volume "demo" created
[root@server1 ~]# mkfs.ext4 /dev/clustervg/demo 
Check on server2:
[root@server2 ~]# pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda            lvm2 a--   8.00g 8.00g
  /dev/vda2  VolGroup lvm2 a--  19.51g    0 

[root@server2 ~]# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  VolGroup    1   2   0 wz--n- 19.51g    0 
  clustervg   1   1   0 wz--nc  8.00g 4.00g
[root@server2 ~]# lvs
  LV      VG        Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup  -wi-ao----  18.54g                                             
  lv_swap VolGroup  -wi-ao---- 992.00m                                             
  demo    clustervg -wi-a-----   4.00g   
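The 'c' in the VG attribute string (wz--nc) means the volume group is clustered; the second node only sees it because cluster LVM locking (clvmd) is active. If the VG does not show up on the other node, a sketch of what to check on both nodes:

[root@server1 ~]# lvmconf --enable-cluster      ## sets locking_type = 3 in /etc/lvm/lvm.conf
[root@server1 ~]# /etc/init.d/clvmd start
[root@server1 ~]# chkconfig clvmd on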

Then install MySQL on both nodes:

[root@server2 ~]# yum install mysql mysql-server -y
[root@server1 ~]# yum install -y mysql mysql-server

Then, starting on server2:

[root@server2 ~]# ll -d /var/lib/mysql/
drwxr-xr-x 2 mysql mysql 4096 Aug  9  2013 /var/lib/mysql/
[root@server2 ~]# cd /var/lib/mysql/
[root@server2 mysql]# ls
[root@server2 mysql]# ll -d .
drwxr-xr-x 2 mysql mysql 4096 Aug  9  2013 .
[root@server2 mysql]# mount /dev/clustervg/demo /var/lib/mysql/  ## mount the LV on /var/lib/mysql
[root@server2 mysql]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1228944  16933408   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
/dev/mapper/clustervg-demo     4128448  139256   3779480   4% /var/lib/mysql
[root@server2 mysql]# cd
[root@server2 ~]# chown mysql.mysql /var/lib/mysql/  
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1228944  16933408   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
/dev/mapper/clustervg-demo     4128448  139256   3779480   4% /var/lib/mysql
[root@server2 ~]# /etc/init.d/mysqld stop
Stopping mysqld:                                           [  OK  ]
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1228944  16933408   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
/dev/mapper/clustervg-demo     4128448  139256   3779480   4% /var/lib/mysql
[root@server2 ~]# umount /var/lib/mysql/
[root@server2 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1228944  16933408   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot

Server1:

[root@server1 ~]# mount /dev/clustervg/demo /var/lib/mysql/
[root@server1 ~]# ll -d /var/lib/mysql/
drwxr-xr-x 3 mysql mysql 4096 Aug  2 15:46 /var/lib/mysql/
[root@server1 ~]# cd /var/lib/mysql/
[root@server1 mysql]# ls
lost+found
[root@server1 mysql]# /etc/init.d//mysqld start
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h server1 password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

                                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[root@server1 mysql]# /etc/init.d//mysqld stop
Stopping mysqld:                                           [  OK  ]
[root@server1 mysql]# ls   ## after starting mysql here, the database files have been created on the shared volume
ibdata1  ib_logfile0  ib_logfile1  lost+found  mysql  test
[root@server1 mysql]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1243916  16918436   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
/dev/mapper/clustervg-demo     4128448  160832   3757904   5% /var/lib/mysql
[root@server1 mysql]# umount /var/lib/mysql/
umount: /var/lib/mysql: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))
[root@server1 mysql]# cd
[root@server1 ~]# umount /var/lib/mysql/
[root@server1 ~]# df
Filesystem                   1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root  19134332 1243912  16918440   7% /
tmpfs                           510188   25656    484532   6% /dev/shm
/dev/vda1                       495844   33458    436786   8% /boot
[root@server1 ~]# clustat 
Cluster Status for xiaoqin @ Thu Aug  2 17:02:21 2018
Member Status: Quorate

 Member Name                            ID   Status
 ------ ----                            ---- ------
 server1                                    1 Online, Local, rgmanager
 server2                                    2 Online, rgmanager

 Service Name                  Owner (Last)                  State         
 ------- ----                  ----- ------                  -----         
 service:mysql                 server1                       started       
[root@server1 ~]# mysql
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.71 Source distribution

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> 
mysql> show databases;
+---------------------+
| Database            |
+---------------------+
| information_schema  |
| #mysql50#lost+found |
| mysql               |
| test                |
+---------------------+
4 rows in set (0.00 sec)
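To confirm that the mysql service actually fails over (assuming it was defined in luci as a service group containing the VIP, the file system, and the mysqld script), relocate it and watch clustat:

[root@server1 ~]# clusvcadm -r mysql -m server2    ## relocate the mysql service to server2
[root@server1 ~]# clustat                          ## the Owner column should now show server2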

Reprinted from blog.csdn.net/a939029674/article/details/81387110