Linux Enterprise Practice: LVS + Pacemaker High-Availability Cluster Construction

Environment setup

Since RHEL 8 is not yet widely adopted in enterprises, we use RHEL 7.6 virtual machines for this project.

In this experiment we need three virtual machines:

Host home: 172.25.19.10

server1: 172.25.19.1

server2: 172.25.19.2

Preparing the virtual machine template

1. Set the hostname

[root@localhost ~]# hostnamectl set-hostname home
[root@localhost ~]# hostname
home

2. Network file configuration

[root@localhost network-scripts]# vim ifcfg-eth0
[root@localhost network-scripts]# cat ifcfg-eth0
BOOTPROTO=none
DEVICE=eth0
ONBOOT=yes
IPADDR=172.25.19.10
PREFIX=24

3. Configure host resolution

[root@localhost network-scripts]# vim /etc/hosts
[root@localhost network-scripts]# cat /etc/hosts

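The screenshot of the resolution file did not survive extraction; based on the three machines listed above, the added /etc/hosts entries would look like this (a sketch, assuming the default loopback lines are kept above them):

```
172.25.19.10   home
172.25.19.1    server1
172.25.19.2    server2
```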

4. Disable conflicting services

systemctl disable --now NetworkManager
systemctl disable --now firewalld
vim /etc/sysconfig/selinux 

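The screenshot of the SELinux configuration is missing; disabling SELinux in /etc/sysconfig/selinux amounts to setting this line (a sketch of the relevant entry):

```
SELINUX=disabled
```

The change takes effect after a reboot; `setenforce 0` can switch to permissive mode for the current session.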

5. Configure yum source

mount /iso/rhel-server-7.6-x86_64-dvd.iso /var/www/html/rhel7.6/
vim /etc/yum.repos.d/westos.repo

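The screenshot of westos.repo is missing; a repo file consistent with the one shown later for server1 would look like this (a sketch, assuming the mounted ISO is served over HTTP from 172.25.19.250):

```
[rhel7]
name=rhel7
baseurl=http://172.25.19.250/rhel7.6/
gpgcheck=0
```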

Implementation of high-availability cluster

1. Environment setup

Clone the prepared virtual machine twice, rename the clones server1 and server2, and set their IPs to 172.25.19.1 and 172.25.19.2 respectively.

2. Update the yum repository configuration for high availability

[root@server1 yum.repos.d]# cat westos.repo 
[rhel7]
name=rhel7
baseurl=http://172.25.19.250/rhel7.6/
gpgcheck=0

[addons]
name=HighAvailability
baseurl=http://172.25.19.250/rhel7.6/addons/HighAvailability
gpgcheck=0

3. Passwordless SSH between nodes

On server1:

ssh-keygen
ssh-copy-id server2


4. Deploying the high-availability cluster

On server1:

yum install -y pacemaker pcs psmisc policycoreutils-python    # install the cluster packages
ssh server2 yum install -y pacemaker pcs psmisc policycoreutils-python
systemctl enable --now pcsd
systemctl status pcsd.service
ssh server2 systemctl enable --now pcsd
echo westos | passwd --stdin hacluster                        # set a password for the hacluster user
ssh server2 'echo westos | passwd --stdin hacluster'          # quote so the pipe runs on server2
yum install bash-* -y                                         # bash completion for pcs
pcs cluster auth server1 server2                              # authenticate the nodes (corosync configuration)

# On one node, use pcs cluster setup to generate and synchronize the corosync configuration
pcs cluster setup --name mycluster server1 server2
pcs cluster start --all                                       # start the cluster
corosync-cfgtool -s                                           # check that cluster communication is healthy
pcs cluster status


5. Add resources to the highly available cluster

[root@server1 yum.repos.d]# pcs property set stonith-enabled=false
[root@server1 yum.repos.d]# crm_verify -LV

pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.254.100 op monitor interval=30s
pcs status
yum install -y httpd
ssh server2 yum install -y httpd
systemctl enable --now httpd
ssh server2 systemctl enable --now httpd
pcs resource create apache systemd:httpd op monitor interval=1min
pcs resource group add webgroup vip apache

Final result:


If server1 goes down, the resources automatically fail over to server2, which is the point of the cluster.
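A simple way to exercise the failover, assuming the webgroup resources are currently running on server1 (these commands require the cluster built in the steps above):

```
pcs node standby server1      # pacemaker moves vip and apache to server2
pcs status                    # the webgroup resources should show Started server2
curl http://172.25.254.100    # the VIP still answers, now served by server2
pcs node unstandby server1    # bring server1 back as an active node
```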



Origin blog.csdn.net/qq_42958401/article/details/109297154