Cluster Architecture: Building an LVS+Keepalived High-Availability Cluster

Background

In this highly information-based IT era, an enterprise's production systems, business operations, sales and support, and daily management depend ever more heavily on computer information services, which has greatly increased the demand for high-availability (HA) technology that can deliver continuous, uninterrupted system and network services.

1.Keepalived overview

1.1 Introduction to Keepalived tool

Keepalived is a health-check tool designed specifically for LVS and HA scenarios:

  • Support automatic failover (Failover);
  • Support node health check (Health Checking);
  • Official website: http://www.keepalived.org/


1.2 Keepalived implementation principle

  • Keepalived uses the VRRP (Virtual Router Redundancy Protocol) hot backup protocol to implement multi-machine hot backup for Linux servers in software.

Principle analysis:

VRRP (Virtual Router Redundancy Protocol) is a backup solution for routers:

  • Multiple routers form a hot standby group that provides service to the outside through a shared virtual IP address;
  • At any time only one router in the hot standby group, the master, provides service; the others remain in a redundant state;
  • If the currently active router fails, another router automatically takes over the virtual IP address according to the configured priorities and continues to provide service (the packet capture sketch below shows the underlying heartbeat)
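
To observe this heartbeat on the wire, you can capture the master's periodic VRRP advertisements (IP protocol 112, multicast to 224.0.0.18); a quick sketch, assuming the carrier interface is ens33 as in this lab:

[root@lvs_1 ~]# tcpdump -i ens33 -nn vrrp		// watch the master's periodic VRRP advertisements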

1.3 Keepalived installation and startup

  • When used in an LVS cluster environment, the ipvsadm management tool is also required;
  • Install Keepalived via YUM;
  • Enable the Keepalived service (a minimal sequence is sketched below).
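
A minimal install-and-enable sequence on CentOS 7, assuming the base repositories carry both packages (the project in section 5 compiles keepalived from source instead):

[root@lvs ~]# yum -y install ipvsadm keepalived	// Keepalived plus the LVS management tool
[root@lvs ~]# systemctl enable keepalived		// start automatically at boot
[root@lvs ~]# systemctl start keepalived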

2.Keepalived case

2.1 Case analysis

  • In enterprise applications, a single server carries the risk of being a single point of failure for the application;
  • Once that single point fails, the enterprise's services are interrupted, causing serious losses.


2.2 Case explanation

  • Failover in a dual-system hot backup is achieved by drifting a virtual IP address, an approach suitable for all kinds of application servers;
  • Realize dual-system hot backup based on Web services
    Drift address: 192.168.70.200
    Primary and standby servers: 192.168.70.9, 192.168.70.10
    Application services provided: Web

3. Configure Keepalived server

3.1 Configure Keepalived master server

3.1.1 Configuration directory and main configuration file

  • The Keepalived configuration directory is located at /etc/keepalived/
  • keepalived.conf is the main configuration file:
    the global_defs {...} section specifies global parameters;
    the vrrp_instance <instance name> {...} section specifies the VRRP hot standby parameters;
    comment text starts with the "!" symbol;
    the samples directory provides many configuration examples for reference.

3.1.2 Common configuration options

  • router_id HA_TEST_R1: the name of the router (server);
  • vrrp_instance VI_1: defines a VRRP hot standby instance;
  • state MASTER: hot standby state; MASTER denotes the master server;
  • interface ens33: the physical interface that carries the VIP address;
  • virtual_router_id 1: the virtual router's ID number, which must be the same for every member of a hot standby group;
  • priority 100: priority; the larger the value, the higher the priority (default 100);
  • advert_int 1: seconds between advertisements (the heartbeat frequency);
  • auth_type PASS: authentication type;
  • auth_pass 123456: password string;
  • virtual_ipaddress {vip}: specifies the drift address (VIP); there can be more than one (see the assembled sketch below).
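
Assembled into a file, the options above look roughly like this; a minimal master-side sketch using the example values from the list, not a complete production configuration:

! minimal master-side sketch
global_defs {
    router_id HA_TEST_R1		! name of this server
}
vrrp_instance VI_1 {
    state MASTER			! this node starts as master
    interface ens33			! physical interface carrying the VIP
    virtual_router_id 1			! must match on every member of the group
    priority 100			! highest priority wins the election
    advert_int 1			! heartbeat interval in seconds
    authentication {
        auth_type PASS
        auth_pass 123456
    }
    virtual_ipaddress {
        192.168.70.200			! the drift (VIP) address
    }
}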

3.2 Configure Keepalived slave server

The configuration of the Keepalived backup server is different from the master configuration in three options:

  • router_id: set to this server's own name
  • state: set to BACKUP
  • priority: use a priority value lower than the master's
  • All other options are the same as on the master (the sketch below shows only the differing lines)
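
A hedged sketch of the backup side, mirroring the master example above; router_id HA_TEST_R2 and priority 90 are illustrative values:

! backup-side sketch: only these lines differ from the master
global_defs {
    router_id HA_TEST_R2		! this server's own name
}
vrrp_instance VI_1 {
    state BACKUP			! starts as backup
    priority 90				! lower than the master's 100
    ...					! everything else identical to the master
}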

4. Keepalived dual machine hot backup effect test

Test the dual-machine hot backup behavior:

  • Enable Web services on both the master and the backup, serving different content;
  • Disable and then re-enable the master server's network card.

Tests performed:

  • Test 1: Use ping to check the connectivity of 192.168.70.200;
  • Test 2: Visit http://192.168.70.200 to confirm availability and watch the content change;
  • Test 3: Watch the log file /var/log/messages for state changes (command equivalents are sketched below).
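
The three tests map to simple commands; a sketch, assuming a client machine on the same segment (the host names in the prompts are illustrative):

[root@client ~]# ping -c 4 192.168.70.200			// Test 1: VIP connectivity
[root@client ~]# curl http://192.168.70.200/			// Test 2: page content before and after failover
[root@master ~]# tail -f /var/log/messages | grep -i vrrp	// Test 3: watch VRRP state transitions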

5. Based on LVS+Keepalived high-availability cluster project

5.1 Project environment

(1) Two LVS load dispatch servers

  • IP address: 192.168.70.9
    Virtual-ip: 192.168.70.200
  • IP address: 192.168.70.10
    Virtual-ip: 192.168.70.200

(2) Two Web site servers

  • IP address: 192.168.70.11 (SERVER AA)
  • IP address: 192.168.70.12 (SERVER AB)
    Note: The gateway of the web server here does not need to point to the dispatcher network card

(3) One NFS shared server

  • IP address: 192.168.70.13

(4) One client computer for testing and verification

  • IP address: 192.168.70.14

Note: make sure that all hosts on this network segment can communicate with one another

5.2 Experiment purpose

  • The client accesses the virtual address 192.168.70.200 through the Keepalived master server, which distributes requests in round-robin fashion to the Apache1 and Apache2 hosts; when the master fails, the backup server takes over as master and carries out the master's functions;
  • Build an NFS network file storage service.

5.3 Project steps

5.3.1 Configure NFS storage server

[root@nfs ~]# rpm -qa | grep rpcbind		// the rpcbind module is installed by default on the VM
rpcbind-0.2.0-42.el7.x86_64
[root@nfs ~]# yum -y install nfs-utils	// confirm the nfs-utils package is installed
Loaded plugins: fastestmirror, langpacks
base                                                     | 3.6 kB     00:00     
Loading mirror speeds from cached hostfile
 * base: 
Package 1:nfs-utils-1.3.0-0.48.el7.x86_64 already installed and latest version
Nothing to do
[root@nfs ~]# mkdir /opt/web1
[root@nfs ~]# mkdir /opt/web2
[root@nfs ~]# echo "<h1>this is web1.</h1>" > /opt/web1/index.html
[root@nfs ~]# echo "<h1>this is web2.</h1>" > /opt/web2/index.html
[root@nfs ~]# vi /etc/exports
/opt/web1 192.168.70.11/32(ro)
/opt/web2 192.168.70.12/32(ro)
[root@nfs ~]# systemctl restart rpcbind
[root@nfs ~]# systemctl restart nfs
[root@nfs ~]# showmount -e
Export list for nfs:
/opt/web2 192.168.70.12/32
/opt/web1 192.168.70.11/32
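
After editing /etc/exports, the export table can also be reloaded without restarting the whole service; a small sketch using exportfs:

[root@nfs ~]# exportfs -rv		// re-export everything listed in /etc/exports
[root@nfs ~]# exportfs			// list the directories currently being exported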

5.3.2 Configure Web Site Server

  • Configuration on Web1
[root@web1 ~]# yum -y install httpd
[root@web1 ~]# showmount -e 192.168.70.13
Export list for 192.168.70.13:
/opt/web2 192.168.70.12/32
/opt/web1 192.168.70.11/32
[root@web1 ~]# mount 192.168.70.13:/opt/web1 /var/www/html
[root@web1 ~]# systemctl restart httpd
[root@web1 ~]# netstat -anpt | grep httpd
tcp6       0      0 :::80                   :::*                    LISTEN      55954/httpd   
[root@web1 ~]# vi web1.sh
#!/bin/bash
# LVS-DR mode: web1 — configure the VIP on lo:0 and suppress ARP for it
ifconfig lo:0 192.168.70.200 broadcast 192.168.70.200 netmask 255.255.255.255 up
route add -host 192.168.70.200 dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &>/dev/null

[root@web1 ~]# sh web1.sh
[root@web1 ~]# ifconfig		// check the virtual interface lo:0
[root@web1 ~]# route -n 	// check the routing table
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.70.10   0.0.0.0         UG    100    0        0 ens33
192.168.70.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.70.200  0.0.0.0         255.255.255.255 UH    0      0        0 lo
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
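
The VIP and ARP settings made by web1.sh do not survive a reboot; one way to persist them (a sketch, assuming the script stays at /root/web1.sh) is to hook it into rc.local. The same applies to web2 with web2.sh:

[root@web1 ~]# echo "sh /root/web1.sh" >> /etc/rc.local	// run the VIP/ARP setup at boot
[root@web1 ~]# chmod +x /etc/rc.local			// on CentOS 7, rc.local must be executable to run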


  • Configuration on Web2
[root@web2 ~]# yum -y install httpd
[root@web2 ~]# mount 192.168.70.13:/opt/web2 /var/www/html
[root@web2 ~]# systemctl start httpd
[root@web2 ~]# netstat -anpt | grep httpd
tcp6       0      0 :::80                   :::*                    LISTEN      54695/httpd 
[root@web2 ~]# vi web2.sh
#!/bin/bash
# LVS-DR mode: web2 — configure the VIP on lo:0 and suppress ARP for it
ifconfig lo:0 192.168.70.200 broadcast 192.168.70.200 netmask 255.255.255.255 up
route add -host 192.168.70.200 dev lo:0
echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce
sysctl -p &>/dev/null

[root@web2 ~]# sh web2.sh
[root@web2 ~]# ifconfig		// check the virtual interface lo:0
[root@web2 ~]# route -n 	// check the routing table
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.70.10   0.0.0.0         UG    100    0        0 ens33
192.168.70.0    0.0.0.0         255.255.255.0   U     100    0        0 ens33
192.168.70.200  0.0.0.0         255.255.255.255 UH    0      0        0 lo
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0


5.3.3 Configure LVS scheduler

Copy the keepalived source package (keepalived-2.0.13.tar.gz) to both schedulers.

1) Perform the same configuration below on both LVS schedulers

  • Load the ip_vs module
[root@lvs ~]# modprobe ip_vs     	  '// load the ip_vs module'
[root@lvs ~]# cat /proc/net/ip_vs      '// check the ip_vs version information'
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port Forward Weight ActiveConn InActConn
  • Install ipvsadm
[root@lvs ~]# yum -y install ipvsadm
  • Install the build dependencies, then compile and install keepalived
[root@lvs ~]# yum -y install gcc gcc-c++ make popt-devel kernel-devel openssl-devel
[root@lvs ~]# tar zxvf keepalived-2.0.13.tar.gz
[root@lvs ~]# cd keepalived-2.0.13/
[root@lvs keepalived-2.0.13]# ./configure --prefix=/
[root@lvs keepalived-2.0.13]# make && make install
[root@lvs keepalived-2.0.13]# cp keepalived/etc/init.d/keepalived /etc/init.d/
[root@lvs keepalived-2.0.13]# systemctl enable keepalived.service
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
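
A quick sanity check that the compiled binary is installed and on the PATH; keepalived prints its version with -v:

[root@lvs ~]# keepalived -v		// confirm the installed version before configuring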

2) Configuration on LVS_1

[root@lvs_1 ~]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lvs_1
}
vrrp_instance vi_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 110
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 6666
    }
    virtual_ipaddress {
        192.168.70.200
    }
}
virtual_server 192.168.70.200 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 6
    protocol TCP
    real_server 192.168.70.11 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.70.12 80 {
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
[root@lvs_1 ~]# systemctl start keepalived.service
[root@lvs_1 ~]# ip addr


[root@lvs_1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.200:80 rr persistent 6
  -> 192.168.70.11:80             Route   1      0          0         
  -> 192.168.70.12:80             Route   1      0          0 

3) Configuration on LVS_2

[root@lvs_2 ~]# vi /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id lvs_2
}
vrrp_instance vi_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 105
......	// the rest is identical to the LVS_1 configuration

[root@lvs_2 ~]# systemctl start keepalived.service
[root@lvs_2 ~]# systemctl status keepalived.service


[root@lvs_2 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.70.200:80 rr persistent 6
  -> 192.168.70.11:80             Route   1      0          0         
  -> 192.168.70.12:80             Route   1      0          0

5.4 Verification results

  • Verify on the client by opening http://192.168.70.200 in a browser

  • View the scheduling details on the scheduler (for example with ipvsadm -Ln)


  • When the main server fails (simulated here by stopping keepalived on LVS_1; verify as sketched below):
[root@lvs_1 ~]# systemctl stop keepalived.service
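
With keepalived stopped on LVS_1, the VIP should drift to LVS_2 within a few advertisement intervals; a hedged way to verify from the shell:

[root@lvs_2 ~]# ip addr show ens33 | grep 192.168.70.200	// the VIP should now appear on the backup
[root@lvs_2 ~]# ipvsadm -Ln					// the scheduling table still lists both real servers
[root@client ~]# curl http://192.168.70.200/			// client access continues uninterrupted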


Origin: blog.csdn.net/weixin_42449832/article/details/110940851