Keepalived-based VIP failover and high availability for LVS and nginx

1. Keepalived high-availability cluster solution

2. The VRRP finite state machine

3. Using keepalived to fail the VIP over between nodes

4. Custom notifications on state transitions

5. Load balancing

6. High availability for nginx

1. Keepalived high-availability cluster solution

(figure: keepalived architecture)

Keepalived was originally created to provide high availability for ipvs. It generates the ipvs rules itself from its configuration (no manual ipvsadm needed), and when the backup node stops receiving advertisements from the master, it automatically takes over the VIP and the ipvs rules and continues serving clients. It also performs health checks on the back-end realservers.

When keepalived starts on a node, it spawns a master process, which in turn spawns two child processes: the VRRP Stack (implements the VRRP protocol) and the Checkers (health checking of the back-end ipvs realservers).
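To see this process tree after startup (a quick check, not part of the original write-up):

[root@node1 ~]# ps aux | grep keepalived   # one parent plus the VRRP and Checkers children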

2. The VRRP finite state machine

(figure: the VRRP finite state machine)

When VRRP starts on both nodes, each begins in the BACKUP state and advertises its priority to the other node. The node with the higher priority transitions to MASTER and the other stays BACKUP; the service then runs on the MASTER node, which holds the VIP and the virtual MAC (VMAC) and serves users. If the MASTER node fails, it drops to BACKUP and its priority falls; the other node is promoted to MASTER, the VIP and VMAC move to it, and it continues serving users.

Lab environment:

Host OS version:

CentOS 6.4 (i686)

Two nodes:

node1.limian.com 172.16.6.10

node2.limian.com 172.16.6.1

Preparation

1. On node 1:

Synchronize the time:

[root@node1 ~]# ntpdate 172.16.0.1

Install keepalived:

[root@node1 ~]# yum -y install keepalived

2. Do the same on node 2.
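Optionally (my own addition, assuming the time server 172.16.0.1 used above stays reachable), a cron entry keeps the two clocks from drifting apart during the lab:

[root@node1 ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate 172.16.0.1 &> /dev/null   # re-sync every 5 minutes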

3. Using keepalived to fail the VIP over between nodes

3.1 Modify the keepalived configuration file:

[root@node1 ~]# cd /etc/keepalived/
[root@node1 keepalived]# cp keepalived.conf keepalived.conf.back   # back up the configuration file first
[root@node1 keepalived]# vim keepalived.conf

3.2 The global section

global_defs {
   notification_email {                        # define the mail notification recipients
        root@localhost                         # recipient; the local machine is fine for testing
   }
   notification_email_from kaadmin@localhost   # define the sender
   smtp_server 127.0.0.1                       # mail server; do not use an external address here
   smtp_connect_timeout 30                     # connection timeout
   router_id LVS_DEVEL
}

3.3 Define the VRRP instance

 

vrrp_instance VI_1 {              # define a virtual router; the identifier VI_1 is a name of your choosing
    state MASTER                  # initial state; this node's priority is higher than the other node's, so it becomes MASTER
    interface eth0                # all advertisements and other traffic go out through eth0
    virtual_router_id 7           # virtual router ID; it is also the last byte of the virtual MAC
                                  # (ID 7 gives 00:00:5e:00:01:07); it must not exceed 255 and must not conflict
    priority 100                  # initial priority
    advert_int 1                  # advertisement interval in seconds
    authentication {              # authentication
        auth_type PASS            # authentication type
        auth_pass 1111            # password; this should be a random string
    }
    virtual_ipaddress {           # the virtual address, i.e. the VIP
        172.16.6.100
    }
}

Now that the master node's configuration file is done, copy it over to the slave node, then make the appropriate changes there:

[root@node1 keepalived]# scp keepalived.conf 172.16.6.1:/etc/keepalived/

3.4 Log on to the slave node:

[root@node2 ~]# cd /etc/keepalived/
[root@node2 keepalived]# vim keepalived.conf
vrrp_instance VI_1 {
    state BACKUP                  # change the state: the master node is MASTER, this slave node is BACKUP
    interface eth0
    virtual_router_id 7
    priority 99                   # change the priority; note that it must be lower than the master node's
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.6.100
    }
}

3.5 Start the service on the master node

[root@node1 keepalived]# service keepalived start
[root@node1 ~]# ip addr show   # check that the VIP we defined is there

(screenshot: ip addr show output with the VIP on node1)
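To watch the advertisements themselves (an extra check, not part of the original lab), note that VRRP multicasts to 224.0.0.18:

[root@node1 ~]# tcpdump -nn -i eth0 host 224.0.0.18   # with advert_int 1, one advertisement per second from the MASTER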

3.6 Start the service on the slave node

[root@node2 keepalived]# service keepalived start

Stop the service on the primary node, and the VIP moves over to the slave node:

[root@node2 ~]# ip addr show

(screenshot: ip addr show output with the VIP now on node2)

3.7 Start the service on the primary node again

 

[root@node1 ~]# service keepalived start
[root@node1 ~]# ip addr show   # checking again, the VIP has moved back to the master node

Note:

 

By default VRRP works in "preemptive mode": when the higher-priority node comes back up, it immediately takes the VIP and VMAC back from the node currently holding them. In "non-preemptive mode", no matter how high its priority, a recovering node will not grab the VIP and VMAC from the current MASTER; the roles only change when the current MASTER itself fails.
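A minimal sketch of a non-preemptive instance (keepalived's nopreempt option requires the initial state to be BACKUP on both nodes; the differing priorities then only decide the first election):

vrrp_instance VI_1 {
    state BACKUP        # both nodes start as BACKUP when nopreempt is used
    nopreempt           # a recovering higher-priority node leaves the current MASTER alone
    interface eth0
    virtual_router_id 7
    priority 100        # 99 on the other node; only matters for the initial election
    ...                 # the rest as in section 3.3
}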

 

4. Custom notifications on state transitions

 

4.1 This is done with a script

 

On the master node:

 

[root@node1 ~]# cd /etc/keepalived/
[root@node1 keepalived]# vim notify.sh   # write the notification script
#!/bin/bash
vip=172.16.6.100
contact='root@localhost'
thisip=`ifconfig eth0 | awk '/inet addr:/{print $2}' | awk -F: '{print $2}'`
notify() {
    mailsubject="$thisip is to be $vip master"
    mailbody="vrrp transition, $vip floated to $thisip"
    echo $mailbody | mail -s "$mailsubject" $contact
}
case "$1" in
    master)
        notify master
        exit 0
    ;;
    backup)
        notify backup
        exit 0
    ;;
    fault)
        notify fault
        exit 0
    ;;
    *)
        echo "Usage: $(basename $0) {master|backup|fault}"
        exit 1
    ;;
esac
[root@node1 keepalived]# chmod +x notify.sh
[root@node1 keepalived]# ./notify.sh master
[root@node1 keepalived]# mail   # check whether the notification arrived
Heirloom Mail version 12.4 7/29/08.  Type ? for help.
"/var/spool/mail/root": 1 message 1 new
>N  1 root                  Wed Sep 25 14:54  18/668   "172.16.6.10 is to be 172.16.6.100 mas"
&

Check that the other state transitions also send a notification:

 

[root@node1 keepalived]# ./notify.sh backup
[root@node1 keepalived]# ./notify.sh fault
[root@node1 keepalived]# mail
Heirloom Mail version 12.4 7/29/08.  Type ? for help.
"/var/spool/mail/root": 3 messages 2 new
    1 root                  Wed Sep 25 14:54  19/679   "172.16.6.10 is to be 172.16.6.100 mas"
>N  2 root                  Wed Sep 25 14:57  18/668   "172.16.6.10 is to be 172.16.6.100 mas"
 N  3 root                  Wed Sep 25 14:57  18/668   "172.16.6.10 is to be 172.16.6.100 mas"
&

The script works, so now edit the configuration file:

 

[root@node1 keepalived]# vim keepalived.conf

Add this in the global section:

 

vrrp_script chk_mantaince_down {   # define a script that lets us fail over manually
   script "[[ -f /etc/keepalived/down ]] && exit 1 || exit 0"
   interval 1                      # check interval in seconds
   weight -2                       # if the check fails, subtract 2 from the priority
}

With weight -2, creating /etc/keepalived/down on the master drops its effective priority from 100 to 98, below the slave's 99, which is what triggers the failover.

Add the following lines in the vrrp_instance section:

 

track_script {                     # reference the script defined above
       chk_mantaince_down
    }
    notify_master "/etc/keepalived/notify.sh master"
    notify_backup "/etc/keepalived/notify.sh backup"
    notify_fault "/etc/keepalived/notify.sh fault"

 

4.2 Copy the script to the other node

 

[root@node1 keepalived]# scp notify.sh 172.16.6.1:/etc/keepalived/

Then add the same lines at the corresponding places in that node's configuration file.

 

Restart the service on both nodes.
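For completeness (the standard SysV commands on CentOS 6):

[root@node1 keepalived]# service keepalived restart
[root@node2 keepalived]# service keepalived restart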

 

4.3 Demote the master node to a slave node

[root@node1 keepalived]# touch down

Watching the logs, the master node immediately becomes a slave, and the notification mail arrives:

 

[root@node1 keepalived]# tail -f /var/log/messages
You have new mail in /var/spool/mail/root

5. Load balancing

 

5.1 Edit the configuration file

 

[root@node1 keepalived]# vim keepalived.conf
##### load balancing section #################
virtual_server 172.16.6.100 80 {       # specify the VIP and port
    delay_loop 6                       # seconds between health-check runs
    lb_algo rr                         # load-balancing algorithm
    lb_kind DR                         # forwarding type
    nat_mask 255.255.0.0               # netmask
    persistence_timeout 0              # persistent-connection timeout
    protocol TCP                       # protocol
    real_server 172.16.6.11 80 {       # define the first back-end realserver
        weight 1
        HTTP_GET {                     # define the health-check method
            url {                      # the URL to probe
              path /
              status_code 200          # the status code expected in the result
            }
            connect_timeout 3          # connection timeout
            nb_get_retry 3             # number of retries
            delay_before_retry 3       # wait before each retry
        }
    }
    real_server 172.16.6.12 80 {       # define the second back-end realserver
        weight 1
        HTTP_GET {
            url {
              path /
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
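If the back end were not an HTTP service, keepalived could probe the port alone instead of fetching a URL; a sketch of the first realserver with a plain TCP check in place of HTTP_GET:

    real_server 172.16.6.11 80 {
        weight 1
        TCP_CHECK {                    # generic TCP connect check
            connect_timeout 3
            connect_port 80
        }
    }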

5.2 Make the same changes on the slave node

 

5.3 Restart the service and use the ipvsadm command to check whether the rules were generated

 

[root@node1 keepalived]# service keepalived restart
[root@node1 keepalived]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.16.6.100:80 rr
[root@node1 keepalived]#

But why are the two realservers we defined not shown? Because the realserver virtual machines have not been started, the health check fails and they are not listed. Go start the virtual machines and the web service on them.

 

Then execute the following commands on each realserver to set up the LVS DR model:

 

#ifconfig lo:0 172.16.6.100 broadcast 172.16.6.100 netmask 255.255.255.255 up   # in DR mode each realserver holds the VIP on a loopback alias
#route add -host 172.16.6.100 dev lo:0
#echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
#echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
#echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
#echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
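These settings do not survive a reboot. A minimal init-style wrapper (my own sketch, assuming the VIP 172.16.6.100; the name lvs-dr-rs.sh is hypothetical) that can be dropped onto each realserver:

#!/bin/bash
# lvs-dr-rs.sh - bring the realserver in or out of LVS DR mode (sketch)
vip=172.16.6.100
case "$1" in
start)
    # hide the VIP from ARP before configuring it
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
    route add -host $vip dev lo:0
    ;;
stop)
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage: $(basename $0) {start|stop}"
    exit 1
    ;;
esac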

Note:

 

1. More realservers can be added to the back end; you just need to add the matching real_server entries to the configuration file on the master node (and the slave), and run the commands above on each new realserver.

 

2. Although keepalived can add the ipvs rules by itself and load-balance, it cannot separate static from dynamic content, so in a production environment choose the best solution for the scenario at hand.

 

6. High availability for nginx

 

6.1 Preconditions

 

Install the nginx service on both nodes, and make sure httpd is not enabled:

 

# netstat -tunlp   # make sure port 80 is not occupied

 

# service nginx start

 

6.2 Edit the nginx page on each node, to make the effect easier to see:

 

[root@node1 ~]# vim /usr/share/nginx/html/index.html   # node 1
172.16.6.10
[root@node2 ~]# vim /usr/share/nginx/html/index.html   # node 2
172.16.6.1

6.3 Make sure nginx can be accessed normally

 

(screenshot: browser showing the node's test page)
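From any client, a quick check through the VIP (assuming the VIP currently sits on node1, the response should be node1's page):

# curl http://172.16.6.100
172.16.6.10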

 

6.4 Have keepalived start and stop nginx

 

[root@node1 keepalived]# vim notify.sh   # modify the script so it can monitor the nginx service and start or stop it
##################
case "$1" in
    master)
        notify master
        /etc/rc.d/init.d/nginx start
        exit 0
    ;;
    backup)
        notify backup
        /etc/rc.d/init.d/nginx stop
        exit 0
    ;;
    fault)
        notify fault
        /etc/rc.d/init.d/nginx stop
        exit 0
    ;;
######################################
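A variation worth considering (my own sketch, not from the original post): instead of starting and stopping nginx from the notify script, run nginx on both nodes and watch it with a vrrp_script, so that a crashed nginx also lowers the priority and triggers failover:

vrrp_script chk_nginx {            # hypothetical name
    script "killall -0 nginx"      # exits 0 while an nginx process exists
    interval 1
    weight -5                      # drop the priority if nginx dies
}
...
    track_script {
        chk_nginx
    }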

6.5 Synchronize the script to node 2

 

[root@node1 keepalived]# scp notify.sh 172.16.6.1:/etc/keepalived/

6.6 Test on the primary node

 

[root@node1 keepalived]# touch down
[root@node1 keepalived]# ss -tunl   # port 80 is no longer listening
[root@node1 keepalived]# rm -f down
[root@node1 keepalived]# ss -tunl   # port 80 is listening again