Background: why this article
The domain for our project points to 172.22.90.239, yet no machine has that address: it is a virtual IP (VIP). The only real machine is 172.22.90.230. Why does a request to .239 actually land on the .230 machine?
We asked the ops team, but nobody remembered; a former ops engineer set it up years ago, and all that survived was a pair of keywords, keepalived and VIP. So we searched the web and learned.
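Before the steps, the short answer to the question above: a VIP is just an additional IP address that keepalived attaches to the active machine's NIC and moves to the standby when the active machine fails. A minimal sketch (the interface name eth0 is a hypothetical; adding an address needs root, so only the read-only command is live here):

```shell
# A VIP is an extra address on the active node's NIC, managed by keepalived.
# Manually, holding the VIP would amount to (root required, hypothetical NIC):
#   ip addr add 172.22.90.239/32 dev eth0
# That is why requests to .239 land on the .230 machine: .230's NIC also
# answers for .239. List every address the interfaces currently hold:
ip addr show
```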
Hands-on setup
Two physical machines are prepared: 172.22.90.171 and 172.22.90.170.
1. Install keepalived (on both machines):
yum install keepalived
2. Modify the configuration: delete the stock /etc/keepalived/keepalived.conf and create a new one:
vi /etc/keepalived/keepalived.conf
172.22.90.171 configuration:
global_defs {
    notification_email {
        [email protected]                    # failure notification recipient
    }
    notification_email_from [email protected]   # failure notification sender
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_MASTER                  # router ID; the backup node uses LVS_BACKUP
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"   # nginx health-check script path
    interval 1                            # check interval, in seconds
    weight -2                             # if the check fails, lower this node's priority by 2
}
vrrp_instance VI_1 {
    state MASTER                          # node role: MASTER (active) or BACKUP (standby)
    interface bond0                       # NIC the VIP is bound to
    virtual_router_id 74                  # virtual router ID; must match on master and backup
    mcast_src_ip 172.22.90.171            # this machine's IP
    nopreempt                             # set on the higher-priority node so it does not preempt again after recovering
    priority 100                          # priority, range 0-254; MASTER > BACKUP
    advert_int 1                          # VRRP advertisement interval; must match on master and backup (default 1)
    authentication {                      # authentication; must match on master and backup
        auth_type PASS                    # VRRP auth type: PASS or AH
        auth_pass 1111                    # VRRP auth password; master and backup in the same vrrp_instance must use the same one
    }
    track_script {                        # attach the tracking script to this instance
        chk_nginx                         # run the nginx health check
    }
    virtual_ipaddress {                   # VIP pool; must match on master and backup; multiple VIPs may be defined
        172.22.90.237                     # the virtual IP
    }
}
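Both configurations reference /etc/keepalived/nginx_check.sh, but the original notes never show that script. A common minimal implementation (an assumption on my part, not necessarily what the ops team used) tries to restart nginx once and, if it stays down, stops keepalived so the VIP fails over:

```shell
#!/bin/bash
# Hypothetical /etc/keepalived/nginx_check.sh; the config references this
# path but the article never shows its contents.
count_nginx() {
    ps -C nginx --no-headers | wc -l   # number of running nginx processes
}

if [ "$(count_nginx)" -eq 0 ]; then
    systemctl start nginx 2>/dev/null  # try one automatic restart
    sleep 2
    if [ "$(count_nginx)" -eq 0 ]; then
        # Force failover by stopping keepalived itself: with weight -2 alone
        # the master would drop only from 100 to 98, still above the backup's 90.
        systemctl stop keepalived 2>/dev/null
    fi
fi
exit 0
```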
172.22.90.170 configuration:
global_defs {
    notification_email {
        [email protected]                    # failure notification recipient
    }
    notification_email_from [email protected]   # failure notification sender
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id LVS_BACKUP                  # router ID for the backup node
}
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"   # nginx health-check script path
    interval 1                            # check interval, in seconds
    weight -2                             # if the check fails, lower this node's priority by 2
}
vrrp_instance VI_1 {
    state BACKUP                          # node role: MASTER (active) or BACKUP (standby)
    interface bond0                       # NIC the VIP is bound to; check the NIC in use with ifconfig
    virtual_router_id 74                  # virtual router ID; must match on master and backup (identifies the cluster)
    mcast_src_ip 172.22.90.170            # this machine's IP
    nopreempt                             # set on the higher-priority node so it does not preempt again after recovering
    priority 90                           # priority, range 0-254; MASTER > BACKUP; if both are BACKUP, the higher value wins
    advert_int 1                          # VRRP advertisement interval; must match on master and backup (default 1)
    authentication {                      # authentication; must match on master and backup
        auth_type PASS                    # VRRP auth type: PASS or AH
        auth_pass 1111                    # VRRP auth password; master and backup in the same vrrp_instance must use the same one
    }
    track_script {                        # attach the tracking script to this instance
        chk_nginx                         # run the nginx health check
    }
    virtual_ipaddress {                   # VIP pool; must match on master and backup; multiple VIPs may be defined
        172.22.90.237                     # the virtual IP
    }
}
3. After editing the configuration files, restart keepalived on both machines:
systemctl restart keepalived
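After the restart, you can check which node actually holds the VIP (172.22.90.237 in this setup). Only the current master should list it; the backup should report nothing:

```shell
# On each node, check whether the VIP is bound to any interface.
# Only the current MASTER should list it (as a secondary address on bond0).
ip addr show | grep "172.22.90.237" || echo "VIP not bound on this node"
```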
4. Finally, test it:
Stop the keepalived service on the master (systemctl stop keepalived), or alternatively stop nginx, and check that the VIP fails over to the backup.
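The failover test above can be sketched as a drill (assuming the VIP 172.22.90.237 from the configs; the fallbacks keep the sketch harmless on a machine where these services are not installed):

```shell
#!/bin/bash
# Step 1, on the master: stop keepalived (or stop nginx, which the health
# check script turns into a keepalived stop) so the VIP is released.
systemctl stop keepalived 2>/dev/null || echo "keepalived not active here"

# Step 2, on the backup: within roughly advert_int seconds the VIP appears.
ip addr show | grep "172.22.90.237" || echo "VIP not bound on this node"

# Step 3: the site should still answer on the VIP, now served by the backup.
curl -sI --max-time 3 http://172.22.90.237/ | head -n 1

# Step 4, when done: bring the master back.
systemctl start keepalived 2>/dev/null || echo "keepalived not installed here"
```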
One final point: if a domain name is bound to a single nginx machine, that machine is an unsafe single point of failure. With keepalived you can bind two machines to one VIP and make the service highly available.
End!