[DNS polling]---[Nginx+keepalived high availability]---[application layer{apache(mycloud),tomcat(monitor)}]---[storage layer(mysql)+master-slave replication]--- NFS share

Directory
1. Project description
  1.1 Logical topology diagram
  1.2 Project description
2. Project deployment

  1. Database deployment
    1.1 Deployment environment
    1.2 Install the database
    1.3 Master-slave replication
    1.4 Atlas load balancing
  2. apache+php deployment
    2.1 Install httpd (the operation on the two machines is the same)
    2.2 Install php
    2.3 Configure apache to parse php
    2.4 Deploy the Discuz forum
  3. Tomcat deployment (the operation on the two machines is the same)
    3.1 JDK environment installation
    3.2 Tomcat installation
    3.3 Monitoring application deployment
  4. Nginx deployment (the operation on the two machines is the same)
    4.1 Install nginx
    4.2 Install monitor
    4.3 Configure the apache and tomcat reverse proxies (the two machines are the same)
  5. Keepalived provides high availability for the two nginx servers
    5.1 Configure the four application servers (the operation is the same)
    5.2 Keepalived deployment (master)
    5.3 Keepalived deployment (backup)
    5.4 A script that ties nginx and keepalived together
  6. DNS polling of nginx to achieve load balancing
    6.1 DNS installation
    6.2 Modify the configuration files

1. Project description
1.1 Logical topology diagram

(topology diagram image not reproduced)

DNS round-robin: both nginx servers are registered under the same domain name, so DNS polling spreads requests across them.
In the nginx configuration file, upstream blocks provide load balancing and location matching separates dynamic from static requests; the two nginx servers run keepalived for high availability and also host the monitor service.
The storage backend is a MySQL database with master-slave replication, plus a separate server for read/write splitting (mycat, Atlas, ...).
The Discuz forum service is installed on the two apache machines, which connect to the backend database.
The two tomcat machines run the wgcloud monitoring service and connect to the backend database.
One server provides NFS: four exported directories are created and mounted by the application-layer services.
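The NFS layout is not detailed in the article; a minimal sketch, with illustrative directory names (not from the original), could look like this:

```
# On the NFS server (192.168.9.17): export four directories
# (the directory names below are illustrative)
# /etc/exports:
/data/apache1 192.168.9.0/24(rw,sync,no_root_squash)
/data/apache2 192.168.9.0/24(rw,sync,no_root_squash)
/data/tomcat1 192.168.9.0/24(rw,sync,no_root_squash)
/data/tomcat2 192.168.9.0/24(rw,sync,no_root_squash)

# apply and verify the exports
exportfs -rv
showmount -e 127.0.0.1

# on an application server, e.g. apache-php1:
mount -t nfs 192.168.9.17:/data/apache1 /usr/local/httpd/htdocs
```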

1.2 Project description
1 192.168.9.8 DNS+nginx
2 192.168.9.9 DNS+nginx
3 192.168.9.10 apache+php
4 192.168.9.11 apache+php
5 192.168.9.12 tomcat
6 192.168.9.13 tomcat
7 192.168.9.14 mysql-master
8 192.168.9.15 mysql-slave
9 192.168.9.16 Atlas
10 192.168.9.17 NFS

2. Project deployment
1. Database deployment
1.1 Deployment environment
Master: MySQL 192.168.9.14 (mysql-m)
Slave: MySQL 192.168.9.15 (mysql-s)
1.2 Install the database
The installation steps below are the same on the master and the slave.

[root@mysql-m tools]# tar xf mysql-5.7.22-linux-glibc2.12-x86_64.tar.gz  -C /usr/src/
[root@mysql-m tools]# echo $?
0
[root@mysql-m tools]# ln -s /usr/src/mysql-5.7.22-linux-glibc2.12-x86_64/ /usr/local/mysql
[root@mysql-m tools]# echo "export PATH=$PATH:/usr/local/mysql/bin/">> /etc/profile
[root@mysql-m tools]# source /etc/profile
[root@mysql-m tools]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/mysql/bin/
[root@mysql-m tools]# useradd -M -s /sbin/nologin mysql
[root@mysql-m tools]# mysqld --user=mysql --initialize --datadir=/usr/local/mysql/data

The initialization output ends with a temporary root password; note it down for the first login.

[root@mysql-m tools]# vim /etc/my.cnf

Note: the server-id values on the master and slave must be different.

[client]
port = 3306
socket = /tmp/mysql.sock
 
[mysqld]
server-id = 1
port = 3306
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
[root@mysql-m tools]# /etc/init.d/mysqld start
[root@mysql-m tools]# mysql -uroot -p"q8jM>?dkuURo"
mysql> set password=password('123123');
mysql> exit

1.3 master-slave replication
Configure the master (mysql-m): add the binary log setting to its configuration file.

[root@mysql-m ~]# vim /etc/my.cnf
[client]
port = 3306
socket = /tmp/mysql.sock
 
[mysqld]
server-id = 1
port = 3306
log-bin=/usr/local/mysql/data/bin-log
basedir = /usr/local/mysql
datadir = /usr/local/mysql/data
[root@mysql-m ~]# /etc/init.d/mysqld restart
[root@mysql-m ~]# mysql -uroot -p123123
mysql> grant replication slave on *.* to 'repl'@'192.168.9.15' identified by '123123';
mysql> show master status\G

(output of `show master status`, showing the binlog file name and position used on the slave below)

Configure the slave (mysql-s):

[root@mysql-s tools]# mysql -uroot -p123123
mysql> change master to master_host='192.168.9.14',
    -> master_user='repl',
    -> master_password='123123',
    -> master_log_file='bin-log.000001',
    -> master_log_pos=154;
mysql> start slave;
mysql> show slave status\G   # if you do not see two 'Yes' values, re-run the query a few times


At this point, the master-slave replication is complete.
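A quick way to confirm replication is to create a test database on the master and check that it appears on the slave (the database name here is arbitrary):

```
[root@mysql-m ~]# mysql -uroot -p123123 -e "create database repl_test;"
[root@mysql-s ~]# mysql -uroot -p123123 -e "show databases like 'repl_test';"
```

If repl_test shows up on the slave, replication is working; drop it afterwards.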

1.4 Atlas load balancing implementation

[root@mysql-m tools]# rpm -ivh Atlas-2.2.1.el6.x86_64.rpm

![Insert image description here](https://img-blog.csdnimg.cn/20210201173002356.png)

[root@mysql-m ~]# cd /usr/local/mysql-proxy/bin/
[root@mysql-m bin]# ./encrypt 123123
++gAN07C/Q0=
This encrypted password will be used in the configuration file.

[root@mysql-m ~]# cd /usr/local/mysql-proxy/conf/
[root@mysql-m conf]# ls
test.cnf
[root@mysql-m conf]# vim test.cnf 
[mysql-proxy]

#Items starting with # are optional

#Username for the management interface
admin-username = user

#Password for the management interface
admin-password = pwd

#IP and port of the MySQL master that Atlas connects to; multiple entries may be set, separated by commas
proxy-backend-addresses = 192.168.9.14:3306

#IP and port of the MySQL slaves that Atlas connects to; the number after @ is a weight for load balancing (default 1 if omitted); multiple entries may be set, separated by commas
proxy-read-only-backend-addresses = 192.168.9.15:3306

#Usernames and their encrypted MySQL passwords; encrypt the password with the encrypt program under PREFIX/bin. Replace the example with your own MySQL username and encrypted password!
pwds = repl:++gAN07C/Q0=

#Run mode of Atlas: true runs it as a daemon, false in the foreground. Use false for development/debugging and true in production. No trailing space is allowed after true.
daemon = true

#When true, Atlas starts two processes: a monitor and a worker; the monitor automatically restarts the worker if it exits unexpectedly. When false, only the worker runs. Use false for development/debugging and true in production. No trailing space is allowed after true.
keepalive = true

#Number of worker threads; has a large impact on Atlas performance, set according to your situation
event-threads = 8

#Log level: one of message, warning, critical, error, debug
log-level = message

#Path where logs are stored
log-path = /usr/local/mysql-proxy/log

#SQL log switch: OFF, ON, or REALTIME. OFF disables SQL logging, ON enables it, REALTIME enables it and flushes to disk in real time. Default is OFF
sql-log = ON

#Slow-log threshold. When set, only statements taking longer than sql-log-slow (in ms) are logged; if unset, all statements are logged.
#sql-log-slow = 10

#Instance name, used to distinguish multiple Atlas instances on the same machine
#instance = test

#IP and port of the Atlas work interface
proxy-address = 0.0.0.0:1234

#IP and port of the Atlas management interface
admin-address = 0.0.0.0:2345

#Sharding settings. In this example person is the database, mt the table, id the sharding column, and 3 the number of child tables. Multiple entries may be set, separated by commas; omit if not sharding
#tables = person.mt.id.3

#Default character set; with this set, clients no longer need to issue SET NAMES
charset = utf8

#IPs of clients allowed to connect to Atlas: exact IPs or IP prefixes, separated by commas. If unset, all IPs may connect; otherwise only listed IPs may connect
#client-ips = 127.0.0.1, 192.168.1

#IP of the physical NIC of the LVS in front of Atlas (note: not the virtual IP). Required if there is an LVS and client-ips is set; otherwise optional
#lvs-ips = 192.168.1.1
[root@atlas bin]# cd /usr/local/mysql-proxy/
[root@atlas mysql-proxy]# ./bin/mysql-proxyd start
[root@atlas mysql-proxy]# mysql -uuser -ppwd -h 127.0.0.1 -P2345
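From the admin connection above you can check the state of the backends; `SELECT * FROM backends` is Atlas's admin query for this. Client connections go to the work port (1234) instead, using a user from `pwds`:

```
mysql> SELECT * FROM backends;
mysql> exit
[root@atlas mysql-proxy]# mysql -urepl -p123123 -h 127.0.0.1 -P1234
```

Both 192.168.9.14 (rw) and 192.168.9.15 (ro) should report state up.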

2. apache+php deployment
2.1 Install httpd (the operation on the two machines is the same)

yum install gcc gcc-c++ make pcre-devel expat-devel perl wget vim -y

[root@apache-php1 tools]# tar xf apr-1.6.5.tar.gz -C /usr/src/
[root@apache-php1 tools]# tar xf apr-util-1.6.1.tar.gz -C /usr/src/
[root@apache-php1 tools]# tar xf httpd-2.4.38.tar.gz -C /usr/src/
[root@apache-php1 tools]# cd /usr/src/
[root@apache-php1 src]# ls
[root@apache-php1 src]# mv  apr-1.6.5/   httpd-2.4.38/srclib/apr
[root@apache-php1 src]# mv  apr-util-1.6.1/   httpd-2.4.38/srclib/apr-util
[root@apache-php1 src]# cd httpd-2.4.38/
[root@apache-php1 httpd-2.4.38]# ./configure --prefix=/usr/local/httpd --enable-charset-lite --enable-rewrite --enable-cgi --enable-so && make && make install
[root@apache-php1 tools]# useradd -M -s /sbin/nologin apache
[root@apache-php1 tools]# chown -R apache.apache /usr/local/httpd/
[root@apache-php1 tools]# sed -i 's/User daemon/User apache/' /usr/local/httpd/conf/httpd.conf
[root@apache-php1 tools]# sed -i 's/Group daemon/Group apache/' /usr/local/httpd/conf/httpd.conf
Uncomment ServerName
[root@apache-php1 tools]# sed -i '/#ServerName/ s/#//' /usr/local/httpd/conf/httpd.conf
Add the httpd binaries to the system PATH
[root@apache-php1 tools]# echo 'PATH=$PATH:/usr/local/httpd/bin/' >> /etc/profile
[root@apache-php1 tools]# . /etc/profile
Create an init script and enable start on boot
[root@apache-php1 tools]# cp /usr/local/httpd/bin/apachectl /etc/init.d/httpd
[root@apache-php1 tools]# sed -i '1a# chkconfig: 35 85 15' /etc/init.d/httpd
[root@apache-php1 tools]# head -2 /etc/init.d/httpd
#!/bin/sh
# chkconfig: 35 85 15
[root@apache-php1 tools]# chkconfig --add httpd
[root@apache-php1 tools]# chmod +x /etc/rc.d/rc.local
[root@apache-php1 tools]# echo "/etc/init.d/httpd start" >> /etc/rc.d/rc.local
Start the service and test
[root@apache-php1 tools]# /etc/init.d/httpd start
[root@apache-php1 tools]# netstat -anpt | grep :80
tcp6       0      0 :::80                   :::*                    LISTEN      64521/httpd  
[root@apache-php1 tools]# curl 192.168.9.10
<html><body><h1>It works!</h1></body></html>

2.2 Install php

[root@apache-php1 tools]# yum -y install libjpeg-devel libpng-devel freetype-devel libxml2-devel zlib-devel curl-devel libicu-devel openssl-devel
[root@apache-php1 tools]# tar xf php-7.3.2.tar.gz -C /usr/src/
[root@apache-php1 tools]# cd /usr/src/php-7.3.2/
[root@apache-php1 php-7.3.2]# ./configure --prefix=/usr/local/php --with-apxs2=/usr/local/httpd/bin/apxs --with-mysql-sock=/tmp/mysql.sock --with-pdo-mysql --with-mysqli --with-zlib --with-curl --with-gd --with-jpeg-dir --with-png-dir --with-freetype-dir --with-openssl --enable-mbstring --enable-xml --enable-session --enable-ftp --enable-pdo --enable-tokenizer --enable-intl && make && make install
Create php.ini and point it at the MySQL socket
[root@apache-php1 php-7.3.2]# cp php.ini-development /usr/local/php/lib/php.ini
[root@apache-php1 php-7.3.2]# vim /usr/local/php/lib/php.ini
1175 mysqli.default_socket = /tmp/mysql.sock

2.3 configure apache to parse php

[root@apache-php1 php-7.3.2]# vim /usr/local/httpd/conf/httpd.conf
……
258 <IfModule dir_module>
259     DirectoryIndex index.html index.php
260 </IfModule>
……
395     AddType application/x-compress .Z
396     AddType application/x-gzip .gz .tgz
397     AddType application/x-httpd-php .php
398     AddType application/x-httpd-php-source .phps
……
[root@apache-php1 php-7.3.2]# httpd -t
[root@apache-php1 php-7.3.2]# /etc/init.d/httpd stop
[root@apache-php1 php-7.3.2]# /etc/init.d/httpd start
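To verify that apache now hands .php files to PHP, drop a test file into the document root (the file name is arbitrary) and request it:

```
[root@apache-php1 php-7.3.2]# echo '<?php phpinfo(); ?>' > /usr/local/httpd/htdocs/info.php
[root@apache-php1 php-7.3.2]# curl -s http://127.0.0.1/info.php | head -n 1
```

If PHP is being parsed, curl returns rendered HTML rather than the raw PHP source.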

2.4 Deploy Discuz Forum

[root@apache-php2 ~]# yum -y install unzip
[root@apache-php2 ~]# unzip Discuz_X3.3_SC_UTF8.zip
[root@apache-php2 ~]# mv upload/* /usr/local/httpd/htdocs/
[root@apache-php2 ~]# chown -R apache:apache /usr/local/httpd/htdocs
[root@apache-php1 tools]# chmod -R 777 /usr/local/httpd/htdocs/


Log in to the database

[root@mysql-m ~]# /etc/init.d/mysqld start
[root@mysql-m ~]# mysql -uroot -p123123
mysql>  create database discuz;
mysql> grant all on *.* to 'discuzer'@'192.168.9.%' identified by '123123';


3. Tomcat deployment (the operation on the two machines is the same)
3.1 JDK environment installation

[root@tomcat1 tools]# tar xf jdk-8u60-linux-x64.tar.gz
[root@tomcat1 tools]# mv jdk1.8.0_60/ /usr/local/java8
[root@tomcat1 tools]# vim /etc/profile.d/java.sh
[root@tomcat1 tools]# source /etc/profile
[root@tomcat1 tools]# java -version
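The contents of java.sh are not shown in the article; a typical version, assuming the JDK path used above, would be:

```shell
# /etc/profile.d/java.sh -- assumes the JDK was moved to /usr/local/java8 as above
export JAVA_HOME=/usr/local/java8
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
```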


3.2 Tomcat installation

[root@tomcat2 tools]# tar xf  apache-tomcat-8.5.38.tar.gz
[root@tomcat2 tools]# mv apache-tomcat-8.5.38 /usr/local/tomcat
[root@tomcat2 tools]# /usr/local/tomcat/bin/startup.sh


3.3 Monitoring application deployment
Download wgcloud from https://www.wgstart.com/docs13.html and unzip the package; the wgcloud.sql file is inside the archive. Copy it to the MySQL server and import it:

[root@mysql-m ~]# mv wgcloud.sql /home/
[root@mysql-m ~]# mysql -uroot -p123123
mysql> create database wgcloud;
mysql> use wgcloud;
mysql> set names utf8;
mysql> source /home/wgcloud.sql;   # the file ships in the archive

[root@tomcat1 tools]# yum -y install unzip
[root@tomcat1 tools]# unzip wgcloud-master.zip 
[root@tomcat1 tools]# rm -rf /usr/local/tomcat/webapps/ROOT/*
[root@tomcat1 tools]# mv tools/wgcloud/* /usr/local/tomcat/webapps/ROOT/
[root@tomcat1 tools]# cd /usr/local/tomcat/webapps/ROOT
[root@tomcat1 tools]# ls
[root@tomcat1 ROOT]# cd wgcloud-server/
[root@tomcat1 wgcloud-server]# cd resources/
[root@tomcat1 resources]# vim application.yml 


[root@tomcat1 resources]# cd ..
[root@tomcat1 wgcloud-server]# cd ..
[root@tomcat1 ROOT]# cd wgcloud-agent/
[root@tomcat1 wgcloud-agent]# ls
[root@tomcat1 wgcloud-agent]# cd resources/
[root@tomcat1 resources]# ls
application.yml  logback-spring.xml
[root@tomcat1 resources]# vim application.yml 

![Insert image description here](https://img-blog.csdnimg.cn/20210201173225687.png)

![Insert image description here](https://img-blog.csdnimg.cn/20210201173232802.png)

![Insert image description here](https://img-blog.csdnimg.cn/20210201173251381.png)

[root@tomcat1 webapps]# cd ROOT/
[root@tomcat1 ROOT]# ls
[root@tomcat1 agent]# bash start.sh 
[root@tomcat1 server]# bash start.sh 

If you can't access port 9999 directly, visit this URL: http://192.168.9.12:9999/wgcloud/login/toLogin

4. Nginx deployment (the operation on the two machines is the same)
4.1 Install nginx

[root@nginx1~]# yum -y install pcre-devel zlib-devel openssl-devel
[root@nginx ~]# useradd -M -s /sbin/nologin nginx
[root@nginx ~]# tail -1 /etc/passwd;tail -1 /etc/group
[root@nginx ~]# cd tools/
[root@nginx tools]# tar xf nginx-1.6.0.tar.gz -C /usr/src/
[root@nginx tools]# cd /usr/src/nginx-1.6.0/
[root@nginx nginx-1.6.0]# ./configure --prefix=/usr/local/nginx  --user=nginx --group=nginx --with-file-aio  --with-http_mp4_module  --with-http_ssl_module && make && make install
[root@nginx1 ~]# ln -s /usr/local/nginx/sbin/* /usr/local/sbin/
[root@nginx1 ~]# nginx -t
[root@nginx1 ~]# ss -atp|grep nginx
[root@nginx1 ~]# curl -I http://192.168.9.8

4.2 install monitor

[root@nginx1]# yum -y install unzip
[root@nginx1]# unzip monitor-master.zip 
[root@nginx1 ~]# rm -rf /usr/local/nginx/html/*
[root@nginx1 ~]# mv tools/monitor/* /usr/local/nginx/html/

Visit test:

4.3 Configure the apache and tomcat reverse proxies (the two machines are the same)

[root@nginx1 ]# vim /usr/local/nginx/conf/nginx.conf
user  nginx;
worker_processes  4;
error_log  logs/error.log  info;
pid        logs/nginx.pid;
events {
	use epoll;
    worker_connections  1024;
}
http {
    include       mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    access_log  logs/access.log  main;
    sendfile        on;
    keepalive_timeout  65;
    gzip  on;

    upstream apache {
        server 192.168.9.10:80;
        server 192.168.9.11:80;
    }
    upstream tomcat {
        server 192.168.9.12:9999;
        server 192.168.9.13:9999;
    }
    server {
        listen       80;
        server_name  localhost;
        charset koi8-r;
        access_log  logs/host.access.log  main;
        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
        location ~ \.php$ {
            proxy_pass  http://apache;
			proxy_set_header Host $host;
			proxy_set_header X-forwarded-for $proxy_add_x_forwarded_for;
			proxy_set_header X-real-ip $remote_addr;
       }

        location /data/ {
            proxy_pass  http://apache;
			proxy_set_header Host $host;
			proxy_set_header X-forwarded-for $proxy_add_x_forwarded_for;
			proxy_set_header X-real-ip $remote_addr;
       }
        location /static/ {
            proxy_pass  http://apache;
			proxy_set_header Host $host;
			proxy_set_header X-forwarded-for $proxy_add_x_forwarded_for;
			proxy_set_header X-real-ip $remote_addr;
       }
		location ~ /toLogin$ {
	        proxy_pass  http://tomcat;
			proxy_set_header Host $host;
			proxy_set_header X-forwarded-for $proxy_add_x_forwarded_for;
			proxy_set_header X-real-ip $remote_addr;
		}
		location  ~ .*\.(js|css|jpg|png)$ {
	        proxy_pass  http://tomcat;
			proxy_set_header Host $host;
			proxy_set_header X-forwarded-for $proxy_add_x_forwarded_for;
			proxy_set_header X-real-ip $remote_addr;
		}
    }
}

[root@nginx1 conf]# nginx -t
[root@nginx1 conf]# nginx -s reload
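To watch the round-robin in action, hit the proxy repeatedly; with the default rr algorithm, successive .php requests should alternate between 192.168.9.10 and 192.168.9.11 (confirm by tailing each backend's access log):

```
[root@nginx1 conf]# for i in 1 2 3 4; do curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/index.php; done
[root@apache-php1 ~]# tail -f /usr/local/httpd/logs/access_log
```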

Access the monitor page now served by nginx:

Visit 192.168.9.8/index.php and 192.168.9.9/index.php

Visit http://192.168.9.8/wgcloud/login/toLogin

5. Keepalived provides high availability for the two nginx servers

5.1 Configure four application servers (the operation is the same)

[root@httpd-php1 ~]# vim /opt/lvs-dr
#!/bin/bash 
# lvs-dr 
VIP="192.168.9.66"
/sbin/ifconfig lo:vip $VIP broadcast $VIP netmask 255.255.255.255
/sbin/route add -host $VIP dev lo:vip
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

[root@nginx1 ~]# chmod +x /opt/lvs-dr 
[root@apache-php ~]# /opt/lvs-dr 
[root@apache-php ~]# echo "/opt/lvs-dr" >> /etc/rc.local 
[root@apache-php ~]# ip a

[root@apache-php ~]# route -n

[root@apache-php ~]# scp /opt/lvs-dr  192.168.9.11:/opt/      //12,13,14也是一样
[root@nginx2 ~]# chmod +x /opt/lvs-dr
[root@nginx2 ~]# /opt/lvs-dr
[root@nginx2 ~]# ip a
[root@nginx2 ~]# route -n
[root@nginx2 ~]# echo "/opt/lvs-dr" >> /etc/rc.local


5.2 Keepalived deployment (master)

[root@nginx1 ~]# modprobe ip_vs
[root@nginx1 ~]# cat /proc/net/ip_vs
[root@nginx1 ~]# yum -y install keepalived ipvsadm
[root@nginx1 ~]# cd /etc/keepalived/
[root@nginx1 keepalived]# cp keepalived.conf{,.ori}
[root@nginx1 keepalived]# vim keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_MASTER
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.9.66
    }
}
virtual_server 192.168.9.66 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.9.10 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
			connect_port 80
        }
    }
    real_server 192.168.9.11 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
			connect_port 80
        }
    }
    real_server 192.168.9.12 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
			connect_port 80
        }
    }
    real_server 192.168.9.13 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
			connect_port 80
        }
    }
}

Correcting the error above: here keepalived only provides high availability for the two nginx servers, which act as master and backup for each other. There is no LVS load balancing, so the virtual_server/real_server blocks are not needed. On the second nginx host, use a different virtual_router_id, swap the states (the instance that is MASTER on one host is BACKUP on the other), and invert the priorities accordingly.

! Configuration File for keepalived

global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_MASTER
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.9.66
    }
}
vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 52
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.9.88
    }
}

[root@nginx1 keepalived]# systemctl start keepalived
[root@nginx1 keepalived]# ipvsadm -ln

[root@nginx1 keepalived]# ip a


5.3 Keepalived deployment (backup)

[root@nginx2 ~]# yum -y install ipvsadm keepalived
[root@nginx2 ~]# cp /etc/keepalived/keepalived.conf{,.ori}
[root@nginx2 ~]# scp 192.168.9.8:/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf
[root@nginx2 ~]# vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     [email protected]
   }
   notification_email_from [email protected]
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_SLAVE
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.9.66
    }
}
virtual_server 192.168.9.66 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 50
    protocol TCP
    real_server 192.168.9.8 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
			connect_port 80
        }
    }
    real_server 192.168.9.9 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
			connect_port 80
        }
    }
    real_server 192.168.9.12 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
			connect_port 80
        }
    }
    real_server 192.168.9.13 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
			connect_port 80
        }
    }
}

[root@nginx2 ~]# systemctl start keepalived
[root@nginx2 ~]# ip a
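A simple failover test: stop keepalived on nginx1 and confirm the VIP moves to nginx2, then start it again and confirm the VIP returns (the master has the higher priority, so it preempts):

```
[root@nginx1 ~]# systemctl stop keepalived
[root@nginx2 ~]# ip a show ens33 | grep 192.168.9.66
[root@nginx1 ~]# systemctl start keepalived
[root@nginx1 ~]# ip a show ens33 | grep 192.168.9.66
```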

5.4 A script that ties nginx and keepalived together
Note that keepalived does not stop itself when nginx goes down, so the VIP never fails over and high availability is lost. The script below restarts nginx if it dies, and stops keepalived if nginx cannot be restarted so that the backup node takes over:

#!/bin/bash
# If nginx dies, try to restart it; if it cannot be restarted,
# stop keepalived so the VIP fails over to the other node.
# Otherwise make sure keepalived itself is running.
while :; do
	if ! pidof nginx &> /dev/null; then
		/usr/local/nginx/sbin/nginx &> /dev/null
		if ! pidof nginx &> /dev/null; then
			/usr/bin/systemctl stop keepalived
		else
			pidof keepalived &> /dev/null || /usr/bin/systemctl start keepalived
		fi
	fi
	sleep 3
done
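To keep this watchdog running, save it to a file (the path below is illustrative), make it executable, and launch it in the background on both nginx machines:

```
[root@nginx1 ~]# chmod +x /opt/nginx_watchdog.sh
[root@nginx1 ~]# nohup /opt/nginx_watchdog.sh &> /dev/null &
[root@nginx1 ~]# echo "nohup /opt/nginx_watchdog.sh &> /dev/null &" >> /etc/rc.local
```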


6. DNS polling of nginx to achieve load balancing

6.1 DNS installation

[root@nginx1 ~]# yum  -y install bind bind-utils bind-chroot bind-libs
[root@nginx1 named]# rpm -qa | grep bind


6.2 Modify the configuration files
On each DNS server, set the preferred DNS server to its own IP address:

vim /etc/resolv.conf

[root@ng1 ~]# vim /etc/named.conf 

[root@dns ~]# named-checkconf /etc/named.conf
[root@ng1 ~]# cd /var/named
[root@ng1 named]# cp -p named.empty yun220.com.zone
[root@ng1 named]# vim yun220.com.zone
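The zone file contents are only shown as a screenshot; a minimal round-robin zone, assuming the record name www (illustrative), could look like this. Two A records for the same name are what make BIND rotate the answers:

```
$TTL 3H
@       IN SOA  yun220.com. rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      @
        A       192.168.9.8
www     A       192.168.9.8
www     A       192.168.9.9
```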


[root@dns named]# named-checkzone yun220.com yun220.com.zone 
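After restarting named, repeated queries should return both addresses, with the order rotating between responses (this assumes the zone defines a www name with A records for 192.168.9.8 and 192.168.9.9):

```
[root@dns named]# systemctl restart named
[root@dns named]# dig +short www.yun220.com @127.0.0.1
192.168.9.8
192.168.9.9
```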


Origin blog.csdn.net/qq_39109226/article/details/113524916