nginx + docker + nfs deployment

1. Architecture

In the Keepalived + Nginx high-availability load-balancing architecture, Keepalived provides the high-availability (HA) function and controls the front-end VIP (virtual IP address). When a device fails, the hot-standby server automatically takes over the VIP; in actual operation the switchover takes only about two seconds. A DNS service can then point front-end traffic at the VIP.
Nginx load-balances the back-end web servers, distributing client requests to the back-end Real Servers according to the configured algorithm and relaying their responses back to the client.
The NFS servers provide shared storage with real-time backup; the web servers provide the web interface.

2. Basic principle

NGINX_MASTER and NGINX_BACKUP both use keepalived to bind a virtual IP (VIP), 192.168.1.40, to the ens33 NIC; whichever server currently holds the VIP carries the service. When NGINX_MASTER fails, NGINX_BACKUP notices the missed heartbeats (the interval is set by advert_int 1 in /etc/keepalived/keepalived.conf); once it can no longer confirm that NGINX_MASTER is healthy, it instantly binds the VIP and takes over nginx's work. When NGINX_MASTER recovers, keepalived compares the priority values and rebinds the VIP 192.168.1.40 to NGINX_MASTER's ens33 NIC.
The advantages of this scheme:
1. The architecture is elastic: when the load increases, web servers can be added to it temporarily.
2. upstream provides load balancing, automatically probes the back-end machines, and automatically removes machines that cannot serve normally.
3. Compared with LVS, rule-based distribution and redirection are more flexible, and Keepalived keeps the single nginx load balancer effective, avoiding a single point of failure.
4. Using nginx for load balancing requires no changes on the back-end machines.
5. nginx runs in a docker container, which saves a lot of development, testing, and deployment time, and the service can be restored quickly from the image after a failure.

3. System environment

Two load-balancing machines (nginx + keepalived), named NGINX_MASTER and NGINX_BACKUP.
Back-end web servers of any architecture that can provide web services, named WEB_1 and WEB_2.
A back-end database machine of any architecture, as long as it can provide database service.

Server        IP address     Installed software
NGINX_MASTER  192.168.1.10   nginx + keepalived
NGINX_BACKUP  192.168.1.20   nginx + keepalived
WEB_1         192.168.1.11   docker + nginx
WEB_2         192.168.1.13   docker + nginx
nfs_MASTER    192.168.1.30   nfs + rsync + inotify
nfs_BACKUP    192.168.1.10   nfs + rsync + inotify

nginx deployment (on both load balancers)

Install nginx

[root@nginx01 ~]# tar zxf nginx-1.14.0.tar.gz 
// unpack the nginx source archive
[root@nginx01 ~]# cd nginx-1.14.0/
[root@nginx01 nginx-1.14.0]# yum -y install openssl-devel pcre-devel zlib-devel
// install the nginx build dependencies
[root@nginx01 nginx-1.14.0]# ./configure --prefix=/usr/local/nginx1.14 --with-http_dav_module --with-http_stub_status_module --with-http_addition_module  --with-http_sub_module --with-http_flv_module --with-http_mp4_module --with-pcre --with-http_ssl_module --with-http_gzip_static_module --user=nginx --group=nginx && make  &&  make install
// configure, compile, and install nginx
[root@nginx01 nginx-1.14.0]# useradd nginx -s /sbin/nologin -M
// create the nginx service user
[root@nginx01 nginx-1.14.0]# ln -s /usr/local/nginx1.14/sbin/nginx /usr/local/sbin/
// symlink the nginx binary into the PATH
[root@nginx01 nginx-1.14.0]# nginx 
// start nginx
[root@nginx01 nginx-1.14.0]# netstat -anpt | grep nginx
// check that nginx is listening
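As a quick sanity check that the freshly built nginx is answering (assuming the default listen port 80):

[root@nginx01 nginx-1.14.0]# curl -I http://localhost
// an HTTP/1.1 200 OK status line confirms nginx is serving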

Configure nginx load balancing

[root@nginx01 ~]# cd /usr/local/nginx1.14/conf/
[root@nginx01 conf]# vim nginx.conf

Add the following to the http block:

upstream backend {
    server 192.168.1.11:90 weight=1 max_fails=2 fail_timeout=10s;
    server 192.168.1.13:90 weight=1 max_fails=2 fail_timeout=10s;
}
    location / {
       # root   html;
       # index  index.html index.htm;
       proxy_pass http://backend;  # forward requests to the upstream group
    }
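After saving the change, the configuration can be checked and reloaded without restarting (a minimal sketch, using the nginx binary symlinked earlier):

[root@nginx01 conf]# nginx -t
// test the configuration syntax
[root@nginx01 conf]# nginx -s reload
// reload the workers with the new configuration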

High availability environment

Install keepalived

[root@nginx02 nginx-1.14.0]# yum -y install keepalived

Configure keepalived

Modify the /etc/keepalived/keepalived.conf configuration file on both the primary and the backup nginx server.

Primary nginx

Modify /etc/keepalived/keepalived.conf on the primary nginx:

! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}   
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }   
    virtual_ipaddress {
        192.168.1.40
    }
}

Backup nginx

Modify /etc/keepalived/keepalived.conf on the backup nginx.

On the backup, set state to BACKUP, set the priority lower than the MASTER's, and keep virtual_router_id identical to the master's:

! Configuration File for keepalived
global_defs {
   router_id TWO
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    priority 99
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.40
    }
}
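Whichever node currently holds the VIP will show it on ens33; a quick check (keepalived adds the VIP as an extra address on the interface):

[root@nginx01 ~]# ip addr show ens33 | grep 192.168.1.40
// the VIP should be listed only on the active node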

Test (after the docker deployment below is complete)

Start keepalived on both the primary and the backup nginx:

systemctl  start  keepalived
[root@nginx01 conf]# curl 192.168.1.40
wsd666

nfs deployment (on both NFS servers)

nfs setup

[root@localhost ~]# yum -y install nfs-utils
// install the NFS service

[root@nfs ~]# mkdir /database
// create the shared directory
[root@nfs02 ~]# chmod 777 /database/
// open up the directory permissions
[root@nfs ~]# vim /etc/exports
// configure the export as follows
/database *(rw,sync,no_root_squash)

Start the services

[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# systemctl enable rpcbind
[root@nfs ~]# systemctl start nfs-server
[root@nfs ~]# systemctl enable nfs-server
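The export can then be confirmed from either machine (assuming the nfs_MASTER address from the table above):

[root@nfs ~]# showmount -e 192.168.1.30
// /database should appear in the export list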

Configure the rsync source server (nfs01)

[root@nfs01 ~]# vim /etc/rsyncd.conf 
// create the rsync configuration file
uid = nobody
gid = nobody
use chroot = yes
address = 192.168.1.30
port = 873
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
hosts allow = 192.168.1.0/24
[wwwroot]
path = /database
read only = no
dont compress = *.gz *.bz2 *.rar *.zip
[root@nfs01 ~]# mkdir /database
// create the shared directory (if it does not exist yet)
[root@nfs01 ~]# rsync --daemon
// start rsync in daemon mode
[root@nfs01 ~]# netstat -anpt | grep rsync
// check that port 873 is listening

If you need to restart the rsync service:

[root@localhost ~]# kill  $(cat /var/run/rsyncd.pid)
// stop the service
[root@localhost ~]# rsync --daemon
// start the service
[root@localhost ~]# kill -9 $(cat /var/run/rsyncd.pid)
// or force-stop the service

Alternatively, use the "netstat -anpt | grep rsync" command to find the process ID and kill that process number; the effect is the same.
When rsync is stopped by killing the process, the file storing the rsync daemon's process ID must be deleted before the service can be started again:

[root@localhost ~]# rm -rf /var/run/rsyncd.pid

Using the rsync backup tool

Once the rsync synchronization source server has been configured, clients can use the rsync tool to perform remote synchronization.

Options of the rsync command:
-r: recursive mode; includes all files in directories and subdirectories
-l: copy symbolic link files as symbolic links
-p: preserve file permission flags
-t: preserve file timestamps
-g: preserve group ownership (super-user only)
-o: preserve owner (super-user only)
-D: preserve device files and other special files
-a: archive mode; recurses and preserves object attributes, equivalent to -rlptgoD
-v: show detailed (verbose) information about the synchronization
-z: compress files during transfer
-H: preserve hard-linked files
-A: preserve ACL attribute information
--delete: delete files that exist at the destination but not at the source
--checksum: decide whether to skip files based on their checksums

rsync is a fast incremental backup tool that supports:
(1) local copying;
(2) synchronization with another host over SSH (a short example follows);
(3) synchronization with an rsync host.
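For example, a push over SSH might look like this (a sketch; the host and paths are placeholders, not part of the original setup):

[root@localhost ~]# rsync -avz /database/ root@192.168.1.30:/database/
// sync the local shared directory to the remote host over SSH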

Manual sync with an rsync host

[root@localhost ~]# rsync -avz 192.168.1.1::wwwroot /root
or
[root@localhost ~]# rsync -avz rsync://192.168.1.1/wwwroot /root

Create a test page on the source server:

[root@nfs01 database]# vim index.html
xgp666

Configuring inotify + rsync real-time synchronization (on both NFS servers)

(1) Install the software

rpm -q rsync          // check whether rsync is installed (it normally ships with the system)
yum install rsync -y  // install it with yum if it is missing

Install the inotify-tools package

[root@nfs02 ~]# tar zxf inotify-tools-3.14.tar.gz 
[root@nfs02 ~]# cd inotify-tools-3.14/
[root@nfs02 inotify-tools-3.14]#  ./configure && make && make install

(2) Adjust the inotify kernel parameters

[root@nfs02 ~]# vim /etc/sysctl.conf
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 1024
fs.inotify.max_user_watches = 1048576

[root@nfs02 ~]# sysctl -p
// apply the new kernel parameters

(3) Write the synchronization trigger script (saved here as /opt/ino.sh)

#!/bin/bash
# watch the shared directory and push changes to the rsync module in real time
A="inotifywait -mrq -e modify,move,create,delete /database/"
B="rsync -avz  /database/ 192.168.1.40::wwwroot"
$A | while read DIRECTORY EVENT FILE
do
    # only launch a new rsync push when none is already running
    if [ $(pgrep rsync | wc -l) -le 0 ] ; then
        $B
    fi
done

Note that the directories being synchronized between the two servers should have their permissions opened up as far as possible, to avoid errors caused by the directories' own permissions.

[root@nfs01 inotify-tools-3.14]# chmod  +x /opt/ino.sh

Set the script to start at boot

[root@nfs01 database]# vim /etc/rc.d/rc.local 
/opt/ino.sh &
/usr/bin/rsync --daemon
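On CentOS 7, /etc/rc.d/rc.local is not executable by default, so it may also be necessary to make it executable once:

[root@nfs01 database]# chmod +x /etc/rc.d/rc.local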

Testing on the source server

  • After the script is executed, the current terminal becomes a real-time monitoring console; open a new terminal for further operations.
  • Perform file operations in the shared module directory on the source server, then check the backup server: the files are synchronized in real time. A quick example follows.
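For example (a sketch; the host names follow the table above):

[root@nfs01 ~]# touch /database/test.txt
// on the backup server, the new file should appear almost immediately
[root@nfs02 ~]# ls /database/
// test.txt should be listed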

docker deployment (on both web servers)

[root@docker01 ~]# docker pull nginx
[root@docker01 ~]# mkdir -p  /www  
// create the mount-point directory

After creating the mount directory, mount the NFS share and start the container:

[root@docker01 ~]#  mount  -t nfs 192.168.1.30:/database /www
[root@docker01 ~]# docker run -itd --name nginx -p 90:80 -v /www/index.html:/usr/share/nginx/html/index.html nginx:latest
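To make the NFS mount survive a reboot and to confirm the container serves the shared page, something like the following can be used (a sketch; the address and paths assume the table above):

[root@docker01 ~]# echo '192.168.1.30:/database  /www  nfs  defaults,_netdev 0 0' >> /etc/fstab
// mount the share automatically at boot
[root@docker01 ~]# curl 192.168.1.11:90
// should return the test page stored on the NFS share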

Test

1. When nginx on both NGINX_MASTER and NGINX_BACKUP is working normally:
on NGINX_MASTER:
(screenshot)
on NGINX_BACKUP:
(screenshot)
The master server's ens33 NIC is bound to the VIP and the backup's is not; the site can be accessed normally from a browser.
2. Stop the nginx container on NGINX_MASTER:
(screenshot)
The stopped nginx container starts up again immediately, so the nginx startup script is working.
3. Stop the keepalived service on NGINX_MASTER:
on NGINX_MASTER:
(screenshot)
on NGINX_BACKUP:
(screenshot)
NGINX_BACKUP's ens33 NIC instantly binds the VIP, and the site is still reachable from a browser.
4. Start the keepalived service on NGINX_MASTER again:
on NGINX_MASTER:
(screenshot)
on NGINX_BACKUP:
(screenshot)
NGINX_MASTER's ens33 NIC rebinds the VIP, and the site is reachable normally from a browser.
5. Shut down the WEB_1 server; the site is still reachable normally from a browser.

Troubleshooting

First, check whether the nginx configuration file has problems.
Check whether the keepalived parameters on the two nodes are correct.
For nginx in docker, check that the port is mapped and the NFS shared directory is mounted.
For NFS, check the directory permissions, and that rsync + inotify is configured with the shell script doing real-time backup.

To sum up:

First, the images: pull the nginx image, then rebuild it into what we need, mainly by changing the main configuration file, and push the images to Harbor.

Build nginx to do the reverse proxy.
Build docker, run the nginx image, and serve the test page, which is shared in from NFS.
Set up NFS to achieve data sharing, including for the database, so it is persistent; and use rsync + inotify to do real-time backup.


Origin blog.51cto.com/14320361/2460662