Load balancing high availability web cluster based on nginx

Preface

Project name

Load balancing high availability web cluster based on nginx

Project environment

Nine CentOS 7.9 servers; nginx 1.21.1, ab, NFS, Zabbix, keepalived 2.1.5, ansible 2.9.27

Project description

Build an HTTPS load-balanced web cluster based on nginx. Ansible playbooks deploy the project environment, keepalived provides dual-VIP high-availability load balancing, a DNS server round-robins the two VIPs, NFS keeps web page data consistent, and Prometheus + Grafana provides visual monitoring.

![Architecture diagram](https://img-blog.csdnimg.cn/d47f50e180564f58a8a511fc868a32f2.png)

Project steps

1. Use 3 servers as back-end real servers providing the actual web service, deployed quickly with a playbook;
2. Build an NFS server to keep website data consistent; mount the shared directory on each web server and make the mount persist across reboots;
3. Use 2 servers as load balancers; nginx balances with the least-connections algorithm, and keepalived provides dual-VIP high availability and health checks;
4. Build a DNS server for domain name resolution and round-robin across the two VIPs;
5. Build Prometheus + Grafana monitoring for data visualization.

1. Set up passwordless SSH and deploy ansible

1. Install ansible:

[root@ansible ~]# yum install epel-release -y
[root@ansible ~]# yum install ansible -y
# check the version
[root@ansible ~]# ansible --version
ansible 2.9.27

2. Set up passwordless SSH from the ansible server to all managed servers:

The key must be copied to every machine ansible manages: the web servers, load balancers, NFS server, and so on.

[root@ansible ~]# ssh-keygen 
[root@ansible ~]# cd /root/.ssh/
[root@ansible .ssh]# ls
id_rsa  id_rsa.pub
[root@ansible .ssh]# ssh-copy-id -i id_rsa.pub [email protected].*
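Copying the key to nine machines one by one is tedious; a small loop can print (or, with the leading `echo` removed, actually run) the copy command for every managed host. The host list below is a sketch assembled from the inventory used later in this article:

```shell
# Dry-run sketch: print one ssh-copy-id command per managed host.
# Drop the leading "echo" to actually push the key (each host will
# prompt for its root password once).
HOSTS="192.168.78.130 192.168.78.131 192.168.78.133 192.168.78.134 192.168.78.135 192.168.78.136 192.168.78.137"
for h in $HOSTS; do
    echo ssh-copy-id -i /root/.ssh/id_rsa.pub "root@$h"
done
```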

3. Create a host list:

[root@ansible /]# cd /etc/ansible/
[root@ansible ansible]# ls
ansible.cfg  hosts  roles
[root@ansible ansible]# vim hosts 
[LBservers]
192.168.78.134          # load balancer 1
192.168.78.133          # load balancer 2

[webservers]
192.168.78.130          # web1
192.168.78.131          # web2
192.168.78.135          # web3

[nfsservers]
192.168.78.136          # nfs

[dnsservers]
192.168.78.137          # dns

4. Write playbook to batch install software:

[root@ansible playbooks]# vim install.yaml 
---
- hosts: LBservers      # install nginx and keepalived on the load balancers
  remote_user: root
  tasks:
  - name: install nginx
    script: /etc/ansible/playbooks/one_key_install_nginx.sh     # run the one-shot install script
  - name: install keepalived
    yum: name=keepalived state=latest

- hosts: webservers     # install nginx and nfs-utils on the web servers
  remote_user: root
  tasks:
  - name: install nginx
    script: /etc/ansible/playbooks/one_key_install_nginx.sh
  - name: install nfs
    yum: name=nfs-utils state=latest

- hosts: webservers LBservers    # install node_exporter on the web servers and load balancers so the Prometheus server can scrape their metrics
  remote_user: root
  tasks:
  - name: install node_exporter
    script: /etc/ansible/playbooks/one_key_install_node_exporter.sh

The one-shot nginx install script, one_key_install_nginx.sh:

#!/bin/bash

mkdir -p /my_nginx
cd /my_nginx

# download the nginx source tarball
curl -O http://nginx.org/download/nginx-1.23.3.tar.gz

# unpack
tar xf nginx-1.23.3.tar.gz
# enter the source directory
cd nginx-1.23.3

# create a user to run the nginx worker processes
useradd -s /sbin/nologin ly

# install dependencies: openssl for TLS, gcc/automake/make for compiling, pcre for regex support
yum install -y openssl openssl-devel gcc pcre pcre-devel automake make net-tools vim

# configure generates a Makefile from the selected options, which drives the later make step
# available options are documented at http://nginx.org/en/docs/configure.html
# common options:
# --with-*: enable a feature that is off by default    --without-*: disable a feature that is on by default
# --prefix=path: install prefix                        --conf-path=path: config file path (defaults under the prefix)
# --user=name: user that runs the nginx worker processes
# --with-http_ssl_module: enable https (certificate/key based TLS)
# --without-http_memcached_module: disable http_memcached
# --with-http_realip_module: enable realip so back ends see the real client IP behind the proxy
# --with-http_v2_module: HTTP/2 support
# --with-threads: enable the thread pool               --with-http_stub_status_module: enable the status/statistics page
# --with-stream: tcp/udp reverse proxying, i.e. layer-4 load balancing
./configure --prefix=/usr/local/nginx --user=ly --with-http_ssl_module --with-http_realip_module --with-http_v2_module --with-threads --with-http_stub_status_module --with-stream

# compile
# make builds the binaries according to the Makefile
# -j: number of parallel build jobs; match the number of CPU cores (run top and press 1 to count them)
make -j2
# copy the built binaries into the install prefix
make install

# start nginx
/usr/local/nginx/sbin/nginx

# add nginx to PATH
PATH=$PATH:/usr/local/nginx/sbin
echo "PATH=$PATH" >>/root/.bashrc

# start nginx on boot; /etc/rc.local is a symlink to rc.d/rc.local, which must be executable to run at boot
echo "/usr/local/nginx/sbin/nginx" >>/etc/rc.local
chmod +x /etc/rc.d/rc.local

# disable selinux and firewalld
# selinux: immediately and permanently
setenforce 0
sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config

# firewall: immediately and permanently
service firewalld stop
systemctl disable firewalld
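As an alternative to the rc.local approach in the script above, nginx could be managed by a systemd unit. This is a sketch, not part of the original project; the paths assume the `/usr/local/nginx` prefix used by the script:

```ini
# /etc/systemd/system/nginx.service  (hypothetical unit; adjust paths as needed)
[Unit]
Description=nginx built from source
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit
PIDFile=/usr/local/nginx/logs/nginx.pid

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now nginx`, which replaces both the manual start and the rc.local entry.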

The one-shot node_exporter install script, one_key_install_node_exporter.sh:

#!/bin/bash

# unpack (assumes the node_exporter tarball was already uploaded to /root)
tar xf /root/node_exporter-1.4.0-rc.0.linux-amd64.tar.gz -C /root
mkdir -p /node_exporter
mv /root/node_exporter-1.4.0-rc.0.linux-amd64/*  /node_exporter

# add node_exporter to PATH
PATH=/node_exporter/:$PATH
echo "PATH=$PATH" >>/root/.bashrc
chmod +x /etc/rc.d/rc.local

# start the node exporter agent; pick a listen port that does not clash with existing services
nohup node_exporter --web.listen-address 0.0.0.0:8090  &
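The `nohup … &` start does not survive a reboot; a systemd unit would. A sketch (hypothetical, assuming the `/node_exporter` install path and port 8090 used above):

```ini
# /etc/systemd/system/node_exporter.service  (hypothetical unit)
[Unit]
Description=Prometheus node exporter
After=network.target

[Service]
ExecStart=/node_exporter/node_exporter --web.listen-address=0.0.0.0:8090
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl daemon-reload && systemctl enable --now node_exporter` starts it now and at every boot.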

5. Run the playbook to quickly deploy the environment:

[root@ansible playbooks]# ansible-playbook install.yaml 
PLAY [LBservers] ****************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************
ok: [192.168.78.133]
ok: [192.168.78.134]

TASK [install nginx] ************************************************************************************************************************************************
changed: [192.168.78.133]
changed: [192.168.78.134]

TASK [install keepalived] *******************************************************************************************************************************************
changed: [192.168.78.134]
changed: [192.168.78.133]

PLAY [webservers] ***************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************
ok: [192.168.78.135]
ok: [192.168.78.131]
ok: [192.168.78.130]

TASK [install nginx] ************************************************************************************************************************************************
changed: [192.168.78.130]
changed: [192.168.78.131]
changed: [192.168.78.135]

TASK [install nfs] **************************************************************************************************************************************************
changed: [192.168.78.131]
changed: [192.168.78.130]
changed: [192.168.78.135]
[WARNING]: Could not match supplied host pattern, ignoring: LBserver

PLAY [webservers LBserver] ******************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************
ok: [192.168.78.131]
ok: [192.168.78.130]
ok: [192.168.78.135]

TASK [install node_exporter] ****************************************************************************************************************************************
changed: [192.168.78.130]
changed: [192.168.78.131]
changed: [192.168.78.135]

PLAY RECAP **********************************************************************************************************************************************************
192.168.78.130             : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.78.131             : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.78.133             : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.78.134             : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
192.168.78.135             : ok=5    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

An error appeared: `[WARNING]: Could not match supplied host pattern, ignoring: LBserver`. The pattern `LBserver` does not exist in /etc/ansible/hosts; on inspection this turned out to be a typo in the playbook (it should be `LBservers`).

2. Configure the web server

Enable the status statistics page and hide the nginx version number:

worker_processes  2;   # start 2 worker processes

events {
    worker_connections  1024;	# each worker process allows 1024 connections
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    keepalive_timeout  65;

    server_tokens off;          # hide the nginx version

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        location ~ /status {            # the statistics page lives at /status
            stub_status   on;           # enable status statistics
            access_log off;             # disable logging for this location
        }
    }
}

3. Configure NFS server

Share files over the network among the three web servers to keep web page data consistent.

1. Install and start

[root@nfs ~]# yum install nfs-utils -y
[root@nfs ~]# service nfs-server restart
Redirecting to /bin/systemctl restart nfs-server.service

2. Edit the exports file /etc/exports:

[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web 192.168.78.0/24(ro,all_squash,sync) 	# share /web with the 192.168.78.0/24 network, read-only, synchronous writes
[root@nfs ~]# mkdir /web
[root@nfs ~]# cd /web/
[root@nfs web]# mkdir sc
[root@nfs web]# vim index.html
[root@nfs web]# ls
index.html  sc
[root@nfs web]# exportfs -rv		# re-export the nfs shares
exporting 192.168.78.0/24:/web
[root@nfs web]# service firewalld stop		# stop the firewall
Redirecting to /bin/systemctl stop firewalld.service

3. Mount and use shared folders on other web servers

[root@localhost web1]# mount 192.168.78.136:/web 	/usr/local/nginx/html

4. Automatically mount nfs file system when booting

Edit /etc/rc.local on each web server so the NFS share is mounted at boot:

[root@web1 html]# vim /etc/rc.local
service nfs-server start
mount  192.168.78.136:/web   /usr/local/nginx/html
[root@web1 html]# chmod +x /etc/rc.d/rc.local

This ensures all web servers serve the same pages.
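An alternative to the rc.local entry above is an /etc/fstab line, which mounts the share at boot through the normal mount machinery. A sketch, assuming the NFS server at 192.168.78.136 and the nginx docroot used in this project:

```
192.168.78.136:/web   /usr/local/nginx/html   nfs   defaults,_netdev   0 0
```

The `_netdev` option delays the mount until the network is up, so boot does not hang waiting for the NFS server.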

4. Load balancing

1. Modify the nginx configuration file on the load balancers to use the least-connections algorithm and enable HTTPS:

worker_processes  2;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    server_tokens off;          # hide the nginx version
    sendfile        on;
    keepalive_timeout  65;

    upstream LB {
        least_conn;                     # least-connections algorithm
        server  192.168.78.130;         # web1
        server  192.168.78.131;         # web2
        server  192.168.78.135;         # web3
    }

    server {
        listen       80;
        # redirect plain http requests to https
        return 301 https://www.yilong.love$request_uri;
    }

    # HTTPS server
    server {
        listen       443 ssl;
        server_name  www.yilong.love;	# the registered domain

        ssl_certificate      9993605_yilong.love.pem;	# ssl certificate
        ssl_certificate_key  9993605_yilong.love.key;

        ssl_session_cache    shared:SSL:1m;
        ssl_session_timeout  5m;

        ssl_ciphers  HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers  on;

        location / {
            proxy_pass http://LB;       # proxy to the LB upstream group
            proxy_set_header   X-Real-IP     $remote_addr;      # pass the client IP so back ends can log it
        }
    }
}

2. Edit the log format on the web servers to add the X-Real-IP field and record the real client IP:

	log_format  main  '$remote_addr - $http_x_real_ip - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  logs/access.log  main;

5. Deploy keepalived dual VIP load balancing server

1. Modify the configuration file on the LB1 load balancer /etc/keepalived/keepalived.conf:

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER	# MASTER role
    interface ens33
    virtual_router_id 220
    priority 120	# priority
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.78.199		# vip1
    }
}

vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 221
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.78.188		# vip2
    }
}

2. Modify the configuration file on the LB2 load balancer /etc/keepalived/keepalived.conf:

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 220
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.78.199
    }
}

vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 221
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.78.188
    }
}

Each machine runs two VRRP instances, MASTER for one VIP and BACKUP for the other, so each load balancer holds one VIP and both serve traffic. This avoids the single-VIP situation where one machine is busy while the other sits idle, and makes better use of the hardware. The LB1 and LB2 configurations mirror each other as mutual master and backup.

3. Health checks

keepalived is only useful while nginx is working. If nginx fails, the machine is no longer a usable load balancer: it must give up its MASTER status by lowering its priority and yield to the other machine. That requires a health check.
1. On both load balancers, write a script that checks whether nginx is running, and make it executable:

[root@lb-1 nginx]# cat check_nginx.sh 
#!/bin/bash
# check whether nginx is running normally
if  /usr/sbin/pidof  nginx  ;then
	exit 0
else
	exit 1
fi
[root@lb-1 nginx]# chmod +x check_nginx.sh 

2. Define the monitoring script in keepalived and call it; the track_script call belongs in the MASTER instance.
Modify /etc/keepalived/keepalived.conf on LB1:

# define the monitoring script chk_nginx
vrrp_script chk_nginx {
    # when /my_nginx/check_nginx.sh exits 0, nothing happens; only when it
    # fails (non-zero exit) is the priority reduced by 30
    script "/my_nginx/check_nginx.sh"
    interval 1      # health check interval in seconds
    weight -30
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 220
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.78.199
    }
    # call the monitoring script
    track_script {
        chk_nginx
    }
}

Modify LB2 /etc/keepalived/keepalived.conf:

# define the monitoring script chk_nginx
vrrp_script chk_nginx {
    # when /my_nginx/check_nginx.sh exits 0, nothing happens; only when it
    # fails (non-zero exit) is the priority reduced by 30
    script "/my_nginx/check_nginx.sh"
    interval 1
    weight -30
}

vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 221
    priority 120
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.78.188
    }
    # call the monitoring script
    track_script {
        chk_nginx
    }
}
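The weight of -30 is not arbitrary: when check_nginx.sh fails, keepalived subtracts 30 from the failed MASTER's priority, and the result must fall below the peer's BACKUP priority of 100 for the VIP to move. A quick arithmetic check:

```shell
# MASTER starts at priority 120; a failed check subtracts the weight (30).
MASTER_PRIO=120
WEIGHT=30
BACKUP_PRIO=100
EFFECTIVE=$((MASTER_PRIO - WEIGHT))
echo "priority after nginx failure: $EFFECTIVE"    # 90
if [ "$EFFECTIVE" -lt "$BACKUP_PRIO" ]; then
    echo "VIP fails over to the BACKUP node"
fi
```

A weight of, say, -10 would leave the broken MASTER at 110, still above the backup, and the VIP would never move.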

6. Build DNS server

1. Turn off the firewall and SELinux, and install bind*:

[root@dns ~]# systemctl stop firewalld
[root@dns ~]# vim /etc/selinux/config 
SELINUX=disabled
[root@dns ~]# yum install bind* -y

2. Set the named service to start at boot, and start the DNS service immediately:

[root@dns ~]# systemctl enable named
Created symlink from /etc/systemd/system/multi-user.target.wants/named.service to /usr/lib/systemd/system/named.service.
[root@dns ~]# systemctl start named		# start the named process immediately
[root@dns ~]# netstat -anplut|grep named
tcp        0      0 127.0.0.1:953           0.0.0.0:*               LISTEN      4987/named          
tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      4987/named          
tcp6       0      0 ::1:953                 :::*                    LISTEN      4987/named          
tcp6       0      0 ::1:53                  :::*                    LISTEN      4987/named          
udp        0      0 127.0.0.1:53            0.0.0.0:*                           4987/named          
udp6       0      0 ::1:53                  :::*                                4987/named 

3. Point the machine's resolver at the local DNS service:

[root@dns ~]# cat /etc/resolv.conf 
# Generated by NetworkManager
nameserver 127.0.0.1	# 127.0.0.1 is the local loopback address; every machine has it, and it is usable only locally

4. Modify the configuration file to allow other machines to query the dns domain name:

[root@dns ~]# vim /etc/named.conf
options {
        listen-on port 53 { any; };     # changed from 127.0.0.1 to any
        listen-on-v6 port 53 { any; };  # changed
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        recursing-file  "/var/named/data/named.recursing";
        secroots-file   "/var/named/data/named.secroots";
        allow-query     { any; };       # changed
[root@dns ~]# service named restart
Redirecting to /bin/systemctl restart named.service

5. Edit the zone configuration so named serves the yilong.love domain:

[root@dns ~]# vim /etc/named.rfc1912.zones
# append the following
zone "yilong.love" IN {
        type master;
        file "yilong.love.zone";
        allow-update { none; };
};

6. Create the yilong.love zone data file in /var/named, the DNS data directory:

[root@dns named]# pwd
/var/named
[root@dns named]# ls
chroot  chroot_sdb  data  dynamic  dyndb-ldap  named.ca  named.empty  named.localhost  named.loopback  slaves  yilong.love.zone
[root@dns named]# cp -a  named.localhost yilong.love.zone  # -a keeps the same attributes; the copy becomes the yilong.love zone data file
[root@dns named]# vim yilong.love.zone
$TTL 1D
@       IN SOA  @ rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
        NS      @
        A       192.168.78.137
www IN A 192.168.78.199		# vip1
www IN A 192.168.78.188		# vip2

By adding two A records with the same name pointing at the two VIPs, DNS-level load balancing distributes traffic across the two load balancers.
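The rotation can be illustrated with a tiny simulation (illustration only; on a real client you would simply query the name repeatedly). named typically rotates the order of the answers in the RRset, and clients take the first answer, so consecutive queries land on alternating VIPs:

```shell
# Simulated round robin over the two VIP A records.
VIPS="192.168.78.199 192.168.78.188"
for q in 1 2 3 4; do
    set -- $VIPS                  # split the rotated list into $1 and $2
    echo "query $q -> first answer: $1"
    VIPS="$2 $1"                  # rotate the answer order for the next query
done
```

Queries 1 and 3 go to 192.168.78.199, queries 2 and 4 to 192.168.78.188.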
Test dns resolution:

[root@long ~]# ping www.yilong.love
PING www.yilong.love (192.168.78.188) 56(84) bytes of data.
64 bytes from 192.168.78.188 (192.168.78.188): icmp_seq=1 ttl=64 time=0.234 ms
64 bytes from 192.168.78.188 (192.168.78.188): icmp_seq=2 ttl=64 time=0.551 ms
^Z
[3]+  Stopped               ping www.yilong.love
[root@long ~]# ping www.baidu.com
PING www.a.shifen.com (14.119.104.189) 56(84) bytes of data.
64 bytes from 14.119.104.189 (14.119.104.189): icmp_seq=1 ttl=128 time=23.5 ms
64 bytes from 14.119.104.189 (14.119.104.189): icmp_seq=2 ttl=128 time=23.7 ms
64 bytes from 14.119.104.189 (14.119.104.189): icmp_seq=3 ttl=128 time=23.5 ms
^Z
[4]+  Stopped               ping www.baidu.com

7. Set up Prometheus+grafana monitoring

1. Install prometheus server

# upload the downloaded release tarball to the server
[root@prometheus ~]# mkdir /prom
[root@prometheus ~]# cd /prom
[root@prometheus prom]# ls
prometheus-2.34.0.linux-amd64.tar.gz
# unpack it
[root@prometheus prom]# tar xf prometheus-2.34.0.linux-amd64.tar.gz
[root@prometheus prom]# ls
prometheus-2.34.0.linux-amd64  prometheus-2.34.0.linux-amd64.tar.gz
[root@prometheus prom]# mv prometheus-2.34.0.linux-amd64 prometheus
[root@prometheus prom]# ls
prometheus  prometheus-2.34.0.linux-amd64.tar.gz
# add prometheus to PATH, for the current session and permanently
[root@prometheus prometheus]# PATH=/prom/prometheus:$PATH		# current session
[root@prometheus prometheus]# cat /root/.bashrc				# permanent
# .bashrc

# User specific aliases and functions

alias rm='rm -i'
alias cp='cp -i'
alias mv='mv -i'

# Source global definitions
if [ -f /etc/bashrc ]; then
	. /etc/bashrc
fi
PATH=/prom/prometheus:$PATH   # appended
# run prometheus
[root@prometheus prometheus]# nohup prometheus  --config.file=/prom/prometheus/prometheus.yml &
[1] 8431
[root@prometheus prometheus]# nohup: ignoring input and appending output to 'nohup.out'

2. Manage prometheus as a systemd service

[root@prometheus prometheus]# vim /usr/lib/systemd/system/prometheus.service 
[Unit]
Description=prometheus
[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
# reload the systemd units
[root@prometheus prometheus]# systemctl daemon-reload
[root@prometheus prometheus]#  service prometheus start
[root@prometheus system]# ps aux|grep prometheu
root       7193  2.0  4.4 782084 44752 ?        Ssl  13:16   0:00 /prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
root       7201  0.0  0.0 112824   972 pts/1    S+   13:16   0:00 grep --color=auto prometheu

3. Add the exporter program in prometheus server

Configure scraping on the prometheus server: add each node as a target so its metrics are pulled into the time-series database, telling prometheus where to fetch data from.

[root@prometheus prometheus]# vim prometheus.yml 
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9090"]
  # nodes to monitor, added below
  - job_name: "LB1"
    static_configs:
      - targets: ["192.168.78.134:8090"]
  - job_name: "LB2"
    static_configs:
      - targets: ["192.168.78.133:8090"]
  - job_name: "web1"
    static_configs:
      - targets: ["192.168.78.130:8090"]
  - job_name: "web2"
    static_configs:
      - targets: ["192.168.78.131:8090"]
  - job_name: "web3"
    static_configs:
      - targets: ["192.168.78.135:8090"]
[root@prometheus prometheus]# service prometheus restart
Redirecting to /bin/systemctl restart prometheus.service

4. Log in to the prometheus web UI and check that the targets were added successfully.

5. Deploy grafana

Grafana is an open source data visualization and analysis platform that supports query and display of multiple data sources.

[root@prometheus grafana]# wget https://dl.grafana.com/enterprise/release/grafana-enterprise-9.1.2-1.x86_64.rpm
[root@prometheus grafana]# yum install grafana-enterprise-9.1.2-1.x86_64.rpm -y
[root@prometheus grafana]# service grafana-server start
Starting grafana-server (via systemctl):                   [  OK  ]
# start at boot
[root@prometheus grafana]# systemctl enable grafana-server
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.
# check the listening port
[root@prometheus grafana]# netstat -anplut|grep grafana
tcp6       0      0 :::3000                 :::*                    LISTEN      2187/grafana-server 

Log in to view:

The default username and password are both admin.

Configure the prometheus data source (Settings > Add data source > Prometheus), fill in the URL, and save.
Import a grafana dashboard template (Dashboards > Import) and fill in a template ID to get the monitoring views.

8. Stress test:

[root@long ~]# ab -n 60000 -c 20000 http://www.yilong.love/
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking www.yilong.love (be patient)
Completed 6000 requests
Completed 12000 requests
Completed 18000 requests
Completed 24000 requests
Completed 30000 requests
Completed 36000 requests
Completed 42000 requests
Completed 48000 requests
Completed 54000 requests
Completed 60000 requests
Finished 60000 requests


Server Software:        nginx
Server Hostname:        www.yilong.love
Server Port:            80

Document Path:          /
Document Length:        162 bytes

Concurrency Level:      20000
Time taken for tests:   6.059 seconds
Complete requests:      60000
Failed requests:        124526
   (Connect: 0, Receive: 0, Length: 69984, Exceptions: 54542)
Write errors:           0
Non-2xx responses:      5458
Total transferred:      1899384 bytes
HTML transferred:       884196 bytes
Requests per second:    9902.93 [#/sec] (mean)
Time per request:       2019.604 [ms] (mean)
Time per request:       0.101 [ms] (mean, across all concurrent requests)
Transfer rate:          306.14 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        1  806 203.4    786    2176
Processing:    62  863 276.7    855    1701
Waiting:        0   99 316.5      0    1276
Total:       1026 1669 188.7   1645    3044

Percentage of the requests served within a certain time (ms)
  50%   1645
  66%   1735
  75%   1738
  80%   1741
  90%   1763
  95%   1813
  98%   2236
  99%   2449
 100%   3044 (longest request)
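As a sanity check on the report above, the "Failed requests" total is the sum of its Length and Exceptions components:

```shell
# Figures taken from the ab summary above.
LENGTH_ERRORS=69984
EXCEPTIONS=54542
FAILED=$((LENGTH_ERRORS + EXCEPTIONS))
echo "failed requests: $FAILED"    # 124526, matching the ab report
```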

Judging from the failure rate at 20,000 concurrent connections, the cluster handles a maximum of roughly 2,000 concurrent requests.
At this point, the project is completed!

Project experience:

1. Gained hands-on familiarity with one-click deployment, which is fast and convenient and builds confidence for learning ansible further;
2. Deepened my understanding of nginx and of designing and troubleshooting high-availability deployments;
3. Learned Prometheus monitoring, a fundamental operations skill that surfaces problems early and provides warnings;
4. Laid a foundation for working with larger clusters and improved my overall planning ability;
5. Learned how basic infrastructure components cooperate: keepalived, ansible, nginx, NFS, and so on.

In short, the project not only deepened my understanding of the relevant theory but also gave me practical skills. The problems that came up exercised my ability to solve real issues, and working through them taught me to communicate and collaborate better with peers and teachers. The experience gained here will help me in future study and in practical work.

Origin blog.csdn.net/zheng_long_/article/details/130714816