Load balancing cluster: LVS-NAT working mode deployment

1. Overview of enterprise cluster applications

1. The composition of the cluster

Scheduler, server pool, storage server

2. The meaning of cluster

  • Also called a server group or cluster
  • Composed of multiple hosts, but presented externally as a single whole (that is, the cluster receives user requests as one entity)

3. The problem clusters solve

In Internet applications, as sites place ever higher demands on hardware performance, response speed, service stability, and data reliability, a single server can no longer keep up.

4. Solution

  • Use expensive minicomputers and mainframes
  • Use ordinary servers to build service clusters

5. Enterprise cluster classification

Depending on the target of the cluster, clusters can be divided into three types:

  • Load balancing cluster
  • High-availability cluster (active/standby)
  • High-performance computing cluster

5.1 Load balancing cluster

  • 1. The goals are to improve the responsiveness of the application system, handle as many access requests as possible, and reduce latency, obtaining high concurrency and high overall load-balancing (LB) performance
  • 2. How LB distributes the load depends on the scheduling algorithm of the master node (the scheduler)

5.2 High-availability cluster (to ensure uninterrupted business)

  • 1. Improves the reliability of the application system and reduces interruption time as much as possible, ensuring service continuity and achieving the fault tolerance of high availability (HA)
  • 2. HA working modes include duplex (active/active) and master/slave (active/standby)

5.3 High-performance computing cluster

  • 1. The goal is to increase the CPU computing speed of the application system and expand its hardware resources and analysis capabilities, obtaining high-performance computing (HPC) power comparable to large-scale computers and supercomputers
  • 2. High performance relies on "distributed computing" and "parallel computing": multiple servers are integrated through dedicated hardware and software

6. Load balancing cluster architecture

Load balancing structure

  • The first layer, load scheduler (Load Balancer or Director)
  • The second layer, the server pool (Server Pool)
  • The third layer, shared storage (Share Storage)

7. Analysis of load balancing cluster working mode

Load balancing clusters are currently the most commonly used cluster type in enterprises.
LVS cluster load scheduling technology has three working modes:

  • Address Translation-NAT
  • IP tunnel-Tunnel
  • Direct routing-DR

7.1 NAT mode

Address translation

  • 1. Network Address Translation, NAT mode for short
  • 2. A private network structure similar to a firewall's: the load scheduler acts as the gateway of all server nodes, serving both as the clients' access entrance and as each node's exit for responding to clients
  • 3. The server nodes use private IP addresses and sit on the same physical network as the load scheduler; security is better than in the other two modes

Advantage: security
Disadvantage: lower load capacity than the other two modes

7.2 TUN mode

IP tunnel

  • 1. IP Tunnel, referred to as TUN mode

  • 2. Uses an open network structure: the load scheduler serves only as the clients' access entrance, and each node responds to clients directly over its own Internet connection instead of going through the load scheduler

  • 3. The server nodes are scattered across different locations on the Internet, each with an independent public IP address, and communicate with the load scheduler through a dedicated IP tunnel

    Features: each TUN node has an independent public IP address and communicates with the load scheduler through a dedicated IP tunnel

7.3 DR mode

Direct routing

  • 1. Direct Routing, referred to as DR mode
  • 2. Uses a semi-open network structure, similar to the TUN mode, but the nodes are not scattered across different locations: they sit on the same physical network as the scheduler
  • 3. The load scheduler is connected to each node server through the local network, without the need to establish a dedicated IP tunnel

Similarity: all web nodes respond to the client directly.
Differences:

  • 1. Each TUN node has an independent public IP address; DR nodes do not

  • 2. TUN web nodes communicate with the scheduler through an IP tunnel; DR web nodes communicate with it over the local area network

  • 3. TUN web nodes respond to clients directly; DR web nodes respond through the router (gateway)

2. LVS overview

1. Overview of LVS Virtual Server

  • Load balancing solution for Linux kernel
  • Created in May 1998 by Dr. Zhang Wensong of China

2. Four load scheduling algorithms of LVS

2.1 Polling (rr)

Distributes received access requests to each node (real server) in the cluster in turn, treating every server equally regardless of its actual connection count and system load
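
A minimal shell sketch of the idea (the two IPs reuse the web nodes from the deployment below; this is an illustration only, not LVS code):

```shell
#!/bin/bash
# Round-robin sketch: requests are handed to the real servers strictly
# in turn, ignoring their load, just as the rr algorithm does.
SERVERS=(192.168.1.11 192.168.1.12)
for req in 1 2 3 4; do
  # request N goes to server (N-1) mod pool-size
  echo "request $req -> ${SERVERS[$(( (req - 1) % ${#SERVERS[@]} ))]}"
done
```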

2.2 Weighted round-robin (wrr)

  • 1. Distributes requests according to the weight values set by the scheduler: nodes with higher weights receive tasks first and are allocated more requests
  • 2. Ensure that the server with strong performance bears more access traffic
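
Sketched the same way (the weight values here are made up; real LVS builds the dispatch sequence dynamically inside the kernel):

```shell
#!/bin/bash
# Weighted round-robin sketch: a node of weight 2 appears twice in the
# dispatch sequence, so it receives twice as many requests.
SERVERS=(192.168.1.11 192.168.1.12)
declare -A WEIGHT=([192.168.1.11]=2 [192.168.1.12]=1)
SEQ=()
for s in "${SERVERS[@]}"; do
  for ((n = 0; n < ${WEIGHT[$s]}; n++)); do SEQ+=("$s"); done
done
for req in 1 2 3; do
  echo "request $req -> ${SEQ[$(( (req - 1) % ${#SEQ[@]} ))]}"
done
```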

2.3 The least connection (lc)

Distributes requests according to the number of connections each real server has established, forwarding newly received access requests to the node with the fewest connections first
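
A toy illustration with invented connection counts:

```shell
#!/bin/bash
# Least-connections sketch: the next request goes to the node whose
# current connection count is lowest (counts are made up).
declare -A CONN=([192.168.1.11]=8 [192.168.1.12]=3)
best=""
for s in "${!CONN[@]}"; do
  if [ -z "$best" ] || [ "${CONN[$s]}" -lt "${CONN[$best]}" ]; then
    best=$s
  fi
done
echo "next request -> $best (${CONN[$best]} connections)"
```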

2.4 Weighted least connection (wlc)

  • When the performance of the server nodes differs greatly, the weight can be adjusted automatically for each real server
  • Nodes with higher performance bear a larger share of the active connection load
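
A sketch with invented counts and weights; the connections/weight ratio is compared by cross-multiplication to stay in integer arithmetic:

```shell
#!/bin/bash
# Weighted least-connections sketch: pick the node with the smallest
# connections/weight ratio (counts and weights are made up).
declare -A CONN=([192.168.1.11]=8 [192.168.1.12]=3)
declare -A WEIGHT=([192.168.1.11]=4 [192.168.1.12]=1)
best=""
for s in "${!CONN[@]}"; do
  # s beats best if CONN[s]/WEIGHT[s] < CONN[best]/WEIGHT[best]
  if [ -z "$best" ] || \
     [ $(( ${CONN[$s]} * ${WEIGHT[$best]} )) -lt $(( ${CONN[$best]} * ${WEIGHT[$s]} )) ]; then
    best=$s
  fi
done
echo "next request -> $best"   # .11 has ratio 8/4=2, .12 has 3/1=3
```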

3. Use ipvsadm tool

LVS cluster creation and management

  • Create virtual servers
  • Add and delete server nodes
  • View the status of the cluster and its nodes
  • Save the load-distribution strategy

3. LVS-NAT deployment

LVS load balancing cluster-address translation mode (LVS-NAT)

1. Environment

As the gateway of the web server pool, the LVS scheduler has two network cards (one on the internal network, one facing the external clients), connecting the internal and external networks, and uses the round-robin (rr) algorithm

2. Deployment steps

  • 1. Load the ip_vs module and install the ipvsadm tool
  • 2. Turn on routing and forwarding
  • 3. Create a new LVS virtual server and add a node server
  • 4. Configure the node server
    • Build a test website
    • Mount NFS shared storage
    • Create a test page
  • 5. Save the rules and test

3. Detailed equipment configuration

One scheduling server: lvs
IP: 192.168.1.10 (internal)
    192.168.2.10 (external)

Two web servers:
IP: 192.168.1.11 (first)
IP: 192.168.1.12 (second)

NFS shared server:
IP: 192.168.1.13

The gateways of web1 and web2 point to the intranet address of lvs.

systemctl stop firewalld     # stop the firewall (on every host)
setenforce 0                 # put SELinux in permissive mode


Test whether lvs can communicate with the two web servers, nfs, and the client

Turn on the routing function on the lvs scheduler

[root@lvs ~]# vi /etc/sysctl.conf 
Add:
net.ipv4.ip_forward=1
[root@lvs ~]# sysctl -p      # apply the change without rebooting

Determine the kernel's support for lvs

[root@lvs ~]# modprobe ip_vs       # probe and load the ip_vs module
[root@lvs ~]# cat /proc/net/ip_vs  # view basic information
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

Install the software

[root@lvs ~]# yum -y install ipvsadm

Write the NAT script

[root@lvs ~]# vi nat.sh       

#!/bin/bash
ipvsadm -C      # clear all records in the kernel virtual server table
ipvsadm -A -t 192.168.2.10:80 -s rr      # create the virtual server
ipvsadm -a -t 192.168.2.10:80 -r 192.168.1.11:80 -m   # add a server node
ipvsadm -a -t 192.168.2.10:80 -r 192.168.1.12:80 -m   # add a server node
ipvsadm -Ln    # view node status; "-n" shows addresses and ports in numeric form

Option notes:

"-A" adds a virtual server
"-a" adds a real server
"-t" specifies the VIP address and TCP port
"-r" specifies the RIP address and TCP port
"-s" specifies the load scheduling algorithm: rr (round robin), wrr (weighted round robin), lc (least connections), wlc (weighted least connections)
"-m" selects NAT cluster mode ("-g" is DR mode, "-i" is TUN mode)

On the web1 and web2 servers

[root@web1 ~]# yum -y install httpd
[root@web2 ~]# yum -y install httpd

On nfs

[root@nfs ~]# yum -y install nfs-utils rpcbind

Create web content on nfs

[root@nfs ~]# mkdir /opt/web1
[root@nfs ~]# mkdir /opt/web2
[root@nfs ~]# echo "<h1>This is Web1</h1>" > /opt/web1/index.html
[root@nfs ~]# echo "<h1>This is Web2</h1>" > /opt/web2/index.html

Edit the configuration file

[root@nfs ~]# vi /etc/exports
Add (note: no space before the bracket, otherwise the options are not applied to that host):
/opt/web1 192.168.1.11/32(ro)
/opt/web2 192.168.1.12/32(ro)

[root@nfs ~]# systemctl restart rpcbind
[root@nfs ~]# systemctl restart nfs
[root@nfs ~]# showmount -e     # view the local shared directories


Mount on web1, web2

[root@web1 ~]# mount 192.168.1.13:/opt/web1 /var/www/html
[root@web1 ~]# df -Th
[root@web2 ~]# mount 192.168.1.13:/opt/web2 /var/www/html
[root@web2 ~]# df -Th
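
These mounts do not survive a reboot. If persistence is wanted, /etc/fstab entries along these lines can be added (a sketch; the `_netdev` option and paths are assumptions to adapt to your setup):

```
# on web1
192.168.1.13:/opt/web1  /var/www/html  nfs  defaults,_netdev  0 0
# on web2
192.168.1.13:/opt/web2  /var/www/html  nfs  defaults,_netdev  0 0
```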

Start and view httpd service

[root@web1 ~]# systemctl start httpd       # start the service
[root@web1 ~]# systemctl status httpd      # check the service status
[root@web1 ~]# netstat -anpt | grep httpd  # check the listening port
[root@web2 ~]# systemctl start httpd       # start the service
[root@web2 ~]# systemctl status httpd      # check the service status
[root@web2 ~]# netstat -anpt | grep httpd  # check the listening port

Test page

[root@web1~]# curl http://localhost
<h1>This is Web1</h1>
[root@web2 ~]# curl http://localhost
<h1>This is Web2</h1>

Test the two web (node) servers from the lvs scheduler

192.168.1.11
192.168.1.12

Execute script on lvs

[root@lvs ~]# vi nat.sh
[root@lvs ~]# chmod +x nat.sh
[root@lvs ~]# ./nat.sh   # run the script

Test access

Visit http://192.168.2.10 from the client: the first request returns one web server's test page, and refreshing shows the other, confirming round-robin scheduling (for example, `for i in 1 2 3 4; do curl -s http://192.168.2.10/; done` alternates between the two pages).

[root@lvs ~]# ipvsadm -Lnc      # show the connections being scheduled, with per-connection details and duration
[root@lvs ~]# ipvsadm -Ln       # view the virtual server table and node status in numeric form
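
Step 5 of the deployment plan mentions saving the rules, but no command is shown above. On CentOS 7 this is typically done along these lines (a sketch; the service name and file path are distribution-specific assumptions):

```shell
# Dump the current virtual server table so it survives a reboot;
# the ipvsadm service restores /etc/sysconfig/ipvsadm at boot.
ipvsadm-save -n > /etc/sysconfig/ipvsadm
systemctl enable ipvsadm
```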



Origin blog.csdn.net/F2001523/article/details/110878088