Now that we understand the basic concepts of load balancing, we can draw a map of the request flow: all requests coming from the Internet first arrive at the Nginx load balancer, which then distributes them to a number of back-end servers according to a specified policy (the policies themselves will be described later), as shown below.

[Figure 1: requests from the Internet reaching Nginx and being distributed to the back-end servers]

    Since most websites today are built on the B/S (browser/server) model, let's try load balancing a website. The goals we want to achieve:

1. There are two back-end servers, both providing the same service at the same time;

2. When one of the back-end servers fails, user access is not interrupted.

    Through this experiment our website gains high availability: as long as not all of the back-end servers are down, users can always reach the site. And when traffic is heavy, Nginx distributes it across the different servers, so the limited performance of any single server no longer degrades the user experience.

   

HTTP Load Balancing

Objective: use Nginx to load-balance two back-end web servers, distributing requests to them in round-robin fashion.


Role          Machine name    IP address

Nginx         Host1           192.168.30.130

Web Server    Host2           192.168.30.131

Web Server    Host3           192.168.30.132

Create a new configuration file, http.conf; we will use this file to achieve the goal of this experiment:

vim /etc/nginx/conf.d/http.conf

Copy and paste the following into the configuration file:

upstream httptest {
    server 192.168.30.131:80    weight=1;
    server 192.168.30.132:80    weight=1;
}

server {
    listen 80;
    server_name www.example.com;

    access_log /var/log/nginx/httptest;
    error_log /var/log/nginx/errortest;

    location / {
        index index.html;
        proxy_pass http://httptest;
    }
}
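Goal 2 above, surviving a back-end failure, relies on nginx's passive health checks: each server line in an upstream block also accepts max_fails and fail_timeout parameters (defaults 1 and 10s), and a peer that keeps failing is temporarily taken out of rotation. A tuned variant of the upstream block could look like this (the values here are illustrative, not part of the original experiment):

```nginx
upstream httptest {
    # after 3 failed attempts, skip the backend for 30 seconds
    server 192.168.30.131:80    weight=1 max_fails=3 fail_timeout=30s;
    server 192.168.30.132:80    weight=1 max_fails=3 fail_timeout=30s;
}
```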

The resulting configuration file looks like this:

[Figure 2: the completed http.conf]

Test whether the configuration file has any problems (nginx -t should report "syntax is ok" and "test is successful"):

nginx -t

[Figure 3: output of nginx -t]

Start the nginx service and open port 80 in the firewall:

systemctl restart nginx

firewall-cmd --add-port=80/tcp

firewall-cmd --add-port=80/tcp --permanent

 

In the first lines of the configuration file, the upstream block defines a named group of back-end servers used for reverse proxying; httptest is just an arbitrary name. The two server 192.168.30.13x lines inside it are the real server addresses — the actual website is served by them. Judging from the server_name line, Nginx will answer requests whose Host header is www.example.com. Let's test it: we need to run the following on Host2 and Host3 to generate the website content, and then see the effect.
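The weight=1 parameters control the default distribution policy, round-robin. As a rough Python sketch of weighted round-robin selection (this mimics nginx's "smooth" weighted variant; it is an illustration, not nginx's actual code):

```python
# Illustrative sketch (NOT nginx source) of smooth weighted round-robin,
# the policy nginx uses to pick a server from an upstream group.
def make_picker(servers):
    """servers: list of (address, weight) tuples, like the upstream block."""
    state = [{"addr": a, "weight": w, "current": 0} for a, w in servers]
    total = sum(w for _, w in servers)

    def pick():
        # every peer earns its weight; the leader serves and "pays" the total
        for s in state:
            s["current"] += s["weight"]
        best = max(state, key=lambda s: s["current"])
        best["current"] -= total
        return best["addr"]

    return pick

pick = make_picker([("192.168.30.131:80", 1), ("192.168.30.132:80", 1)])
order = [pick() for _ in range(4)]
print(order)  # with equal weights the two backends simply alternate
```

Changing one weight to 2 would make that backend win two picks out of every three, which is why the experiment keeps both weights equal.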

Run the following commands on Host2:

yum install httpd -y

echo lixiaohui > /var/www/html/index.html

systemctl restart httpd

firewall-cmd --add-port=80/tcp

firewall-cmd --add-port=80/tcp --permanent

Run the following commands on Host3:

yum install httpd -y

echo lixiaohui222222222 > /var/www/html/index.html

systemctl restart httpd

firewall-cmd --add-port=80/tcp

firewall-cmd --add-port=80/tcp --permanent

 

The commands above install Apache (httpd) on both servers and generate different page content on each, so we can tell them apart. Now let's try accessing the site through Nginx. Note that if you want to access it from Windows, you need to add an entry to the Windows hosts file.
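Since www.example.com is not in public DNS, the test machine needs a hosts entry pointing the name at the Nginx host. On Windows that file is C:\Windows\System32\drivers\etc\hosts (on Linux, /etc/hosts); add a line like:

```
192.168.30.130    www.example.com
```

After that, repeatedly refreshing http://www.example.com (or fetching it with curl) should alternate between the two pages, lixiaohui and lixiaohui222222222.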

 

    The experiment shows that as users access the site, Nginx spreads their requests across the two servers. This greatly increases the load capacity of the service, and it also shields users from the access interruptions that a single server's failure would otherwise cause.

Next we'll study TCP and UDP load balancing, so stay tuned~