Regarding the generation, use, socket programming verification, and weighted round-robin algorithm of Nginx

Source address: https://github.com/duchenlong/linux-text/tree/master/socket

Preface: The Origin of Nginx

Nginx exists mainly because demand kept growing.

In the beginning, deploying a service meant writing a server program, running it, and leaving it in the background. In that case the relationship between clients and the server is many-to-one.

Gradually, more and more people access the server. When many users access it at the same time, the server's load increases and it eventually becomes overwhelmed.

Starting from this many-to-one structure, can we upgrade again, so that other servers share the pressure brought by some of the users' traffic?

To solve the problem of excessive pressure on one server when many users access it at the same time, we scale horizontally and add a few more servers.

When multiple users send requests, the request-handling logic is spread across several servers, so no single server blocks on many requests; requests from multiple users can be processed in parallel, reducing the load on each server.

Intuitively, to implement this mode, each user would have to know the address of every server, giving a many-to-many structure. But this structure is complicated for users: just to access one application, they would need the URLs of all the servers, which is clearly impractical and insecure.

There is no problem that cannot be solved by adding another layer; if there is, add another layer.

As a result, the proxy-server model appeared: a user only needs to visit one unique address, and the proxy server is responsible for distributing these requests to different servers for processing.

In this way, the many-to-many structure is transformed into many-to-one between users and the proxy server, and one-to-many between the proxy server and the backend servers, which removes direct connections between users and the servers.

The intermediate proxy server must implement a reverse proxy: for each incoming request, the proxy server decides which backend server can handle it, forwards the request to that server, and after processing returns the result to the user through the proxy server.

In addition, it also needs load balancing, which is one of the functions the Nginx software provides.

Introduction to Nginx

Baidu Encyclopedia

Nginx (engine x) is a high-performance HTTP and reverse proxy web server, and also provides IMAP/POP3/SMTP services.

Nginx was developed by Igor Sysoev for Rambler.ru (Russian: Рамблер), the second-most-visited site in Russia. The first public version, 0.1.0, was released on October 4, 2004.

Its source code is released under a BSD-like license, and it is known for its stability, rich feature set, simple configuration, and low system-resource consumption. On June 1, 2011, nginx 1.0.4 was released.

Nginx is a lightweight web server / reverse-proxy server and email (IMAP/POP3) proxy server, released under a BSD-like license.

Features:

  1. Low memory footprint
  2. Strong concurrency

In fact, Nginx's concurrency performance is among the best of web servers of its type. Nginx users in mainland China include Baidu, JD, Sina, NetEase, Tencent, Taobao, and others.

The role of Nginx

  1. HTTP proxy, reverse proxy

Before introducing the reverse proxy, there is the concept of a forward proxy. When we visit foreign websites, access is relatively slow; the usual practice is to run a VPN on our own computer, and that VPN acts as a forward proxy server.

In other words, in some regions of our country direct access to foreign servers may be slow, while other regions reach them quickly. We can set up a relay server in such a region to access foreign websites on our behalf and return the response to us through that proxy, so we can reach those sites quickly.

A forward proxy is an active choice made by the client itself, while a reverse proxy sits on the server side. Take visiting a website in a browser as an analogy: after traveling over the network, our request arrives at a proxy server. Based on the processing capacity of the backend servers, the proxy forwards the request to a designated server; after processing, the result is returned to the proxy server and then back to us over the network.

  2. Load balancing

Nginx's load-balancing strategies fall into two groups: built-in strategies and extension strategies.

The built-in strategies mainly include:

  • Round robin (polling)

With the round-robin strategy, the backend servers handle requests in order: each server is assigned one request in turn, and after the last server has received one, assignment starts over from the first server.
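As a minimal sketch (the upstream name and server addresses here are made up for illustration), plain round robin is the default in nginx.conf and needs no extra directive:

```nginx
# hypothetical backends; round robin is the default strategy
upstream my_backend {
    server 192.168.1.101:8000;
    server 192.168.1.102:8000;
    server 192.168.1.103:8000;
}
```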

  • Weighted round robin

Weighted round robin addresses the fact that our backend servers may differ in processing capacity: a server with good performance might handle 3 requests in the time a weaker one handles 1.

With plain round robin, the servers with poor processing capacity are likely to become overloaded. Weighted round robin therefore assigns each server a weight, and subsequent requests are distributed according to those weights.
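The weighted version only adds a weight parameter to each server line (addresses and weights here are illustrative, not taken from the original screenshots):

```nginx
upstream my_backend {
    # a server with weight=3 receives roughly three times
    # as many requests as one with weight=1
    server 192.168.1.101:8000 weight=1;
    server 192.168.1.102:8000 weight=2;
    server 192.168.1.103:8000 weight=3;
}
```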

  • IP hash

Hash the client's IP address, and route that client's requests to a fixed server according to the hash result. In other words, each client is always served by the same backend server.
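In nginx.conf this is enabled with the ip_hash directive inside the upstream block (server addresses here are illustrative):

```nginx
upstream my_backend {
    ip_hash;    # requests from the same client IP go to the same server
    server 192.168.1.101:8000;
    server 192.168.1.102:8000;
}
```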

Installation

You can download the Nginx installation package from the official website.

Under Windows

After downloading and unzipping, the following directory appears.

To start the Nginx service, run the nginx.exe program from the command-line interface (double-clicking the icon directly has no visible effect).

Then enter localhost:80 or 127.0.0.1:80 in the browser to access it (the browser adds port 80 by default, so you do not need to type the port manually).

Under Linux

First download the corresponding installation package and transfer it to the Linux system with Xshell, then enter the following commands:

# unpack
tar -zxvf nginx-1.18.0.tar.gz
# enter the nginx-1.18.0 directory
cd nginx-1.18.0
# run the configure script
./configure
# build
make
# install (if permissions are insufficient, use sudo or the root user)
make install

When the configure script runs, the following errors may occur; just install the corresponding dependencies:

sudo yum -y install pcre-devel
sudo yum -y install zlib-devel

You can use the whereis command to check whether the installation succeeded:

[duchlong@localhost nginx-1.18.0]$ whereis nginx
nginx: /usr/local/nginx

To run Nginx manually, first enter the /usr/local/nginx directory, then enter the sbin directory and run the nginx program (add sudo if there is a permission problem).

Nginx startup

Before starting Nginx, note that its default port is 80, so you must make sure that port is open.

If running in a virtual machine, you can turn off the firewall, or open just the designated port:

# check the firewalld service status
systemctl status firewalld

# check the firewalld state
firewall-cmd --state

# start the firewalld service
service firewalld start

# restart the firewalld service
service firewalld restart

# stop the firewalld service
service firewalld stop

# list the firewall rules
firewall-cmd --list-all

# check whether a port is open
firewall-cmd --query-port=8080/tcp

# open port 80
firewall-cmd --permanent --add-port=80/tcp

# remove port 80
firewall-cmd --permanent --remove-port=80/tcp

# reload the firewall (required after changing the configuration)
firewall-cmd --reload

Parameter explanation

  1. firewall-cmd: the firewall management tool provided by Linux (firewalld)
  2. --permanent: makes the setting persistent
  3. --add-port: opens the given port
  4. --remove-port: closes the given port

On a cloud server, you instead need to configure the security group and open port 80 (or a custom port).

# enter Nginx's run directory
cd /usr/local/nginx/sbin

# start (add sudo if there is a permission problem)
./nginx # sudo ./nginx

# stop
./nginx -s stop

# graceful shutdown
./nginx -s quit

# reload the configuration file
./nginx -s reload

# list Nginx processes
ps aux | grep nginx

The Nginx configuration file lives in /usr/local/nginx/conf. In it we can modify parts of the Nginx configuration (the default port, the number of worker processes, the number of connections each can maintain, and so on).

After modifying the configuration file, you must reload it, otherwise the changes do not take effect.

Configuration file

Reference rookie tutorial

The Nginx configuration file is divided into three parts:

  1. Global block : directives that affect Nginx globally

Typically: the user and group that run the Nginx server, the path where the Nginx process PID is stored, the log path, configuration-file includes, the number of worker processes allowed, etc.

  2. events block : configuration affecting the Nginx server's network connections with users

For example: the maximum number of connections per worker process, which event-driven model handles connection requests, whether multiple network connections may be accepted at once, serialization of accepting connections, etc.

  3. http block : can nest multiple server blocks; configures proxying, caching, log definitions, and most of the functionality of third-party modules

For example: file includes, mime-type definitions, custom logs, whether to use sendfile to transfer files, connection timeouts, the number of requests per connection, etc.

Configuration of the server block


  • listen , the port Nginx listens on (80 by default)
  • server_name , the listening address (host name)
  • location + url , filters the requested URL; supports regular-expression matching
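A minimal server block using the three directives above might look like this (the names and paths are placeholders, not values from the original screenshot):

```nginx
server {
    listen      80;              # port Nginx listens on
    server_name localhost;       # listening address / host name

    # URL filtering; "/" matches every request path
    location / {
        root  html;              # serve files from the html directory
        index index.html;
    }
}
```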

Configure reverse proxy for http service


  • weight , the server's access weight
  • backup , marks a standby server
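Putting these pieces together, a sketch of such a reverse-proxy configuration (the upstream name and the backup server's port are illustrative; only ports 19998 and 19999 appear in the original text):

```nginx
upstream my_servers {
    server 127.0.0.1:19998 weight=1;   # first server
    server 127.0.0.1:19999 weight=2;   # second server, double weight
    server 127.0.0.1:20000 backup;     # standby: used only when all others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://my_servers;  # forward requests to the upstream group
    }
}
```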

Simple use of Nginx + verification

First, in the configuration file, configure the addresses of the reverse-proxied servers, set their weights, and add a backup server.

The expected result: the first visit goes to the first server (port 19998), the second and third visits go to the server on port 19999, the fourth visit goes back to the first server, and so on in rotation.

The servers are built with socket programming; you can refer to the earlier posts:
TCP protocol communication applet and function interface
http protocol to simply implement the demo of using ip to access html web pages

The screenshots (omitted here) record six consecutive visits, showing each request being distributed between the two servers, followed by the case after both servers are down (both server processes terminated), in which the backup server answers. As long as any other server is still up, the backup server never starts serving.

Finally, we can see that Nginx's weighted distribution is not the same as the order we expected. Nginx considers a problem: if the backend servers' weights are 1:2:3, naive weighted round robin distributes requests in the order ABBCCC, but an interleaved order such as CBACBC is better, because it avoids one server handling several requests in a row.

Nginx's weighted round-robin scheduling mechanism

Each backend server maintains a current value, which starts at 0. For each request:

  1. Each server's current value + its own weight = its new current value
  2. Pick the server whose current value is largest, then subtract the sum of all servers' weights from that server's value
  3. Repeat from step 1 for the next request, until the round ends

Simulating the weighted round-robin process with weights A:B:C = 1:2:3 and initial values 0:0:0:

Round   Current values (A, B, C)   Selected server   Values after selection
1       1, 2, 3                    C                 1, 2, -3
2       2, 4, 0                    B                 2, -2, 0
3       3, 0, 3                    A                 -3, 0, 3
4       -2, 2, 6                   C                 -2, 2, 0
5       -1, 4, 3                   B                 -1, -2, 3
6       0, 0, 6                    C                 0, 0, 0

Very cleverly, after one full round every server's value returns to 0, the same as the initial state.
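The steps above can be sketched in Python (a simulation of the smooth weighted round-robin idea, not Nginx's actual C implementation):

```python
def smooth_weighted_rr(weights, rounds):
    """Simulate smooth weighted round-robin scheduling.

    weights: dict mapping server name -> weight, e.g. {"A": 1, "B": 2, "C": 3}.
    Returns the list of servers chosen over `rounds` requests.
    """
    current = {name: 0 for name in weights}   # every server starts at 0
    total = sum(weights.values())
    chosen_order = []
    for _ in range(rounds):
        # step 1: add each server's own weight to its current value
        for name, w in weights.items():
            current[name] += w
        # step 2: pick the largest current value (ties go to the server
        # listed first) and subtract the sum of all weights from it
        chosen = max(current, key=current.get)
        current[chosen] -= total
        chosen_order.append(chosen)
    return chosen_order

# weights A:B:C = 1:2:3 reproduce the table: C, B, A, C, B, C
print(smooth_weighted_rr({"A": 1, "B": 2, "C": 3}, 6))
```

Because all current values return to 0 after one full round, the sequence simply repeats for subsequent rounds.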

Origin blog.csdn.net/duchenlong/article/details/113956980