Nginx configuration file and reverse proxy, load balancing, dynamic and static separation

nginx configuration file location

/usr/local/nginx/conf/nginx.conf

Nginx configuration file composition

(1) Global block

Everything from the start of the configuration file up to the events block. It mainly sets directives that affect the operation of the nginx server as a whole.
For example:
worker_processes 1; — the larger the value, the more worker processes are started and the more concurrent requests the server can handle.
error_log — the path where the nginx error log is stored.

(2) events block

The directives in the events block mainly affect the network connections between the nginx server and its clients.
For example:
worker_connections 1024; — the maximum number of connections each worker process supports.
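Taken together, the top of nginx.conf might be sketched like this (the values and the log path are illustrative):

```nginx
# Global block: affects the server as a whole
worker_processes  1;                          # number of worker processes
error_log  /usr/local/nginx/logs/error.log;   # error log storage path

# events block: affects client connections
events {
    worker_connections  1024;   # max connections per worker process
}
```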

(3) http block

The http block is divided into an http global block and server blocks.
http global block:
include mime.types; — includes an external file (mime.types maps a large number of media types)
include vhosts/*.conf; — includes external files (imports every configuration file ending in .conf under the vhosts directory)

Server block:
listen 80; — listen on port 80
location block:
root /web/abc; — fetch the requested resources from the specified directory
index index.html; — the default file to look for under the path above
The server block is the key part: it holds the configuration for individual domain names and projects, and it is where reverse proxying is set up.
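A minimal server block combining the directives above might look like this (the directory path is illustrative):

```nginx
server {
    listen  80;                # listen on port 80
    server_name  localhost;

    location / {
        root   /web/abc;       # serve files from this directory
        index  index.html;     # default file when a directory is requested
    }
}
```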

Reverse proxy case

In the server block, set server_name to the machine's actual IP address, then add a proxy_pass directive inside the location, giving the IP and port of the backend server to proxy to. When a client sends a request to port 80 of the nginx server, the request is proxied straight through to the designated backend server.

# Reverse-proxy port 80 on this host to port 8080 on the same host
server {
	listen	80;
	server_name  localhost;

	location / {
		proxy_pass http://127.0.0.1:8080;
	}
}

Specific Nginx location path mapping

# Priority (the more precise the match, the higher the priority)
(location =)  >  (location /xxx/zzz/vvv)  >  (location ^~)  >  (location ~,~*)  >  (location /)

#1. Exact match
location = / {
	# Exact match; no extra characters may follow the host name
}

#2. Generic prefix match
location /xxx {
	# Matches every path starting with /xxx
}

#3. Regular-expression match
location ~ /xxx {
	# Matches every path starting with /xxx (case-sensitive regex)
}

#4. Prefix match that skips regex checks
location ^~ /xxx {
	# Matches every path starting with /xxx; when it matches, regex locations are not checked
}

#5. Case-insensitive regex match
location ~* \.(gif|jpg|png)$ {
	# Matches every path ending in .gif, .jpg or .png
}
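To illustrate the priority rules above, here is a hedged sketch of how some sample URIs would be dispatched (the paths are made up for the example):

```nginx
server {
    listen 80;

    location = / {
        # only the request "/" exactly
    }
    location ^~ /static/ {
        # "/static/a.png" lands here; the ^~ prefix match
        # suppresses the regex locations below
    }
    location ~* \.png$ {
        # "/logo.png" lands here (no ^~ prefix matched it first)
    }
    location / {
        # everything else, e.g. "/about"
    }
}
```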

Load balancing case

Add an upstream section to the http global block, list all the servers that need to be load balanced, and add any rules (weights, ordering, and so on).
Then, in the server block, point proxy_pass at the name of the upstream group, as shown below; that server then acts as the load balancer, and each request is distributed evenly across the backend servers.
For further detail, see articles such as "Nginx in-depth description of the upstream distribution methods".
As a concrete case, docker is used here to start two nginx servers and one tomcat server mapped to different ports, with one of the nginx servers acting as the load balancer.

# Download and run an nginx server, mapping host port 8011 to container port 80
$ docker run -d -p 8011:80 --name some-nginx daocloud.io/library/nginx:latest
# Run another nginx container, mapping host port 8012 to container port 80; 4bb46517cac3 is the image id of the nginx image pulled above
$ docker run -d -p 8012:80 --name some-nginx1 4bb46517cac3
# Download and run a tomcat server, mapping host port 8082 to container port 8080. Note: tomcat's default page also requests js and css files, so a single visit hits the backend several times and skews a round-robin test. It is better to add a plain index.html under tomcat for testing, so each visit generates exactly one request and the effect is easy to observe.
$ docker run -d -p 8082:8080 --name t8 daocloud.io/library/tomcat:8.5.15-jre8
# Prepare the load-balancing config file locally first
$ vi default.conf
# default.conf nginx configuration file
upstream myServer{
        server 10.0.2.15:8082;  # replace with your own IP
        server 10.0.2.15:8012;  # replace with your own IP
}
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    location / {
#        root   /usr/share/nginx/html;
        proxy_pass http://myServer;
        index  index.html index.htm;
    }
}
# Copy the edited config file into the nginx docker container (some-nginx)
$ docker cp ./default.conf some-nginx:/etc/nginx/conf.d/default.conf
# Restart the container
$ docker restart some-nginx

At this point the server configuration is complete, and the result can be verified by visiting the load-balancing server in a browser.

Load balancing strategy

By default, nginx provides three load balancing strategies:
1. Round robin (polling): client requests are distributed evenly across the servers.
2. Weight: client requests are distributed according to each server's weight value; servers with higher weights receive more requests.
3. ip_hash: requests are routed based on the client's IP address, so requests from the same client IP are always sent to the same server.

Round robin (polling)

The round-robin case is the one shown above; it is very simple. A brief configuration outline follows.

# IPs and ports to round-robin over
upstream my_server{
    server ip:port;
    server ip:port;
}
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    location / {
        proxy_pass http://my_server/;
    }
}
Weights

Weights are set by appending weight= to each server line in the upstream block, as shown below. After applying this configuration and restarting the docker-based load-balancing nginx server, roughly five requests go to the tomcat server for every one request that goes to the nginx server.

upstream myServer{
        server 10.0.2.15:8082 weight=10;  # replace with your own IP
        server 10.0.2.15:8012 weight=2;   # replace with your own IP
        #server 10.0.2.15:8011;
}
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        proxy_pass http://myServer;
        #root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}
ip_hash

ip_hash is enabled by adding ip_hash; as the first line of the upstream block. Each client IP is then pinned to one server. After restarting the server and making requests, you will find that every request from the same client IP lands on the same backend server.

upstream myServer{
        ip_hash;
        server 10.0.2.15:8082 weight=10;  # replace with your own IP
        server 10.0.2.15:8012 weight=2;   # replace with your own IP
        #server 10.0.2.15:8011;
}
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        proxy_pass http://myServer;
        #root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
}

Dynamic and static separation

Nginx's concurrency formula:
worker_processes * worker_connections / 4 (or / 2) = nginx's maximum concurrency
Divide by 4 for dynamic resources, by 2 for static resources.
Through dynamic and static separation, nginx raises its concurrency capacity and responds to users faster.
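A worked example of the formula, assuming illustrative values:

```nginx
worker_processes  4;
events {
    worker_connections  1024;
}
# Static resources:  4 * 1024 / 2 = 2048 concurrent requests
# Dynamic resources: 4 * 1024 / 4 = 1024 concurrent requests
```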

Dynamic resource proxy

Not much to add here; this is what the earlier examples already did — dynamic requests are simply proxied to a backend.

# Configuration:
location / {
  proxy_pass url;   # address of the backend serving dynamic content
}
Static resource proxy
# nginx configuration
server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

	# Create an html folder under /web/data and put an index.html file in it
    location /html {
        root  /web/data;
        index  index.html;
    }

	# Create an img folder under /web/data and put a 123.jpg file in it
    location /img {
        root /web/data;
        autoindex on;   # show the full contents of the static directory as a listing
    }
}
<!-- index.html content: the image is embedded directly in the page, to check that both requests succeed -->
<h1>test img</h1>
<img src="http://192.168.1.113:8013/img/123.jpg"/>

Both succeed: the static html page is served normally, and the image is served normally as well.

The effect of "autoindex on" is that visiting the /img/ directory returns its contents as a file listing.


Origin blog.csdn.net/qq_15915293/article/details/107945000