Front-end nginx deployment

1. What is nginx?

nginx official introduction:

"Nginx is a lightweight HTTP server that uses an event-driven asynchronous non-blocking processing framework, which gives it excellent IO performance and is often used for server-side reverse proxy and load balancing."

Advantages of nginx

  • Supports massive concurrency: nginx uses epoll-based I/O multiplexing. According to official tests, nginx can support 50,000 concurrent connections; in real production environments it commonly handles 20,000 to 40,000.
  • Low memory consumption
  • Free for commercial use
  • Simple configuration files, plus many other features such as reverse proxying, grayscale (canary) releases, and load balancing.

2. Installation

  • On CentOS you can install it directly with yum, which is very convenient.
  • On Windows, download the official zip package and run nginx.exe.
  • Install via Docker (recommended).
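
The Docker route can be sketched with the official image (the container name `mynginx` is an arbitrary choice):

```shell
# Pull the official image
docker pull nginx
# Run it, mapping container port 80 to the host
docker run -d --name mynginx -p 80:80 nginx
# Alternatively, mount your own config over the default one:
#   docker run -d --name mynginx -p 80:80 \
#     -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" nginx
```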

3. Configuration file

cd /conf/nginx				# enter the configuration directory
vi nginx.conf				# edit the configuration file
docker logs <container-name>		# view the error log
docker restart <container-name>		# restart after changing the configuration

1. Introduction to the structure of configuration files

To give everyone a simple overview, here is a brief description of the configuration file structure:

worker_processes  1;                        # number of worker processes
events {                                    # events block
    worker_connections  1024;               # maximum connections per worker process
}
http {                                      # http block
    include       mime.types;               # MIME type definitions shipped with nginx
    default_type  application/octet-stream; # default MIME type
    sendfile        on;                     # enable efficient file transfer mode
    keepalive_timeout  65;                  # keep-alive timeout
    server {                                # first server block: an independent virtual host
        listen       80;                    # port to serve on, default 80
        server_name  localhost;             # host/domain name to serve
        location / {                        # first location block
            root   html;                    # site root, relative to the nginx install directory
            index  index.html index.htm;    # default index files, separated by spaces
        }                                   # end of first location block
        error_page   500 502 503 504  /50x.html;   # respond with 50x.html for these status codes
        location = /50x.html {              # location block serving 50x.html
            root   html;                    # site directory for this location
        }
    }
    ......
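
After editing nginx.conf, it is worth validating and reloading rather than blindly restarting (when using Docker, run these inside the container):

```shell
# Check the configuration file for syntax errors
nginx -t
# Reload the configuration without dropping existing connections
nginx -s reload
```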

2. location matching

    # Priority 1: exact match of the root path
    location = / {
        return 400;
    }

    # Priority 2: prefix match; paths starting with /av match here first (case-sensitive)
    location ^~ /av {
        root /data/av/;
    }

    # Priority 3: case-sensitive regex match for paths containing /media
    location ~ /media {
        alias /data/static/;
    }

    # Priority 4: case-insensitive regex match; all *.jpg|gif|png|js|css requests land here
    location ~* .*\.(jpg|gif|png|js|css)$ {
        root  /data/av/;
    }

    # Lowest priority: generic prefix match
    location / {
        return 403;
    }

4. nginx reverse proxy

1. Forward proxy

A forward proxy "acts on behalf of the client". It is a server that sits between the client and the origin server: to fetch content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the response to the client. The client must be explicitly configured to use a forward proxy. Purposes of a forward proxy:

  • Access otherwise unreachable resources, such as Google
  • Cache content to speed up access to resources
  • Authorize and authenticate clients going online
  • Record user access (online behavior management) and hide client information from the outside

2. Reverse proxy

A reverse proxy "acts on behalf of the server" and is mainly used with distributed deployments of server clusters; it hides the servers' information. The role of a reverse proxy:

  • To secure the internal network, the reverse proxy usually serves as the public access point while the web servers stay on the internal network.
  • Load balancing: the reverse proxy server distributes the site's load.

3. Load balancing

The requests that different clients send to the Nginx reverse proxy server constitute what we call the load. The rules by which these requests are distributed to different back-end servers are the balancing rules; distributing the requests a server receives according to such rules is therefore called load balancing.
Load balancing comes in two flavors, hardware and software; here we discuss software load balancing, and readers interested in hardware load balancing can research it separately. Common load-balancing algorithms:

  • Round robin (the default), weighted round robin, ip_hash
  • Plug-in algorithms (fair, url_hash); url_hash is similar to ip_hash, one hashing the URL and the other the client IP, so it is not covered in detail here.

Default round robin

Requests are assigned to the back-end servers one by one, in order. If a back-end server goes down, it is automatically taken out of rotation.

# constPolling is the name of the upstream group used for load balancing
upstream constPolling {
    server localhost:10001;
    server localhost:10002;
}
server {
    listen 10000;
    server_name localhost;
    location / {
        proxy_pass http://constPolling;   # forward to the constPolling upstream
        proxy_redirect default;
    }
}
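
Real deployments usually also tune how failed back ends are detected; `max_fails` and `fail_timeout` are the standard upstream server parameters for this (a sketch, ports as in the example above):

```nginx
upstream constPolling {
    # mark a server unavailable after 3 failures within 30 seconds
    server localhost:10001 max_fails=3 fail_timeout=30s;
    server localhost:10002 max_fails=3 fail_timeout=30s;
    # a server can also be taken out of rotation explicitly
    # server localhost:10003 down;
}
```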

weighted round robin

The larger a server's weight, the larger its share of requests and the higher the probability it is selected. This is mainly used when back-end servers have unequal performance, or in master/slave setups where different weights make fuller use of the stronger host's resources.

# constPolling is the name of the upstream group used for load balancing
upstream constPolling {
    server localhost:10001 weight=1;
    server localhost:10002 weight=2;
}
server {
    listen 10000;
    server_name localhost;
    location / {
        proxy_pass http://constPolling;   # forward to the constPolling upstream
        proxy_redirect default;
    }
}

The greater the weight, the higher the probability of being selected. With the weights above, the two servers receive roughly 33.33% and 66.66% of the requests, for example:
localhost:10001, localhost:10002, localhost:10002, localhost:10001, localhost:10002, localhost:10002

ip_hash
Each request is assigned according to a hash of the client's IP, so every visitor is pinned to a fixed back-end server. Configured as follows (ip_hash can be combined with weight), this effectively solves the session-sharing problem of dynamic pages.

upstream constPolling {
    ip_hash;
    server localhost:10001 weight=1;
    server localhost:10002 weight=2;
}

fair

A load-balancing algorithm I personally like to use: fair balances load intelligently based on page size and loading time, preferring back ends with short response times.

It requires installing the third-party upstream_fair module; see its installation tutorial.
Whichever server responds faster gets the next request.

upstream constPolling {
    fair;
    server localhost:10001;
    server localhost:10002;
}
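
If building the third-party upstream_fair module is not an option, nginx's built-in least_conn directive is a related alternative: it sends each request to the server with the fewest active connections (a sketch using the same ports as above):

```nginx
upstream constPolling {
    least_conn;
    server localhost:10001;
    server localhost:10002;
}
```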

5. nginx error page and Gzip compression configuration

1. Error page configuration

When the address we access does not exist, we can respond according to the HTTP status code. Let's take 404 as an example.
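The original screenshot is unavailable; a minimal sketch of a custom 404 page configuration looks like this (the page path is an assumption):

```nginx
server {
    listen 80;
    server_name localhost;
    # serve a custom page for 404 responses
    error_page 404 /404.html;
    location = /404.html {
        root html;        # directory containing 404.html
        internal;         # only reachable via internal redirects
    }
}
```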

2. Gzip compression

Gzip is a web-page compression technique. After gzip compression a page can shrink to 30% of its original size or even less; smaller pages give users a faster, better browsing experience. Gzip compression needs support from both the server and the browser.
When the browser supports gzip, the request carries the header Accept-Encoding: gzip. Nginx then sends the gzip-compressed content and adds Content-Encoding: gzip to the response headers, declaring that the body is gzipped and telling the browser to decompress it before rendering. If the project must run on IE or other low-compatibility browsers, first check whether the browser supports gzip.

server {
    listen 12089;
    index index.php index.html;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /var/www/html/gzip;

    # enable gzip compression
    gzip on;
    # lowest HTTP request version to compress
    gzip_http_version 1.0;
    # MIME types to compress
    gzip_types text/css text/javascript application/javascript image/png image/jpeg image/gif;

    location / {
        index index.html index.htm index.php;
        autoindex off;
    }
}

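The size reduction can be sensed even without a server: running gzip over repetitive HTML shrinks it dramatically. A minimal command-line sketch (filenames are arbitrary):

```shell
# Generate a repetitive sample page, then compare raw vs gzip-compressed size.
printf '<p>hello world</p>\n%.0s' $(seq 1 500) > page.html
gzip -c page.html > page.html.gz
raw=$(wc -c < page.html)
gz=$(wc -c < page.html.gz)
echo "raw=${raw} bytes, gzipped=${gz} bytes"
```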

6. Comprehensive usage scenarios of nginx

  1. One domain name serving different projects under different directories.
    During development there is a common scenario: a project has several subsystems that must be reached through the same domain name but under different directories. The same mechanism is used for A/B testing and grayscale releases.
    For example:
    visiting a.com/a/*** accesses system a;
    visiting a.com/b/*** accesses system b.

  2. Automatically serving the PC or mobile page depending on the client.

  3. Restrict access to Google Chrome only

  4. Fixing the 404-on-refresh problem of front-end single-page applications.
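
The original screenshots for these four scenarios are unavailable; below is a hedged sketch of a typical configuration for each. Domain names, ports, and directories are placeholders, and the user-agent regexes are deliberately simplistic (for example, "chrome" also matches Chrome-based browsers):

```nginx
# 1. Same domain, different directories map to different projects
server {
    listen 80;
    server_name a.com;
    location /a/ { alias /var/www/project-a/; index index.html; }
    location /b/ { alias /var/www/project-b/; index index.html; }
}

# 2. Redirect mobile user agents to a mobile site
server {
    listen 80;
    server_name example.com;
    location / {
        if ($http_user_agent ~* "(android|iphone|ipad|mobile)") {
            rewrite ^ http://m.example.com$request_uri permanent;
        }
        root /var/www/pc;
    }
}

# 3. Allow access from Google Chrome only
server {
    listen 80;
    server_name chrome-only.example.com;
    location / {
        if ($http_user_agent !~* "chrome") {
            return 403;
        }
        root /var/www/html;
    }
}

# 4. SPA refresh 404 fix: fall back to index.html
server {
    listen 80;
    server_name spa.example.com;
    location / {
        root /var/www/spa;
        # let client-side routing handle unknown paths
        try_files $uri $uri/ /index.html;
    }
}
```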

7. Commonly used global variables

variable meaning
$args The parameters in the request line; identical to $query_string.
$content_length The Content-Length field in the request header.
$content_type The Content-Type field in the request header.
$document_root The value of the root directive for the current request.
$host The Host request header if present, otherwise the server name.
$http_user_agent The client's user-agent information.
$http_cookie The client's cookie information.
$limit_rate Limits the connection's response rate.
$request_method The client's request method, usually GET or POST.
$remote_addr The client's IP address.
$remote_port The client's port.
$remote_user The username authenticated by the Auth Basic module.
$request_filename The file path of the current request, derived from the root or alias directive and the request URI.
$scheme The request scheme, http or https.
$server_protocol The protocol of the request, usually HTTP/1.0 or HTTP/1.1.
$server_addr The server's address; determined after a system call completes.
$server_name The name of the server.
$server_port The port on which the request arrived at the server.
$request_uri The original URI including request parameters but not the host name, e.g. "/foo/bar.php?arg=baz".
$uri The current URI without request parameters or host name, e.g. "/foo/bar.html".
$document_uri Same as $uri.
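
A sketch of how these variables are typically used, in a custom access-log format and a small debug endpoint (log file path and endpoint name are arbitrary):

```nginx
http {
    # custom log line built from the variables above
    log_format vars '$remote_addr "$request_method $request_uri" '
                    'host=$host scheme=$scheme ua="$http_user_agent"';
    server {
        listen 80;
        access_log /var/log/nginx/vars.log vars;
        # debug endpoint that echoes a few variables back to the client
        location = /debug {
            default_type text/plain;
            return 200 "uri=$uri args=$args addr=$remote_addr\n";
        }
    }
}
```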


Origin blog.csdn.net/qq_44436509/article/details/129670933