Nginx-Scenario Practice

1. Nginx as a static resource web service

1. Dynamic resources and static resources

If the page requested by the client is a static web page, the server responds with the page content directly. If the client requests a dynamic web page, the server must first render the dynamic page into static content, and then return the converted result to the client.

Several types of static resources

  • Browser-rendered: HTML, CSS, JavaScript
  • Images: JPEG, GIF, PNG...
  • Video: FLV, MPEG...
  • Files: TXT and any other downloadable file

2. CDN (Content Delivery Network)

The basic idea is to avoid, as much as possible, the bottlenecks and links on the Internet that may affect the speed and stability of data transmission, so that content is delivered faster and more reliably. By placing node servers throughout the network, a CDN forms a layer of intelligent virtual network on top of the existing Internet. Based on comprehensive, real-time information such as network traffic, the connection and load status of each node, the distance to the user, and response time, the CDN redirects the user's request to the service node closest to the user. The goal is to let users obtain the desired content from a nearby node, relieve Internet congestion, and improve the response speed when users visit a website.

3. Configuration syntax

  1. sendfile (efficient file reading)
    • Configuration syntax: sendfile on|off;
    • Default: sendfile off;
    • Context:http,server,location,if in location
  2. tcp_nopush (when sendfile is enabled, improves the efficiency of network packet transmission)
    • Configuration syntax: tcp_nopush on|off;
    • Default: tcp_nopush off;
    • Context:http,server,location
  3. tcp_nodelay (under keepalive connections, improves the real-time delivery of network packets)
    • Configuration syntax: tcp_nodelay on|off;
    • Default: tcp_nodelay on;
    • Context:http,server,location
  4. gzip (compression)
    • Configuration syntax: gzip on|off;
    • Default: gzip off;
    • Context:http,server,location,if in location
  5. gzip_comp_level (compression level)
    • Configuration syntax: gzip_comp_level level;
    • Default: gzip_comp_level 1;
    • Context:http,server,location
  6. gzip_http_version (minimum HTTP version required for gzip)
    • Configuration syntax: gzip_http_version 1.0|1.1;
    • Default: gzip_http_version 1.1;
    • Context:http,server,location
  7. gzip_static (serve pre-compressed .gz files)
    • Configuration syntax: gzip_static on|off|always;
    • Default: gzip_static off;
    • Context:http,server,location
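Putting the directives above together, a minimal sketch of a static resource server could look like the following (the server_name, root paths, and file types are assumptions for illustration, not from the original text):

server {
    listen 80;
    server_name static.example.com;    # hypothetical host name
    sendfile on;                       # read files efficiently via sendfile
    tcp_nopush on;                     # with sendfile on, send fuller packets

    location ~ .*\.(jpg|gif|png)$ {
        gzip on;                       # compress responses
        gzip_http_version 1.1;
        gzip_comp_level 2;
        gzip_types image/jpeg image/gif image/png;
        root /opt/app/static;          # hypothetical directory holding the images
    }

    location ~ .*\.(txt|xml)$ {
        gzip_static on;                # serve pre-compressed .gz files if present
        root /opt/app/static;
    }
}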

4. Browser cache

The caching mechanism is defined by the HTTP protocol (for example: Expires, Cache-Control, etc.)

  • Browser without cache:
    • Browser request → no cache → request the web server → response and negotiation → render
  • Browser with cache:
    • Browser request → cache exists → check whether it has expired → render
  • Expiration check mechanism
    | Check method | Corresponding header |
    | :-------- | :---------:|
    | Expiration check | Expires, Cache-Control (max-age) |
    | ETag validation | ETag |
    | Last-Modified validation | Last-Modified |

1st request: (figure omitted)
2nd request: (figure omitted)

  • expires (adds the Cache-Control and Expires headers to the response)
    • Configuration syntax: expires [modified] time; expires epoch|max|off;
    • Default: expires off;
    • Context:http,server,location,if in location
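For example, expires can be applied per file type so that the browser caches static assets. A minimal sketch (the file extensions and cache times below are illustrative assumptions):

location ~ .*\.(js|css)$ {
    expires 1h;        # browsers may reuse JS/CSS for one hour
}

location ~ .*\.(jpg|gif|png)$ {
    expires 24h;       # images are cached for a day
}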

5. Cross-site access

How does Nginx enable cross-origin access? By adding the Access-Control-Allow-Origin response header.

  • add_header
    • Configuration syntax: add_header name value [always];
    • Default: none
    • Context:http,server,location,if in location

      The name can be Access-Control-Allow-Origin, Access-Control-Allow-Methods, and so on. For example:
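A minimal sketch allowing cross-origin requests from one site (the origin http://www.example.com and the matched file types are assumptions for illustration):

location ~ .*\.(htm|html)$ {
    add_header Access-Control-Allow-Origin http://www.example.com;     # hypothetical allowed origin
    add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
}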

6. Hotlink protection (anti-leech)

Anti-leech configuration is based on the http_referer module

  • Configuration syntax: valid_referers none|blocked|server_names|string...;
  • Default: none
  • Context:server,location

    valid_referers none blocked IP;
    if ($invalid_referer) {
        return 403;
    }

    Reminder: you can use curl to test the anti-leech configuration (curl -e "http://www.baidu.com" -I IP)

2. Nginx as a proxy service

  • Forward proxy
    • The proxied object is the client (for example, to reach the external network, the client sets the proxy server address and can then access any website through it; see the sketch after this list)
  • Reverse proxy
    • The proxied object is the server (the client does not need to care which backend server it is actually accessing; the reverse proxy sits on the server side and handles the request on its behalf)
  • proxy_pass
    • Configuration syntax: proxy_pass URL;
    • Default: none
    • Context:location,if in location,limit_except
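The configuration example further below shows a reverse proxy. For contrast, a minimal forward-proxy sketch for plain HTTP (the listening port and DNS resolver are assumptions for illustration):

server {
    listen 8888;                                   # hypothetical port that clients configure as their proxy
    resolver 8.8.8.8;                              # nginx needs a resolver to look up arbitrary hosts
    location / {
        proxy_pass http://$http_host$request_uri;  # forward to whatever host the client requested
    }
}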

Some additional proxy-related directives:

  • proxy_buffering (buffering)
    • Configuration syntax: proxy_buffering on|off;
    • Default: proxy_buffering on;
    • Context:http,server,location
    • Related: proxy_buffer_size, proxy_buffers, proxy_busy_buffers_size
  • proxy_redirect (rewrite the Location header of redirects)
    • Configuration syntax: proxy_redirect default; proxy_redirect off; proxy_redirect redirect replacement;
    • Default: proxy_redirect default;
    • Context:http,server,location
  • proxy_set_header (request headers passed to the backend)
    • Configuration syntax: proxy_set_header field value;
    • Default: proxy_set_header Host $proxy_host; proxy_set_header Connection close;
    • Context:http,server,location
    • Related: proxy_hide_header, proxy_set_body
  • proxy_connect_timeout (timeout for establishing a connection to the backend)
    • Configuration syntax: proxy_connect_timeout time;
    • Default: proxy_connect_timeout 60s;
    • Context:http,server,location
    • Related: proxy_read_timeout, proxy_send_timeout

      Example in configuration file:

# forward requests to the local backend; rewrite Location headers using the default rule
proxy_pass http://127.0.0.1:8080;
proxy_redirect default;

# pass the original Host header and the real client IP to the backend
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;

# timeouts (seconds) for connecting to, sending to, and reading from the backend
proxy_connect_timeout 30;
proxy_send_timeout 60;
proxy_read_timeout 60;

# buffer the backend response before sending it to the client
proxy_buffer_size 32k;
proxy_buffering on;
proxy_buffers 4 128k;
proxy_busy_buffers_size 256k;
proxy_max_temp_file_size 256k;

3. Nginx as a load balancing service

Load balancing : Based on the existing network structure, it provides a cheap, effective and transparent method to expand the bandwidth of network devices and servers, increase throughput, strengthen network data processing capabilities, and improve network flexibility and availability.
Load balancing, the English name is Load Balance, means that it is allocated to multiple operation units for execution, such as Web servers, FTP servers, enterprise key application servers and other mission-critical servers, so as to complete work tasks together.

  • upstream
    • Configuration syntax: upstream name {...}
    • Default: none
    • Context:http

Simple configuration example:

upstream ronaldo {
        server ip:port [param];
        server ip:port [param];
        server ip:port [param];
}
server {
    location / {
        proxy_pass http://ronaldo;
    }
}

param parameter explanation:

| param | meaning |
| :-------- | :---------:|
| down | The server temporarily does not participate in load balancing |
| weight=num | Weight; the larger num is, the more often the server is selected |
| backup | Reserved backup server |
| max_fails | Maximum number of failed requests allowed |
| fail_timeout | How long the server is paused after max_fails failures (default 10s) |
| max_conns | Limit on the maximum number of accepted connections |
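A sketch that combines several of these parameters (the addresses and values are made-up assumptions for illustration):

upstream ronaldo {
        server 192.168.1.11:8080 weight=5;                      # receives more requests than the others
        server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;  # paused for 30s after 3 failed requests
        server 192.168.1.13:8080 backup;                        # used only when the other servers are down
}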

Scheduling Algorithm:

| algorithm | meaning |
| :-------- | :---------:|
| round robin (default) | Requests are distributed to the backend servers one by one, in order |
| weighted round robin | The larger the weight value, the higher the probability of receiving requests |
| ip_hash | Requests are assigned according to the hash of the client IP, so the same IP always reaches the same backend server |
| least_conn | The request goes to the server with the fewest active connections |
| url_hash | Requests are assigned according to the hash of the requested URL, so each URL goes to the same backend server |
| hash key value | Hash on a custom key |

  • ip_hash:
    • Just add **ip_hash;** inside the upstream block (see the sketch after this list)
    • Drawback: if requests pass through another proxy first, $remote_addr is not the user's real IP
  • url_hash (available since version 1.7.2):
    • Configuration syntax: hash key [consistent];
    • Default: none
    • Context:upstream

      The key can be $request_uri, so requests are hashed by URL
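A brief sketch of both hash-based methods (the addresses are assumptions and the upstream names are hypothetical):

upstream ronaldo_ip {
        ip_hash;                       # requests from the same client IP go to the same backend
        server 192.168.1.11:8080;
        server 192.168.1.12:8080;
}

upstream ronaldo_url {
        hash $request_uri consistent;  # requests for the same URL go to the same backend (nginx >= 1.7.2)
        server 192.168.1.11:8080;
        server 192.168.1.12:8080;
}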

4. Nginx as a cache service

1. Types of cache

  • Server-side cache. Example: memcached, redis
  • Proxy cache. Example: Nginx caching data returned by the backend server
  • Client (browser) cache.

    Client -> Nginx: 1. request data a
    Nginx -> Server: 2. request data a
    Server -> Nginx: 3. return data a
    Nginx -> Client: 4. return data a
    Client -> Nginx: 1. request data a (second time)
    Nginx -> Client: 2. return data a directly (from the cache)

2. Common cache configuration

  • proxy_cache_path
    • Configuration syntax: proxy_cache_path path [levels=levels] [use_temp_path=on|off] keys_zone=name:size [inactive=time] [max_size=size] ...
    • Default: none
    • Context:http
  • proxy_cache
    • Configuration syntax: proxy_cache zone | off;
    • Default: proxy_cache off;
    • Context:http,server,location
  • proxy_cache_valid (cache expiration period)
    • Configuration syntax: proxy_cache_valid [code ...] time;
    • Default: none
    • Context:http,server,location
  • proxy_cache_key (cache key)
    • Configuration syntax: proxy_cache_key string;
    • Default: proxy_cache_key $scheme$proxy_host$request_uri;
    • Context:http,server,location

Common configuration:

proxy_cache_path cache_path levels=1:2 keys_zone=key_name:10m max_size=10g inactive=60m use_temp_path=off;

server {
    location / {
        proxy_pass http://ronaldo;
        proxy_cache key_name;
        proxy_cache_valid 200 304 12h;
        proxy_cache_valid any 10m;
        proxy_cache_key $host$uri$is_args$args;
        add_header Nginx-Cache "$upstream_cache_status";

        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    }
}

3. Clearing the specified cache

  • rm -rf cache directory contents
  • Third-party extension module: ngx_cache_purge

4. How to keep certain pages from being cached

  • proxy_no_cache
    • Configuration syntax: proxy_no_cache string ...;
    • Default: none
    • Context:http,server,location

Simple example

if ($request_uri ~ ^/(url_3|login|register|password\/reset)) {
    set $cookie_nocache 1;
}

location / {
    proxy_no_cache $cookie_nocache;
}

5. Requesting large files in slices

  • slice
    • Configuration syntax: slice size;
    • Default: slice 0;
    • Context:http,server,location

Advantages: the data received by each sub-request forms a separate cache file, so if one sub-request fails, the other sub-requests are not affected.
Disadvantages: when the file is large or the slice size is small, it may exhaust file descriptors and similar resources.
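A minimal sketch of slicing a large file into cached byte ranges (the slice size is an assumption; key_name and the ronaldo upstream reuse the names from the cache example above):

location / {
    slice 1m;                                    # split the file into 1 MB sub-requests
    proxy_set_header Range $slice_range;         # request only the current byte range from the upstream
    proxy_cache key_name;                        # cache zone defined by proxy_cache_path above
    proxy_cache_key $uri$is_args$args$slice_range;
    proxy_cache_valid 200 206 1h;                # cache both full (200) and partial (206) responses
    proxy_pass http://ronaldo;
}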
