A practical, minimalist Nginx tutorial covering common scenarios

Overview

What is Nginx?

Nginx ("engine x") is a lightweight web server, reverse proxy server, and email (IMAP/POP3) proxy server.

What is a reverse proxy?

A reverse proxy is a proxy server that accepts connection requests from the internet, forwards them to a server on the internal network, and returns the server's response to the client that made the request. To the outside world, the proxy server appears to be the server itself.


Usage

Using nginx is fairly simple; only a few commands are needed.

The commonly used commands are as follows:

nginx -s stop       Quickly shut down Nginx; related information may not be saved, and the web service is terminated immediately.
nginx -s quit       Shut down Nginx gracefully, saving related information and finishing the web service in an orderly way.
nginx -s reload     Reload the configuration; used when the Nginx configuration has changed.
nginx -s reopen     Reopen the log files.
nginx -c filename   Start Nginx with the specified configuration file instead of the default one.
nginx -t            Do not run; only test the configuration file. Nginx checks the syntax of the configuration file and tries to open the files referenced in it.
nginx -v            Show the nginx version.
nginx -V            Show the nginx version, compiler version, and configure arguments.

If you don't want to type these commands every time, you can add a startup batch file, startup.bat, in the nginx installation directory and double-click it to run. Its content is as follows:

@echo off
rem If nginx was started previously and recorded a pid file, kill that process
nginx.exe -s stop

rem Test the configuration file for syntax errors
nginx.exe -t -c conf/nginx.conf

rem Show version information
nginx.exe -v

rem Start nginx with the specified configuration
nginx.exe -c conf/nginx.conf

If you run nginx under Linux, writing an equivalent shell script is just as simple.
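
For reference, a minimal shell-script sketch of the same steps (assuming nginx is on the PATH and the configuration is at conf/nginx.conf):

#!/bin/sh
# Stop a previously started nginx, if any
nginx -s stop

# Test the configuration file for syntax errors
nginx -t -c conf/nginx.conf

# Show version information
nginx -v

# Start nginx with the specified configuration
nginx -c conf/nginx.conf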

Nginx configuration in practice

I have always felt that the configuration of development tools is best explained through practical examples; that makes it much easier to understand.

Let's start with a small goal: set aside complex configuration for now and simply get an HTTP reverse proxy working.

The nginx.conf configuration file is as follows:

Note: conf/nginx.conf is nginx's default configuration file. You can also use nginx -c to specify your own configuration file.

#Run as this user
#user somebody;

#Number of worker processes, usually set equal to the number of CPU cores
worker_processes  1;

#Global error logs
error_log  D:/Tools/nginx-1.10.1/logs/error.log;
error_log  D:/Tools/nginx-1.10.1/logs/notice.log  notice;
error_log  D:/Tools/nginx-1.10.1/logs/info.log  info;

#PID file, records the process ID of the running nginx master process
pid        D:/Tools/nginx-1.10.1/logs/nginx.pid;

#Working mode and connection limit
events {
    worker_connections 1024;    #Maximum number of concurrent connections per worker process
}

#HTTP server settings; its reverse proxy feature provides load balancing support
http {
    #MIME types, defined in the mime.types file
    include       D:/Tools/nginx-1.10.1/conf/mime.types;
    default_type  application/octet-stream;

    #Log format
    log_format  main  '[$remote_addr] - [$remote_user] [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log    D:/Tools/nginx-1.10.1/logs/access.log main;
    rewrite_log   on;

    #sendfile tells nginx whether to use the sendfile system call (zero copy) to send files.
    #For ordinary applications it should be on; for disk-I/O-heavy workloads such as downloads,
    #it can be set to off to balance disk and network I/O and reduce system load.
    sendfile        on;
    #tcp_nopush     on;

    #Keep-alive timeout
    keepalive_timeout  120;
    tcp_nodelay        on;

    #gzip compression switch
    #gzip  on;

    #List of actual backend servers
    upstream zp_server1 {
        server 127.0.0.1:8089;
    }

    #HTTP server
    server {
        #Listen on port 80, the well-known port for HTTP
        listen       80;

        #Serve requests for www.helloworld.com
        server_name  www.helloworld.com;

        #Home page
        index index.html;

        #Directory of the webapp
        root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp;

        #Character encoding
        charset utf-8;

        #Proxy parameters
        proxy_connect_timeout 180;
        proxy_send_timeout 180;
        proxy_read_timeout 180;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;

        #Reverse proxy path (bound to the upstream above); the path to map goes after location
        location / {
            proxy_pass http://zp_server1;
        }

        #Static files are served by nginx itself
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp\views;
            #Expire after 30 days. Static files rarely change, so the expiry can be large;
            #if they are updated frequently, set it smaller.
            expires 30d;
        }

        #Address for viewing Nginx status
        location /NginxStatus {
            stub_status           on;
            access_log            on;
            auth_basic            "NginxStatus";
            auth_basic_user_file  conf/htpasswd;
        }

        #Deny access to .htxxx files
        location ~ /\.ht {
            deny all;
        }

        #Error pages (optional)
        #error_page   404              /404.html;
        #error_page   500 502 503 504  /50x.html;
        #location = /50x.html {
        #    root   html;
        #}
    }
}

Now let's try it:

Start the webapp, and note that the port it binds to must match the port set in the upstream block in nginx.
Edit the hosts file: add a DNS record to the hosts file under C:\Windows\System32\drivers\etc

127.0.0.1 www.helloworld.com

Run the startup.bat script from earlier, then visit www.helloworld.com in your browser. If nothing unexpected happens, the site should already be accessible.
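
If you prefer the command line, a quick check with curl (assuming nginx is listening on 127.0.0.1:80) might look like this:

curl -i -H "Host: www.helloworld.com" http://127.0.0.1/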

Load balancing configuration

In the previous example, the proxy only points to one server.

In real-world operation, however, a site usually runs the same application on multiple servers and needs load balancing to distribute traffic.
Nginx can also implement simple load balancing.

Assume the following scenario: the application is deployed on three Linux servers, 192.168.1.11:80, 192.168.1.12:80, and 192.168.1.13:80. The site's domain name is www.helloworld.com, and the public IP is 192.168.1.11. Nginx is deployed on the server with the public IP and load balances all requests.

The nginx.conf configuration is as follows:

http {
    #MIME types, defined in the mime.types file
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    #Access log
    access_log    /var/log/nginx/access.log;

    #List of servers to balance load across
    upstream load_balance_server {
        #The weight parameter sets the weight; the higher the weight, the more requests the server receives
        server 192.168.1.11:80   weight=5;
        server 192.168.1.12:80   weight=1;
        server 192.168.1.13:80   weight=6;
    }

   #HTTP server
   server {
        #Listen on port 80
        listen       80;

        #Serve requests for www.helloworld.com
        server_name  www.helloworld.com;

        #Load balance all requests
        location / {
            root        /root;                 #Default document root of the server
            index       index.html index.htm;  #Names of the index files
            proxy_pass  http://load_balance_server; #Forward requests to the server list defined by load_balance_server

            #Some reverse proxy settings (optional)
            #proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            #The backend web server can obtain the user's real IP via X-Forwarded-For
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_connect_timeout 90;          #Timeout for nginx to connect to the backend server (proxy connect timeout)
            proxy_send_timeout 90;             #Timeout for sending data to the backend server (proxy send timeout)
            proxy_read_timeout 90;             #Timeout for the backend server to respond once the connection is established (proxy read timeout)
            proxy_buffer_size 4k;              #Buffer size for the response headers read from the backend
            proxy_buffers 4 32k;               #Buffers for the response body; suitable if pages average under 32k
            proxy_busy_buffers_size 64k;       #Buffer size under heavy load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;    #Size of data written to a temporary file at a time when buffering backend responses to disk

            client_max_body_size 10m;          #Maximum size of a single file a client may upload
            client_body_buffer_size 128k;      #Buffer size for client request bodies
        }
    }
}
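
For reference, with the weights above (5, 1, and 6), roughly 5/12 of the requests go to 192.168.1.11, 1/12 to 192.168.1.12, and 6/12 to 192.168.1.13.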

Configuring multiple webapps for one website

As a website gains more and more features, relatively independent modules need to be split out and maintained separately. In that case, there will usually be multiple webapps.

For example, suppose www.helloworld.com hosts several webapps: finance, product, and admin (user center). These applications are accessed via different context paths:

www.helloworld.com/finance/
www.helloworld.com/product/
www.helloworld.com/admin/

We know that the default HTTP port is 80. These three webapps cannot all be started on the same server using port 80 at the same time, so each of them has to be bound to a different port number.

The problem, then, is that when users actually visit www.helloworld.com, they access the different webapps without ever typing a port number. So once again, a reverse proxy is needed.

The configuration is not difficult; let's see how it's done:

http {
    #Some basic settings omitted here

    upstream product_server {
        server www.helloworld.com:8081;
    }

    upstream admin_server {
        server www.helloworld.com:8082;
    }

    upstream finance_server {
        server www.helloworld.com:8083;
    }

    server {
        #Some basic settings omitted here

        #By default, point to the product server
        location / {
            proxy_pass http://product_server;
        }

        location /product/ {
            proxy_pass http://product_server;
        }

        location /admin/ {
            proxy_pass http://admin_server;
        }

        location /finance/ {
            proxy_pass http://finance_server;
        }
    }
}

HTTPS reverse proxy configuration

Some sites with higher security requirements may use HTTPS (HTTP over the SSL/TLS protocol).

This article will not explain the HTTP protocol or the SSL standard in detail, but there are a few things you need to know to configure HTTPS with nginx:

HTTPS uses the fixed port 443, unlike HTTP's port 80.
The SSL standard requires a certificate, so you need to specify the certificate and its corresponding key in nginx.conf.
Everything else is basically the same as for an HTTP reverse proxy; only the server block is configured slightly differently.

 #HTTPS server
  server {
      #Listen on port 443, the well-known port for HTTPS
      listen       443 ssl;

      #Serve requests for www.helloworld.com
      server_name  www.helloworld.com;

      #Location of the SSL certificate file (common certificate formats: crt/pem)
      ssl_certificate      cert.pem;
      #Location of the SSL certificate key
      ssl_certificate_key  cert.key;

      #SSL settings (optional)
      ssl_session_cache    shared:SSL:1m;
      ssl_session_timeout  5m;
      #Cipher suites; anonymous and MD5-based ciphers are excluded here
      ssl_ciphers  HIGH:!aNULL:!MD5;
      ssl_prefer_server_ciphers  on;

      location / {
          root   /root;
          index  index.html index.htm;
      }
  }
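
A common companion to the block above is redirecting plain HTTP traffic on port 80 to HTTPS; a minimal sketch for the same domain:

server {
    #Redirect all plain HTTP requests to HTTPS
    listen       80;
    server_name  www.helloworld.com;
    return       301 https://$host$request_uri;
}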

Static site configuration

Sometimes we need to serve a static site (i.e. HTML files and a set of static resources).

For example, if all the static resources are placed in the /app/dist directory, we only need to specify the home page and the host for this site in nginx.conf.

The configuration is as follows:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    gzip on;
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/javascript image/jpeg image/gif image/png;
    gzip_vary on;

    server {
        listen       80;
        server_name  static.zp.cn;

        location / {
            root /app/dist;
            index index.html;
            #Route any request to index.html
        }
    }
}
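
One caveat: the comment in the location block above says that any request should be routed to index.html, but as written, paths that do not match an existing file will return 404. A common way to get that fallback behavior is the try_files directive; a sketch of the adjusted location block:

location / {
    root  /app/dist;
    index index.html;
    #Fall back to index.html for paths that do not match an existing file or directory
    try_files $uri $uri/ /index.html;
}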

Then add a hosts entry:

127.0.0.1 static.zp.cn

Now visit static.zp.cn in a local browser and you can reach the static site.

Build a file server

Sometimes a team needs to archive data or documents, and a file server then becomes essential. Nginx can quickly and easily set up a simple file service.

The main points of the Nginx configuration are:

Turn on autoindex to display directory listings; it is disabled by default.
Turn on autoindex_exact_size to display exact file sizes.
Turn on autoindex_localtime to display file modification times in local time.
root sets the root path exposed by the file service.
Set charset to charset utf-8,gbk; to avoid garbled Chinese file names (on a Windows server the names were still garbled after this setting, and I have not found a solution yet).

The most simplified configuration is as follows:

autoindex on;              #Enable directory listing
autoindex_exact_size on;   #Show exact file sizes
autoindex_localtime on;    #Show file times in local time

server {
    charset      utf-8,gbk; #Still garbled on a Windows server after this setting; no solution found so far
    listen       9050 default_server;
    listen       [::]:9050 default_server;
    server_name  _;
    root         /share/fs;
}
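
After reloading nginx, opening http://<server-ip>:9050/ in a browser should show a browsable listing of the /share/fs directory.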

Cross-domain solution

In web development, the front-end/back-end separation model is widely used. In this model, the front end and the back end are separate web applications; for example, the back end is a Java program and the front end is a React or Vue application.

Separate web apps inevitably run into cross-domain problems when they access each other. There are generally two ways to solve cross-domain issues:

1. CORS

Set the HTTP response headers on the back-end server, adding the domains that need access to Access-Control-Allow-Origin.

2. JSONP

The back end constructs the JSON data according to the request and returns it, and the front end uses JSONP to work across domains.
Neither of these two approaches is discussed further in this article.

It should be noted that nginx also offers a cross-domain solution based on the first approach.

Example: the www.helloworld.com site consists of a front-end app and a back-end app. The front end runs on port 9000 and the back end on port 8080.

If the front end and back end interact over plain HTTP, requests will be rejected because of cross-domain restrictions. Let's see how nginx solves this.

First, set up CORS in the enable-cors.conf file:

# allow origin list
set $ACAO '*';

# initialize the cors flag
set $cors '';

# set single origin
if ($http_origin ~* (www.helloworld.com)$) {
    set $ACAO $http_origin;
    set $cors 'true';
}

# record the request method so that simple requests and preflights can be told apart
if ($request_method = 'OPTIONS') {
    set $cors "${cors}options";
}

if ($request_method = 'GET') {
    set $cors "${cors}get";
}

if ($request_method = 'POST') {
    set $cors "${cors}post";
}

# attach the CORS response headers for simple GET requests from the allowed origin
if ($cors = "trueget") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}

# attach the CORS response headers for simple POST requests from the allowed origin
if ($cors = "truepost") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
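
The fragment above attaches headers to simple GET and POST requests. Preflight OPTIONS requests are usually answered directly; a sketch of a commonly used pattern that could be appended to the same file:

# Answer CORS preflight (OPTIONS) requests directly
if ($cors = "trueoptions") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
    add_header 'Content-Length' 0;
    add_header 'Content-Type' 'text/plain; charset=utf-8';
    return 204;
}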

Next, include enable-cors.conf in your server configuration to bring in the cross-domain settings:

# ----------------------------------------------------
# This file is an nginx configuration fragment for the project.
# It can be included directly in the nginx config (recommended),
# or copied into an existing nginx config and adjusted manually.
# The www.helloworld.com domain must be set up via DNS or the hosts file.
# CORS is enabled for the api locations, together with the other
# configuration file in this directory (enable-cors.conf).
# ----------------------------------------------------
upstream front_server {
  server www.helloworld.com:9000;
}
upstream api_server {
  server www.helloworld.com:8080;
}

server {
  listen       80;
  server_name  www.helloworld.com;

  location ~ ^/api/ {
    include enable-cors.conf;
    proxy_pass http://api_server;
    rewrite "^/api/(.*)$" /$1 break;
  }

  location ~ ^/ {
    proxy_pass http://front_server;
  }
}

At this point, the setup is complete. Note that the rewrite directive in the /api/ location strips the /api/ prefix before the request is proxied, so the backend sees the path without it.


Source: blog.csdn.net/liuxingjiaoyu/article/details/112346773