Nginx Introductory Tutorial (reprint)

Original: https://www.cnblogs.com/qdhxhz/p/8910174.html


I. Overview

   What is nginx?

  Nginx ("engine x") is a lightweight web server, reverse proxy server, and e-mail (IMAP/POP3) proxy server.

  What is a reverse proxy?

   A reverse proxy accepts connection requests from the internet, forwards them to a server on the internal network, and returns the result obtained from that server to the client that made the request. To the outside, the proxy server appears to be the origin server itself.
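The idea can be illustrated with a minimal sketch using Python's standard library (a toy stand-in for nginx, not how nginx itself is implemented): the client only ever talks to the proxy, which forwards the request to a backend on the internal network and relays the response back.

```python
import http.server
import threading
import urllib.request

# --- a tiny "internal" backend that the outside client never contacts directly ---
class Backend(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello from backend"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# --- the reverse proxy: accepts the client request and forwards it upstream ---
class Proxy(http.server.BaseHTTPRequestHandler):
    upstream = None  # set below, e.g. "http://127.0.0.1:8089"

    def do_GET(self):
        body = urllib.request.urlopen(self.upstream + self.path).read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass

backend = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Backend)  # port 0: any free port
threading.Thread(target=backend.serve_forever, daemon=True).start()
Proxy.upstream = "http://127.0.0.1:%d" % backend.server_address[1]

proxy = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Proxy)
threading.Thread(target=proxy.serve_forever, daemon=True).start()

# the client only ever sees the proxy's address
resp = urllib.request.urlopen("http://127.0.0.1:%d/" % proxy.server_address[1]).read()
```

To the client, the proxy's address is the site; the backend's address stays hidden, which is exactly the reverse proxy relationship described above.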

     

II. Commonly used commands

       nginx is simple to use; there are just a few commands.

nginx -s stop       # fast shutdown: terminate the web service immediately; in-flight information may not be saved.
nginx -s quit       # graceful shutdown: save information and finish serving current requests before exiting.
nginx -s reload     # reload the configuration after it has changed.
nginx -s reopen     # reopen the log files.
nginx -c filename   # start nginx with the specified configuration file instead of the default.
nginx -t            # do not run; only test the configuration file. nginx checks the syntax and tries to open the files it references.
nginx -v            # show the version.
nginx -V            # show the version, compiler version, and configure parameters.

    On Linux I run ./nginx from the sbin directory instead of the plain nginx shown above.
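As a quick reference, the commands above can be captured in a small helper that builds the argument list to hand to a process runner (the action names here are just a mnemonic wrapper of my own, not part of nginx):

```python
# map lifecycle actions to nginx command-line arguments (from the table above)
SIGNALS = {
    "stop":   ["-s", "stop"],    # fast shutdown
    "quit":   ["-s", "quit"],    # graceful shutdown
    "reload": ["-s", "reload"],  # reload the configuration
    "reopen": ["-s", "reopen"],  # reopen the log files
    "test":   ["-t"],            # only check the configuration
}

def nginx_cmd(action, conf=None):
    """Build an nginx command line; pass the result to subprocess.run to execute."""
    cmd = ["nginx"] + SIGNALS[action]
    if conf is not None:
        cmd += ["-c", conf]  # use this configuration file instead of the default
    return cmd
```

For example, `nginx_cmd("test", "conf/nginx.conf")` yields the same invocation as typing `nginx -t -c conf/nginx.conf` by hand.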

   If you do not want to type these commands every time, you can add a new batch file startup.bat in the nginx installation directory and double-click it to run. Its contents are as follows:

@echo off
rem If nginx was started before and recorded a pid file, kill that process first
nginx.exe -s stop

rem Test the configuration file syntax
nginx.exe -t -c conf/nginx.conf

rem Show version information
nginx.exe -v

rem Start nginx with the specified configuration
nginx.exe -c conf/nginx.conf

      If you are running on Linux, write a shell script along much the same lines.

nginx configuration in practice

     The following topics are covered: http reverse proxy configuration, load balancing configuration, a site with multiple webapps, https reverse proxy configuration, static site configuration, and a cross-domain solution.

III. Http reverse proxy configuration

   Let's achieve a small goal first: without worrying about complex configuration, just complete an http reverse proxy.

  The nginx.conf configuration file is as follows:

# run as user
#user somebody;

# number of worker processes, usually set equal to the number of CPUs
worker_processes  1;

# global error logs
error_log  D:/Tools/nginx-1.10.1/logs/error.log;
error_log  D:/Tools/nginx-1.10.1/logs/notice.log  notice;
error_log  D:/Tools/nginx-1.10.1/logs/info.log  info;

# pid file, records the process ID of the running nginx
pid        D:/Tools/nginx-1.10.1/logs/nginx.pid;

# connection limits and working mode
events {
    worker_connections 1024;    # maximum number of concurrent connections per worker process
}

# set up the http server and use its reverse proxy function to provide load balancing support
http {
    # set mime types; the types are defined in the mime.types file
    include       D:/Tools/nginx-1.10.1/conf/mime.types;
    default_type  application/octet-stream;

    # set the log format
    log_format  main  '[$remote_addr] - [$remote_user] [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  D:/Tools/nginx-1.10.1/logs/access.log  main;
    rewrite_log on;

    # the sendfile directive specifies whether nginx uses the sendfile (zero-copy) call to output files.
    # For ordinary applications it should be on; for heavy disk-IO applications such as downloads it may be
    # set to off to balance disk and network I/O processing speed and reduce system load.
    sendfile     on;
    #tcp_nopush  on;

    # connection timeout
    keepalive_timeout  120;
    tcp_nodelay        on;

    # gzip compression switch
    #gzip on;

    # list of actual backend servers
    upstream zp_server1 {
        server 127.0.0.1:8089;
    }

    # http server
    server {
        # listen on port 80, the well-known port for the HTTP protocol
        listen       80;

        # access via www.xx.com
        server_name  www.helloworld.com;

        # home page
        index index.html;

        # webapp directory to point to
        root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp;

        # character encoding
        charset utf-8;

        # proxy configuration parameters
        proxy_connect_timeout 180;
        proxy_send_timeout 180;
        proxy_read_timeout 180;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;

        # reverse proxy path (bound to the upstream); set the path mapping in location
        location / {
            proxy_pass http://zp_server1;
        }

        # static files are handled by nginx itself
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            root D:\01_Workspace\Project\github\zp\SpringNotes\spring-security\spring-shiro\src\main\webapp\views;
            # expire after 30 days. Static files change rarely, so the expiry can be large;
            # if they are updated frequently, set it smaller.
            expires 30d;
        }

        # address for checking nginx status
        location /NginxStatus {
            stub_status           on;
            access_log            on;
            auth_basic            "NginxStatus";
            auth_basic_user_file  conf/htpasswd;
        }

        # forbid access to .htxxx files
        location ~ /\.ht {
            deny all;
        }

        # error page handling (optional)
        #error_page 404 /404.html;
        #error_page 500 502 503 504 /50x.html;
        #location = /50x.html {
        #    root html;
        #}
    }
}

Well, let's try it:

  1. Start the webapp, making sure the port it starts on matches the upstream port setting in nginx.
  2. Change the hosts file (in C:\Windows\System32\drivers\etc) and add a DNS record:

    127.0.0.1 www.helloworld.com 
  3. Run the startup.bat from earlier.
  4. Visit www.helloworld.com in the browser; if all goes well, it is already reachable.

         If you are on Linux, the hosts file is /etc/hosts.
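For reference, a hosts entry simply maps a name to an IP address before DNS is consulted; a minimal parser makes explicit what the line added above means:

```python
def parse_hosts(text):
    """Parse hosts-file lines into a {hostname: ip} mapping (comments ignored)."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        ip, *names = line.split()  # one IP may be followed by several names
        for name in names:
            mapping[name] = ip
    return mapping
```

With the entry above, the browser resolves www.helloworld.com to 127.0.0.1 and the request lands on the local nginx.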

IV. Load balancing configuration

In the previous example, the proxy points to only one server.

In practice, however, a site mostly runs the same app on multiple servers, so load balancing is needed to distribute the traffic.

nginx can also implement simple load balancing.

Assume the following scenario: an application is deployed on three Linux servers at 192.168.1.11:80, 192.168.1.12:80, and 192.168.1.13:80. The domain name is www.helloworld.com and the public IP is 192.168.1.11. nginx is deployed on the server with the public IP and load-balances all requests.

The nginx.conf configuration is as follows:

http {
    # set mime types; the types are defined by mime.types
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    # set the log location
    access_log    /var/log/nginx/access.log;

    # list of load-balanced servers
    upstream load_balance_server {
        # the weight parameter sets the weight; the higher the weight, the larger the share of requests
        server 192.168.1.11:80 weight=5;
        server 192.168.1.12:80 weight=1;
        server 192.168.1.13:80 weight=6;
    }

   # http server
   server {
        # listen on port 80
        listen       80;

        # access via www.xx.com
        server_name  www.helloworld.com;

        # load-balance all requests
        location / {
            root  /root;                        # the server's default web root directory
            index index.html index.htm;         # the names of the index files
            proxy_pass http://load_balance_server;  # forward requests to the server list defined by load_balance_server

            # some reverse proxy settings (optional)
            #proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            # the backend web server can obtain the user's real IP through X-Forwarded-For
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_connect_timeout 90;           # timeout for nginx to connect to the backend server (proxy connect timeout)
            proxy_send_timeout 90;              # timeout for the backend server to return data (send timeout)
            proxy_read_timeout 90;              # response timeout of the backend server after a successful connection (proxy receive timeout)
            proxy_buffer_size 4k;               # size of the buffer the proxy (nginx) uses to save user header information
            proxy_buffers 4 32k;                # proxy buffers; for pages averaging 32k or less, set it like this
            proxy_busy_buffers_size 64k;        # buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;     # if cached data exceeds this value, it is written to a temp file while transferring from the upstream server

            client_max_body_size 10m;           # maximum number of bytes allowed in a single client request body
            client_body_buffer_size 128k;       # maximum number of bytes the proxy buffers for client requests
        }
    }
}
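The weight parameters above control how requests are spread. nginx's default algorithm is a smooth weighted round-robin; the sketch below (an illustration of the algorithm, not nginx's C code) shows that over 12 requests the servers with weights 5, 1, and 6 are picked exactly 5, 1, and 6 times:

```python
from collections import Counter

def smooth_wrr(weights, n):
    """Pick n servers using nginx-style smooth weighted round-robin."""
    current = {server: 0 for server in weights}
    total = sum(weights.values())
    picks = []
    for _ in range(n):
        for server in current:
            current[server] += weights[server]   # every server gains its weight
        best = max(current, key=current.get)     # the highest current weight wins
        current[best] -= total                   # the winner pays the total weight
        picks.append(best)
    return picks

# the same weights as in the upstream block above
weights = {"192.168.1.11:80": 5, "192.168.1.12:80": 1, "192.168.1.13:80": 6}
counts = Counter(smooth_wrr(weights, sum(weights.values())))
```

Over any window of 12 consecutive requests, each server's share matches its weight, and the picks are interleaved rather than sent in long bursts to one server.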

V. Site with multiple webapps

When a website grows in functionality, some independent modules often need to be split out and maintained separately. In that case, there will typically be multiple webapps.

For example: suppose the www.helloworld.com site has several webapps: finance, product, and admin (user center). These applications are accessed via distinct contexts:

www.helloworld.com/finance/

www.helloworld.com/product/

www.helloworld.com/admin/

We know that the default port for http is 80. Starting these three webapps on the same server all on port 80 definitely won't work, so each application must bind to a different port number.

The problem, then, is that when users actually visit the www.helloworld.com site, they should reach the different webapps without appending the corresponding port numbers. Once again, a reverse proxy handles this.

The configuration is not difficult; here is how to do it:

http {
    # some basic configuration is omitted here
    upstream product_server {
        server www.helloworld.com:8081;
    }
    upstream admin_server {
        server www.helloworld.com:8082;
    }
    upstream finance_server {
        server www.helloworld.com:8083;
    }

    server {
        # some basic configuration is omitted here
        # default to the product server
        location / {
            proxy_pass http://product_server;
        }
        location /product/ {
            proxy_pass http://product_server;
        }
        location /admin/ {
            proxy_pass http://admin_server;
        }
        location /finance/ {
            proxy_pass http://finance_server;
        }
    }
}
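Among plain prefix locations, nginx picks the longest matching prefix, which is why /product/ and /admin/ win over the catch-all /. A small simulation of that rule (the routing table mirrors the config above; the function is an illustration, not nginx's matcher):

```python
def match_location(uri, locations):
    """Return the upstream for the longest prefix location matching the URI."""
    matches = [prefix for prefix in locations if uri.startswith(prefix)]
    return locations[max(matches, key=len)]  # longest prefix wins

# mirrors the location blocks in the configuration above
routes = {
    "/":         "product_server",   # default
    "/product/": "product_server",
    "/admin/":   "admin_server",
    "/finance/": "finance_server",
}
```

So /admin/users routes to admin_server, while a path that matches no specific context, such as /about, falls through to the default product_server.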

VI. Https reverse proxy configuration

Some sites with higher security requirements may use HTTPS (the HTTP protocol secured with the SSL standard).

This is not the place to explain the HTTP protocol and the SSL standard, but to configure https in nginx you need to know:

  • The well-known port for HTTPS is 443, unlike HTTP's port 80.
  • The SSL standard requires a security certificate, so nginx.conf must specify the certificate and its corresponding key.

Everything else is basically the same as the http reverse proxy; only parts of the server configuration differ.

  # HTTPS server
  server {
      # listen on port 443, the well-known port mainly used for the HTTPS protocol
      listen       443 ssl;

      # access via www.xx.com
      server_name  www.helloworld.com;

      # location of the ssl certificate file (common certificate formats: crt/pem)
      ssl_certificate      cert.pem;
      # location of the ssl certificate key
      ssl_certificate_key  cert.key;

      # ssl session parameters (optional)
      ssl_session_cache    shared:SSL:1m;
      ssl_session_timeout  5m;
      # cipher suites (exclude anonymous and MD5-based suites)
      ssl_ciphers  HIGH:!aNULL:!MD5;
      ssl_prefer_server_ciphers  on;

      location / {
          root   /root;
          index  index.html index.htm;
      }
  }

VII. Static site configuration

Sometimes we need to configure a static site (i.e. a bunch of static html files and resources).

For example: if all static resources are placed in the /app/dist directory, we only need to specify the host and the site's home page in nginx.conf.

The configuration is as follows:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    gzip on;
    gzip_types text/plain application/x-javascript text/css application/xml text/javascript application/javascript image/jpeg image/gif image/png;
    gzip_vary on;

    server {
        listen       80;
        server_name  static.zp.cn;

        location / {
            root /app/dist;
            index index.html;
            # forward any request to index.html
        }
    }
}

Then, add a hosts entry:

127.0.0.1 static.zp.cn

Now visiting static.zp.cn in a local browser reaches the static site.
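Before putting nginx in front, you can sanity-check that a directory serves correctly with Python's built-in static file server, which mimics the root + index behaviour above (a throwaway directory stands in for /app/dist here; this is a development aid, not a production server):

```python
import functools
import http.server
import os
import tempfile
import threading
import urllib.request

# a throwaway "dist" directory standing in for /app/dist
dist = tempfile.mkdtemp()
with open(os.path.join(dist, "index.html"), "w") as f:
    f.write("<h1>hello static</h1>")

# serve the directory; index.html is returned for "/" just like nginx's index directive
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=dist)
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode()
server.shutdown()
```

If the page loads here, the same directory will serve cleanly once nginx's root points at it.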

VIII. Cross-domain solutions

In web development, the front-end/back-end separation pattern is used frequently. In this pattern, the front end and the back end are independent web applications; for example, the back end is a Java program and the front end is a React or Vue application.

When separate web apps access each other, cross-domain problems are inevitable. There are two general ways to solve them:

  1. CORS

Set the HTTP response headers on the back-end server, adding the domain names that need access to Access-Control-Allow-Origin.

  2. jsonp

The front end sends a jsonp request, and the back end assembles the json data and returns it across domains.

These two approaches are not discussed further in this article.

Note that, following the first idea, nginx also provides a solution to the cross-domain problem.

For example: the www.helloworld.com site consists of a front-end app and a back-end app. The front end's port number is 9000, and the back end's is 8080.

If the front and back ends interact over http, requests will be rejected because of cross-domain problems. Let's see how nginx solves it:

First, create the CORS configuration file enable-cors.conf:

# allow origin list
set $ACAO '*';

# if the origin matches, allow it and mark CORS as enabled
set $cors '';
if ($http_origin ~* (www.helloworld.com)$) {
  set $ACAO $http_origin;
  set $cors 'true';
}

if ($request_method = 'OPTIONS') {
  set $cors "${cors}options";
}

if ($request_method = 'GET') {
  set $cors "${cors}get";
}

if ($request_method = 'POST') {
  set $cors "${cors}post";
}

if ($cors = "trueget") {
    add_header 'Access-Control-Allow-Origin' "$http_origin";
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';
}
```
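The trick in enable-cors.conf is string concatenation: nginx's if blocks cannot be nested, so the origin check contributes "true", the request method appends a suffix, and only the combined flag "trueget" triggers the headers. The same flag logic in plain Python (an illustration of the scheme, not nginx itself; handling for the other method suffixes is omitted just as in the snippet above):

```python
def cors_headers(origin, method, allowed_suffix="www.helloworld.com"):
    """Mirror the enable-cors.conf flag logic: 'true' + method must equal 'trueget'."""
    cors = "true" if origin and origin.endswith(allowed_suffix) else ""
    cors += method.lower()  # nginx appends "options"/"get"/"post"
    if cors == "trueget":
        return {
            "Access-Control-Allow-Origin": origin,
            "Access-Control-Allow-Credentials": "true",
            "Access-Control-Allow-Methods": "GET, POST, OPTIONS",
        }
    return {}
```

A GET from an allowed origin yields the headers; a GET from a foreign origin (flag "get") or a POST from an allowed one (flag "truepost") falls through and gets none.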

Next, include enable-cors.conf in your server block to bring in the cross-domain configuration:

# ----------------------------------------------------
# This file is the project's nginx configuration fragment.
# It can be included directly in the nginx config (recommended),
# or copied into an existing nginx config and adapted.
# The www.helloworld.com domain must be set up via DNS or the hosts file.
# The api location enables CORS, which relies on the other
# configuration file in this directory (enable-cors.conf).
# ----------------------------------------------------
upstream front_server{
  server www.helloworld.com:9000;
}
upstream api_server{
  server www.helloworld.com:8080;
}

server {
  listen       80;
  server_name  www.helloworld.com;

  location ~ ^/api/ {
    include enable-cors.conf;
    proxy_pass http://api_server;
    rewrite "^/api/(.*)$" /$1 break;
  }

  location ~ ^/ {
    proxy_pass http://front_server;
  }
}

    With that, the configuration is complete.

  Reference

    nginx Easy Tutorial


Origin www.cnblogs.com/ajianbeyourself/p/11129435.html