Basic introduction to Nginx + cross-domain solution

Author: Whale Teng FE Source: Hang Seng LIGHT Cloud Community

Introduction to Nginx

Nginx is a high-performance HTTP server, reverse proxy server, and email (IMAP/POP3) proxy server developed by the Russian programmer Igor Sysoev. Its main functions are:

  • reverse proxy
  • load balancing
  • HTTP server

At present, most production Nginx deployments use its load-balancing function to build the service-cluster layer of the system architecture.

Function Description

The three main functions of Nginx were listed above; let's look at what each one does.

1. Reverse Proxy

Before introducing reverse proxy, let's first understand the concept of forward proxy.

For example, suppose you want to see Jay Chou's tour, but the official channels have sold out, so you ask your friend A to get a ticket through his connections, and you get the ticket as you wished. In this process, friend A acts as a forward proxy: he sends the request to the server (the ticket seller) on behalf of the client (you). The server does not know who originally initiated the request; it only knows that the proxy (friend A) requested it.

With that example in mind, let's look at the reverse proxy. We often receive calls from 10086 or 10000, but the caller is a different person each time. This is because 10086 is China Mobile's switchboard number, and extension lines call users with the switchboard's number displayed. Here the client (you) cannot know who initiated the request; it only knows that the proxy (the switchboard) made it.

The official explanation is that the reverse proxy method refers to the use of a proxy server to accept connection requests on the Internet, then forward the request to the server on the internal network, and return the result obtained from the server to the requesting connection on the Internet. At this time, the proxy server acts as a reverse proxy server to the outside world.

Let's paste a simple Nginx configuration code to implement reverse proxy:

server {  
    listen       80;                                                   
    server_name  localhost;                                         
    client_max_body_size 1024M;
  
    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host:$server_port;
    }
}

Here, http://localhost:8080 is the target server behind the reverse proxy, and 80 is the port Nginx exposes for client access.
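A common extension of this pattern (a sketch, not part of the original example) also forwards the real client address to the backend; X-Real-IP and X-Forwarded-For are conventional header names that the backend must be configured to read:

```nginx
server {
    listen       80;
    server_name  localhost;

    location / {
        proxy_pass http://localhost:8080;
        proxy_set_header Host            $host:$server_port;
        # Without these, the backend sees every request as coming
        # from Nginx's own IP rather than the real client:
        proxy_set_header X-Real-IP       $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```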

2. Load Balancing

Load balancing, as the name implies, is to distribute the service load to multiple server units in a balanced manner to improve the performance and reliability of services such as websites and applications. Let's compare the two system topologies, the first is the topology without load balancing:

[Figure 1: topology without load balancing]

The following is the topology with load balancing designed:

[Figure 2: topology with load balancing]

As Figure 2 shows, users access the load balancer, which forwards requests to the backend servers. In this setup, if service C fails, its load is redistributed to services A and B, and a system crash is avoided. If the same failure occurred in Figure 1, the system would simply go down.

Load balancing algorithms

The load balancing algorithm determines which of the healthy backend servers is selected. Several commonly used algorithms:

  • Round Robin: selects the first server in the list for the first request, then moves down the list in order, looping back to the top at the end.
  • Least Connections: selects the server with the fewest active connections first; recommended when sessions tend to be long.
  • Source: selects the server based on a hash of the request's source IP, which ensures, to some extent, that a given user keeps connecting to the same server.

If your application is stateful and requires a user to reconnect to the same server as before, you can create that association from the client's IP via the Source algorithm, or use sticky sessions.

Load balancing also needs to cooperate with the reverse proxy function to play its role.
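Mapped onto Nginx configuration, the three algorithms above correspond roughly to the following upstream definitions (a sketch; the server addresses are placeholders):

```nginx
# Round Robin is the default -- no extra directive needed:
upstream app_round_robin {
    server 192.168.0.11:8080;
    server 192.168.0.12:8080;
}

# Least Connections:
upstream app_least_conn {
    least_conn;
    server 192.168.0.11:8080;
    server 192.168.0.12:8080;
}

# Source (hash of the client IP) -- Nginx's ip_hash directive:
upstream app_ip_hash {
    ip_hash;
    server 192.168.0.11:8080;
    server 192.168.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_round_robin;  # point at whichever upstream fits
    }
}
```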

3. HTTP server

In addition to the above two functions, Nginx can also be used as a static resource server. For example, pure front-end resources that do not use SSR (Server Side Render) can rely on Nginx to achieve resource hosting. Let's take a look at a configuration that implements a static resource server:

server {
    listen       80;                                                 
    server_name  localhost;                                       
    client_max_body_size 1024M;
  
    location / {
        root   e:\wwwroot;
        index  index.html;
    }
}

Here, the root directive configures the root directory where the resources are stored, and the index directive configures the default file served when the root directory is accessed.

Dynamic and static separation

Dynamic-static separation is another important concept when Nginx is used as an HTTP server. To understand it, we first need to know what dynamic and static resources are:

  • Dynamic resources: content that must be generated by the server in real time, such as JSP or SSR-rendered pages; the content may differ each time it is accessed.
  • Static resources: such as JS, CSS, and images; the content does not change between accesses.

Nginx can serve static resources itself but cannot generate dynamic content, so when dynamic-static separation is required, we split the access policies for static and dynamic resources:

upstream test{  
    server localhost:8080;  
    server localhost:8081;  
}   

server {  
    listen       80;  
    server_name  localhost;  
  
    location / {  
        root   e:\wwwroot;  
        index  index.html;  
    }  
  
    # All static requests are handled by Nginx itself; files live under the root directory  
    location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {  
        root    e:\wwwroot;  
    }  
  
    # All dynamic requests are forwarded to Tomcat  
    location ~ \.(jsp|do)$ {  
        proxy_pass  http://test; 
    }  
  
    error_page   500 502 503 504  /50x.html;  
    location = /50x.html {  
        root   e:\wwwroot;  
    }  
}  

This configuration shows that when the client requests different types of resources, Nginx routes each request by type, either to its own static resource service or to the remote dynamic resource service, thereby meeting the needs of a complete resource server.

Configuration introduction

1. Basic introduction

Having covered Nginx's functions, let's briefly introduce its configuration file. For a front-end developer, using Nginx mostly comes down to editing the configuration and then starting or hot-reloading Nginx; that covers most day-to-day Nginx work.

Here we look at the default configuration of Nginx, that is, the content of the default nginx.conf file after Nginx is installed:


#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;
  
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
  
    #access_log  logs/access.log  main;
  
    sendfile        on;
    #tcp_nopush     on;
  
    #keepalive_timeout  0;
    keepalive_timeout  65;
  
    #gzip  on;
  
    server {
        listen       80;
        server_name  localhost;
  
        #charset koi8-r;
  
        #access_log  logs/host.access.log  main;
  
        location / {
            root   html;
            index  index.html index.htm;
        }
  
        #error_page  404              /404.html;
  
        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
  
        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}
  
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}
  
        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }
  
  
    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
  
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
  
  
    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
  
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
  
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
  
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
  
    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
  
}

The corresponding structure is roughly:

...              # global block

events {         # events block
    ...
}

http {           # http block
    ...          # http global block
    server {     # server block
        ...      # server global block
        location [PATTERN] {    # location block
            ...
        }
        location [PATTERN] {
            ...
        }
    }
    server {
        ...
    }
    ...          # http global block
}

The corresponding functions of the above code blocks are:

  • Global block: directives that affect Nginx globally, typically the user and group that run the Nginx server, the path of the Nginx pid file, log paths, configuration file includes, and the number of worker processes allowed.
  • events block: configuration affecting the network connections between the Nginx server and its users, such as the maximum number of connections per process, which event-driven model handles connection requests, whether multiple connections may be accepted at once, and whether connection serialization is enabled.
  • http block: can nest multiple server blocks and configures proxying, caching, log definitions, and most third-party module options, e.g. file includes, mime-type definitions, custom log formats, whether to use sendfile for file transfer, connection timeouts, and the number of requests per connection.
  • server block: parameters of a virtual host; one http block can contain multiple server blocks.
  • location block: routing of requests and handling of specific paths and pages.

For the detailed configuration of each block, please refer to the Nginx documentation.

2. Nginx solves cross-domain problems

The following shows a location block often used to deal with front-end cross-domain problems; readers can refer to it to understand and use Nginx to solve cross-domain issues.

location /cross-server/ {
    set $corsHost $http_origin;
    set $allowMethods "GET,POST,OPTIONS";
    set $allowHeaders "broker_key,X-Original-URI,X-Request-Method,Authorization,access_token,login_account,auth_password,user_type,tenant_id,auth_code,Origin, No-Cache, X-Requested-With, If-Modified-Since, Pragma, Last-Modified, Cache-Control, Expires, Content-Type, X-E4M-With, usertoken";
  
    if ($request_method = 'OPTIONS'){
        add_header 'Access-Control-Allow-Origin' $corsHost always;
        add_header 'Access-Control-Allow-Credentials' true always;
        add_header 'Access-Control-Allow-Methods' $allowMethods always;
        add_header 'Access-Control-Allow-Headers' $allowHeaders;
        add_header 'Access-Control-Max-Age' 90000000;
        return 200;
    }
  
    proxy_hide_header Access-Control-Allow-Headers;
    proxy_hide_header Access-Control-Allow-Origin;
    proxy_hide_header Access-Control-Allow-Credentials;
    add_header Access-Control-Allow-Origin $corsHost always;
    add_header Access-Control-Allow-Methods $allowMethods always;
    add_header Access-Control-Allow-Headers $allowHeaders;
    add_header Access-Control-Allow-Credentials true always;
    add_header Access-Control-Expose-Headers *;
    add_header Access-Control-Max-Age 90000000;
  
    proxy_pass http://10.117.20.54:8000/;
    proxy_set_header        Host   $host:443;
    proxy_set_header        X-Forwarded-For         $remote_addr;
    proxy_redirect http:// $scheme://; 
  
}     

As you can see, set is used at the top of the location block to define local variables, which the directives below then reference. The directives work as follows:

  • add_header: adds a field to the response headers; it takes effect only when the status code is one of: 200, 201 (1.3.10), 204, 206, 301, 302, 303, 304, 307 (1.1.16, 1.0.13), or 308 (1.13.0).
  • proxy_hide_header: hides fields in the response headers returned by the proxied server.
  • proxy_redirect: changes the value of the Location and Refresh header fields in the response returned by the proxied server.
  • proxy_set_header: redefines the request headers sent to the backend server.
  • proxy_pass: the service path that requests are forwarded to.

The configuration above can be copied directly into nginx.conf; then modify /cross-server/ (the path Nginx exposes to the client) and http://10.117.20.54:8000/ (the forwarded service path) to work around the service's cross-domain problems.
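One caveat: the block above echoes back whatever Origin header the browser sends while also allowing credentials, which effectively opens the endpoint to any site. If you only need to allow a known list of front-end domains, a common variation (a sketch; the domain names are placeholders) computes $corsHost with a map in the http block instead of set:

```nginx
# In the http block: only whitelisted origins are echoed back;
# any other origin gets an empty value and thus no usable CORS headers.
map $http_origin $corsHost {
    default                              "";
    "~^https?://(www\.)?example\.com$"   $http_origin;
    "~^https?://localhost(:\d+)?$"       $http_origin;
}
```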

Cross-domain skills supplement

In a development environment, if you don't want to use Nginx to handle cross-domain debugging, you can also modify Chrome's configuration. Cross-domain restriction is essentially a browser security policy, so it is often more convenient to solve the problem from the browser side.

Windows system:

1. Copy the Chrome browser shortcut, right-click the shortcut icon and open "Properties", as shown in the figure:

[Figure: Chrome shortcut Properties dialog]

2. Append --disable-web-security --user-data-dir after the "Target" field, for example: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --disable-web-security --user-data-dir.

3. Click OK and reopen the browser, and the following will appear:

[Figure: Chrome reopened with the flags applied]

At this point, the cross-domain restriction has been disabled: pages opened via this shortcut ignore cross-origin rules, avoiding the trouble of configuring cross-domain support on the server side during development.

Mac system:

The following content is reproduced from: Solve Chrome browser cross-domain problems on Mac

First create a folder; it will be used to store user data after the security policy is disabled. The name and location can be anything you like.

[Figure: creating the folder]

Then open the Terminal and run the following command:

open -n /Applications/Google\ Chrome.app/ --args --disable-web-security --user-data-dir=/Users/LeoLee/Documents/MyChromeDevUserData

[Figure: the command that disables the security policy]

You need to adjust the command above to the path of the folder you just created (the red-framed area in the figure below). Most online tutorials omit this part, which is why many users fail to disable the security policy.

[Figure: modify the path to match your own folder]

Run the command, hit Enter, and Chrome should pop up a window:

[Figure: Chrome popup]

Click to start Google Chrome, and you will find that, compared with before, Chrome now shows a banner at the top telling you that the mode you are using is not secure.

[Figure: extra warning line at the top of the browser]

The principle is similar to the Windows version, which bypasses the security policy by modifying the configuration.
