Nginx reverse proxy, load balancing, and cross-domain issues

Nginx application scenarios

1. HTTP server: Nginx can serve HTTP on its own, and works well as a static web server.
2. Virtual hosts: a single server can host multiple virtualized sites, for example shared hosting of personal websites.

3. Reverse proxy: when a site's traffic grows beyond what a single host can handle, a cluster of servers is needed, and Nginx can sit in front of the cluster as a reverse proxy.
4. Load balancing: the load is shared evenly across multiple servers, avoiding the situation where one server is overloaded while another sits idle, or a single server failure takes the site down.

1. HTTP reverse proxy configuration

server {
    # port Nginx listens on
    listen       8080;
    server_name  localhost;
    client_max_body_size 1024M;

    location / {
        # port the request is forwarded to
        proxy_pass http://localhost:8082;
        proxy_set_header Host $host:$server_port;
    }
}

After saving the configuration file, restart Nginx. Accessing localhost:8080 is now equivalent to accessing localhost:8082: Nginx receives the request on port 8080 and forwards it to the service on port 8082.
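To make the forwarding concrete, here is a minimal sketch in Python (illustrative only, not how Nginx is implemented): a toy proxy server forwards each GET to a stand-in backend, just as `proxy_pass` forwards port 8080 traffic to port 8082 above. All names, ports, and the response body are made up for the demo.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class BackendHandler(BaseHTTPRequestHandler):
    """Stands in for the application behind the proxy."""
    def do_GET(self):
        body = b"hello from backend"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass  # keep demo output quiet

def make_proxy_handler(backend_port):
    class ProxyHandler(BaseHTTPRequestHandler):
        """Plays the role of Nginx: forwards each GET to the backend."""
        def do_GET(self):
            with urllib.request.urlopen(
                    "http://localhost:%d%s" % (backend_port, self.path)) as resp:
                data = resp.read()
                status = resp.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)
        def log_message(self, *args):
            pass
    return ProxyHandler

# Bind both servers to ephemeral ports so the sketch runs anywhere.
backend = HTTPServer(("localhost", 0), BackendHandler)
proxy = HTTPServer(("localhost", 0), make_proxy_handler(backend.server_port))
for srv in (backend, proxy):
    threading.Thread(target=srv.serve_forever, daemon=True).start()

# A request to the proxy is transparently answered by the backend.
with urllib.request.urlopen("http://localhost:%d/" % proxy.server_port) as r:
    answer = r.read()
backend.shutdown()
proxy.shutdown()
```

The client only ever talks to the proxy's port, yet receives the backend's response, which is exactly the relationship between ports 8080 and 8082 in the config above.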

2. Load balancing configuration

2.1 Load balancing strategies

Load balancing is one of Nginx's most commonly used features. It means distributing requests across multiple execution units, such as web servers or FTP servers, so that they jointly complete the work. Simply put, when there are two or more servers, requests are distributed among them according to some rule. Load balancing is generally configured on top of a reverse proxy: the reverse proxy forwards each request to whichever server the balancer selects. Nginx currently supports three built-in load balancing strategies, plus two popular third-party ones.

2.1.1 RR (round robin)

This is Nginx's default strategy: each request is assigned to a different back-end server in turn, in chronological order. If a back-end server goes down, it is removed automatically.

upstream test {
    server localhost:8080;
    server localhost:8081;
}
server {
    listen       80;                                                         
    server_name  localhost;                                               
    client_max_body_size 1024M;
 
    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}

If one of the upstream services, say the one on port 8080, goes down, visiting http://localhost still works: requests simply go to http://localhost:8081 instead. Nginx automatically tracks each server's state and stops forwarding to a server it cannot reach, so one failed server does not affect the whole site. This is Nginx's default RR policy.
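The failover behaviour can be sketched in a few lines of Python (an illustration of the idea, not Nginx's actual code); the `healthy` dictionary is a made-up stand-in for Nginx's internal up/down bookkeeping:

```python
import itertools

backends = ["localhost:8080", "localhost:8081"]
# Stand-in for Nginx's health bookkeeping: pretend 8080 just went down.
healthy = {"localhost:8080": False, "localhost:8081": True}

def pick_backend(cycle=itertools.cycle(backends)):
    # Walk the round-robin cycle, skipping servers marked down.
    for _ in range(len(backends)):
        candidate = next(cycle)
        if healthy[candidate]:
            return candidate
    raise RuntimeError("no live upstream")

print(pick_backend())  # -> localhost:8081 (8080 is skipped)
```

Every request still moves the round-robin pointer forward, but an unhealthy server is passed over, so clients never see the failure.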

2.1.2 Weight

Requests are polled across the servers in proportion to their configured weights; this is useful when the back-end servers have uneven performance.

upstream test {
    server localhost:8080 weight=9;
    server localhost:8081 weight=1;
}
server {
    listen       80;                                                         
    server_name  localhost;                                               
    client_max_body_size 1024M;
 
    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}
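Nginx's weighted polling is commonly described as "smooth weighted round-robin"; under that assumption, a small Python rendition (using the same server names and 9:1 weights as the config above) shows how the ratio emerges:

```python
def smooth_wrr(servers, n):
    """servers: list of (name, weight); returns the first n picks."""
    current = {name: 0 for name, _ in servers}
    total = sum(weight for _, weight in servers)
    picks = []
    for _ in range(n):
        # Each server's running score grows by its configured weight...
        for name, weight in servers:
            current[name] += weight
        # ...the current leader is picked and penalised by the total weight.
        best = max(servers, key=lambda s: current[s[0]])[0]
        current[best] -= total
        picks.append(best)
    return picks

picks = smooth_wrr([("localhost:8080", 9), ("localhost:8081", 1)], 10)
# Over 10 requests, 9 go to :8080 and 1 goes to :8081.
```

The "smooth" part is that the lone :8081 pick lands in the middle of the sequence rather than all heavy-weight picks being bunched together.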

2.1.3 ip_hash

Both methods above share a problem: the next request from the same client may be distributed to a different server. When the application is not stateless, for example when session data is kept on the server, this causes serious trouble: if login state is saved in the session, being routed to another server forces the user to log in again. Often we want each client to stick to one server, and ip_hash does exactly that: each request is assigned according to a hash of the client's IP, so every visitor consistently reaches the same back-end server, which solves the session problem.

upstream test {
    ip_hash;
    server localhost:8080;
    server localhost:8081;
}
server {
    listen       80;                                                         
    server_name  localhost;                                               
    client_max_body_size 1024M;
 
    location / {
        proxy_pass http://test;
        proxy_set_header Host $host:$server_port;
    }
}
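The pinning idea can be sketched as follows (purely illustrative: Nginx's real ip_hash reportedly hashes only part of an IPv4 address, while this demo hashes the whole string with MD5):

```python
import hashlib

backends = ["localhost:8080", "localhost:8081"]

def pick_backend(client_ip):
    # Deterministic hash of the client IP mapped onto the backend list,
    # so the same visitor always lands on the same server.
    digest = hashlib.md5(client_ip.encode("ascii")).hexdigest()
    return backends[int(digest, 16) % len(backends)]

# Repeated requests from one IP stick to a single backend.
assert pick_backend("10.0.0.7") == pick_backend("10.0.0.7")
```

Because the mapping depends only on the IP, no shared session store is needed, but note that clients behind one NAT gateway all hash to the same backend.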

3. Cross-domain issues

In web development, separating the front end from the back end is a common pattern. In this pattern each side is an independent web application: the back end might be a Java program, the front end a React or Vue application. When separate web apps call each other, cross-domain problems inevitably arise. There are two general ways to solve them:

CORS: the back-end server sets HTTP response headers, adding the domains that need access to Access-Control-Allow-Origin.
JSONP: the back end builds JSON data on request and returns it, and the front end uses JSONP to make the cross-domain call.

For CORS, Nginx provides a ready solution. When the front end and back end live on different domains and interact over HTTP, requests would otherwise be rejected by the browser; this can be resolved by setting the response headers in the proxy that sits in front of the back-end service.

server {
    listen 8080;

    server_name localhost;

    location / {

        add_header 'Access-Control-Allow-Origin' '$http_origin';
        add_header 'Access-Control-Allow-Credentials' 'true';
        add_header 'Access-Control-Allow-Methods' 'POST, GET, OPTIONS, DELETE, PUT';
        add_header 'Access-Control-Allow-Headers' 'DNT,X-Mx-ReqToken,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type';

        # for cross-domain OPTIONS (preflight) requests, set the response
        # headers and return 204 directly
        if ($request_method = 'OPTIONS') {
            return 204;
        }

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_pass http://test;
    }
}
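The decision the `location` block makes can be summarised in plain Python (a hypothetical helper mirroring the config, not Nginx internals): a preflight OPTIONS request gets the CORS headers and an empty 204 response, while every other method gets the headers added on top of the proxied response.

```python
def handle_request(method, origin):
    """Mimics the CORS behaviour of the Nginx config above."""
    headers = {
        "Access-Control-Allow-Origin": origin,  # echoes $http_origin back
        "Access-Control-Allow-Credentials": "true",
        "Access-Control-Allow-Methods": "POST, GET, OPTIONS, DELETE, PUT",
    }
    if method == "OPTIONS":
        # Preflight: answer immediately with 204 and no body.
        return 204, headers, b""
    # Real request: in Nginx this is where proxy_pass hands it upstream.
    return 200, headers, b"upstream response"

status, headers, body = handle_request("OPTIONS", "http://localhost:3000")
```

Echoing `$http_origin` back in Access-Control-Allow-Origin is what lets credentials work with any calling site; a stricter setup would check the origin against an allow list first.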
Origin blog.csdn.net/licong1994/article/details/103936241