nginx reverse proxy, nginx configuration instructions

nginx reverse proxy, nginx configuration description
1. nginx can act as a reverse proxy server: it receives user requests, forwards them to an application server, receives the result the application server sends back, and returns that result to the requesting user.
2. nginx's work is carried out by one master process and multiple worker processes.
3. The master process mainly reads the configuration and acts as the supervisor of the whole process group: it manages the worker processes and distributes the work among them sensibly.
4. The worker processes accept and handle the actual client connections, complete their tasks, and report back to the master process so that it can carry out the follow-up work.



Reverse proxy
A reverse proxy (Reverse Proxy) means that the proxy server accepts connection requests from the Internet, forwards them to a server on the internal network, and returns the result obtained from that server to the client on the Internet that made the request. To the outside world, the proxy server itself appears to be the (content) server.
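To make the description concrete, here is a minimal reverse-proxy sketch; the server name and back-end address are hypothetical:

```nginx
# Minimal reverse-proxy sketch; example.com and 10.0.0.5 are made up.
server {
    listen 80;
    server_name example.com;

    location / {
        # Forward every request to the internal application server
        proxy_pass http://10.0.0.5:8080;
        # Preserve the original host and client address for the back end
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```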

1. As a stand-in for the content server
(1). When the content of the application server needs to be protected, ensure that it cannot be accessed directly from the external network: the application server sits behind the firewall and can only be reached by the proxy server.
(2). The proxy server sits outside the firewall and can be accessed by external users; anyone who wants the application server's content must go through the proxy server to obtain it.
(3). This helps protect the application server's content and shields the server from external attacks, because only the proxy server is allowed to reach it.

2. As a load balancer for content servers
(1). The user sends the request to the proxy server, and the proxy server forwards it, according to some algorithm, to one of several servers that all run the same code and handle the same business.
(2). After processing, the application server returns the result to the proxy server, which returns it to the user.
(3). Through its balancing algorithm the proxy server tracks the running state of the application servers behind it, and uses that information to decide which one should handle each request, thereby achieving load balancing.
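The load-balancing flow above can be sketched as follows (all addresses are made up):

```nginx
# Three identical application servers behind one proxy; addresses hypothetical.
upstream app_cluster {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;   # round-robin by default
}

server {
    listen 80;
    location / {
        proxy_pass http://app_cluster;   # the proxy picks a back end for each request
    }
}
```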


Nginx configuration file
The Nginx configuration file is mainly divided into four parts:
1. main (global settings),
2. server (per-virtual-host settings),
3. upstream (upstream server settings, mainly configuration related to reverse proxying and load balancing), and
4. location (settings applied after a URL matches a specific location).
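A skeleton showing how these parts nest (an events block is included as well, since a working nginx.conf requires one; all values are illustrative):

```nginx
# main (global settings)
user  www-data;
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    # upstream: reverse proxy / load-balancing group
    upstream mysvr {
        server 192.168.8.1:80;
    }

    # server: per-virtual-host settings
    server {
        listen 80;

        # location: handling after a URL matches
        location / {
            proxy_pass http://mysvr;
        }
    }
}
```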


nginx configuration description

nginx.conf configuration file description

#//----------------------------------------------------------------------------
#//Global Settings
#//1. Determine how many worker processes to run
#//2. Error log output location
#//3. Process ID (PID) file location
#//4. Event-processing model (generally epoll, to improve performance)
#//5. Maximum number of connections each worker process may handle (maximum concurrency)
#//----------------------------------------------------------------------------
#//1. Running user
user www-data;

#//1. Number of worker processes, usually set equal to the number of CPU cores so that each core runs one process, reducing process switching and improving performance.
worker_processes  1;

#//1. Global error log storage location setting, only the error level log is stored
error_log  logs/error.log;

#//1. PID file location; it stores the process ID of the nginx master, so you can stop nginx without looking up the process number, e.g. kill -QUIT `cat logs/nginx.pid`
pid        logs/nginx.pid;

#//1. Change the maximum number of open files for the worker process. If not set, this value is an operating system limit.
#//2. "ulimit -a" to view the size of this value
# // worker_rlimit_nofile 100000;

#//Working mode (data exchange mode) and the upper limit of the number of connections (maximum concurrency per worker process)
events {
	#//1.epoll is an I/O multiplexing (I/O Multiplexing) mechanism, available only on Linux kernels 2.6 and later; using it can greatly improve nginx's performance
	use   epoll;

	#//1. The maximum number of simultaneous connections per worker process. In theory, the maximum number of connections an nginx server can handle is worker_processes * worker_connections
	#//2. The maximum number of clients is also limited by the number of socket connections available to the system (about 64K), so setting this to an unrealistically high value is of no benefit.
	#//3. 1024 is the default per-process open-file limit on Linux, so it is set to 1024 here
	#//4. Of course, this limit can also be changed. If the worker_rlimit_nofile mentioned above is set, we can set this value very high.
	worker_connections  1024;

	#//1. Controls how a worker accepts new connections: with multi_accept off (the default), a worker accepts one new connection at a time; with it on, a worker accepts all pending new connections at once.
	#//2. (The related accept_mutex directive, on by default, decides whether a new connection wakes only one worker or all of them; workers that do not get the connection go back to sleep.)
	#//3. When your server has a small number of connections, enabling this parameter can reduce load to some extent. When the server's throughput is large, leave it off for efficiency.
    	# multi_accept on;

	#//1. Cache for open file handles; not enabled by default. max sets the maximum number of cache entries (recommended to match the open-file limit), and inactive sets how long a file may go unrequested before its cache entry is dropped
	#//2. The open-file limit is the worker_rlimit_nofile parameter configured in main
	#//3. File metadata (modification time and size) is cached. When a client requests a file, the modification time of the client's local copy is sent to nginx, and nginx compares modification times
	#//to decide whether the client's local file should be updated.
	#//4. Note: the open_file_cache family of directives actually belongs in the http (or server/location) context, not in events
	# open_file_cache max=2000 inactive=60s;
	#//1. How often to re-validate cache entries: nginx checks whether the file has changed on disk, and if so updates the cached file metadata
	# open_file_cache_valid 60s;
	#//1. The minimum number of times a file must be accessed within the inactive period for its entry to stay in the cache
	# open_file_cache_min_uses 1;

}
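The open_file_cache directives above belong in the http context in a real configuration; a hedged sketch with illustrative values:

```nginx
http {
    # file-handle cache; all values here are illustrative
    open_file_cache max=2000 inactive=60s;   # up to 2000 cached handles, dropped after 60s idle
    open_file_cache_valid 60s;               # re-validate cached file metadata every 60s
    open_file_cache_min_uses 1;              # minimum uses within 'inactive' to keep an entry cached
    ...
}
```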

#//Set the http server and use its reverse proxy function to provide load balancing support
#//Configuration parameters for http requests
http {
	#//1. Set the MIME types; the types are defined in the mime.types file and describe the data types that can be served
	#//2. The server selects the content type according to the file's extension and returns it to the client, and the client parses the data according to that type
	include       /etc/nginx/mime.types;
	#//1. Represents arbitrary binary data transfers.
	default_type  application/octet-stream;

	#//Set the access log location (using the default log format)
	access_log    /var/log/nginx/access.log;

	#//1. Enable efficient file transfer mode. The sendfile directive specifies whether nginx uses the sendfile() system call to output files; data is copied within the kernel, avoiding copies between user space and kernel space. Set to on for normal applications,
	#//2. For applications with heavy disk I/O load, such as downloading, it can be set to off to balance the processing speed of disk and network I/O and reduce system load.
	sendfile        on;

	#//1.on tells nginx to send all headers in one packet instead of one by one.
	#//2. When sendfile is on, this should also be set to on: data packets are accumulated and transmitted together, which improves transmission efficiency somewhat.
 	#tcp_nopush     on;

	#//1.on means do not buffer small writes.
	#//2. Background: the Nagle algorithm. Some applications send only a few bytes at a time during network communication, for example a single byte;
	#//with the TCP protocol's own overhead, it actually takes about 41 bytes on the wire, which is very inefficient. The Nagle algorithm therefore holds
	#//outgoing data in a buffer and sends it once a certain amount of data (or time) has accumulated.
	#//3. This looks like the opposite of tcp_nopush, but nginx can balance the use of the two when both are on.
 	#tcp_nodelay	 on;

	#//1. Duration of a keep-alive HTTP connection. If set too long and there are many users, held-open connections consume a lot of resources.
    	#//2. After completing an HTTP response, the server keeps the connection to the client open to save the cost of re-establishing it, but keeping connections open for too long can hurt performance (too many open connections).
    	#//keepalive_timeout  0;
    	keepalive_timeout  65;

	#//1. Timeout for reading the client request header. If nginx does not receive the complete header within this time, it returns 408 (Request Time-out) and closes the connection.
    	#client_header_timeout 60;

    	#//1. Timeout for reading the client request body (the longest interval nginx will wait between two successive reads); on timeout nginx returns 408 (Request Time-out) and closes the connection.
    	#client_body_timeout 60;

    	#//1. Timeout for responding to the client, limited to the interval between two write operations. If the client takes no data within this time (i.e. no response activity), nginx closes the connection.
    	#send_timeout 60;

    	#//1. Turn on gzip compression
    	gzip  on;
    	#//1. IE 1-6 handles gzip poorly, so don't send gzipped responses to those browsers
	gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    	#//1. Request buffers: the buffer size for the client request header. This can be set according to your system page size; the headers of a typical request will not exceed 1k,
    	#//2. A common system page size is 4k
    	client_header_buffer_size    1k;
    	#//1. By default nginx reads (stores) header values into the client_header_buffer_size buffer; if a header is too large it falls back to large_client_header_buffers
    	#//2. At most 4 buffers of 4k each; if that still is not enough, nginx returns a 400 error directly
    	large_client_header_buffers  4 4k;

    	#//1. Same idea as client_header_buffer_size above, but for the request body
    	client_body_buffer_size 512k;
    	#//1. Maximum size of a single client request body. If a request body is too large (for example a big file upload) and this limit is not enough, nginx returns 413 Request Entity Too Large
    	#//2. Raise this limit if larger files will be uploaded
    	client_max_body_size 300m;

	#//1. include can pull external configuration files into the default nginx configuration file, so the configuration need not be completed in a single file
	#//2. Include other configuration information
	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;

	#//1. Load-balancing server group settings
	#//2. Defines the IP group of back-end application servers and the load-balancing method used for this group
	#//3. nginx supports defining several load-balancing groups at the same time, for use by different servers.
	#//4. Load-balancing configuration for HTTP requests
	#//5. mysvr is the name of the balancer group; you can use "proxy_pass http://mysvr;" in a location to point requests at it
	#//6. With no extra parameters, requests are distributed to the back-end servers one by one in round-robin order; if a back-end server goes down, it is removed automatically
	#//7. weight: sets the polling probability; the weight is proportional to the share of requests, used when back-end servers have uneven capacity.
	#//8. ip_hash (on its own line): each request is assigned according to the hash of the client IP, so each visitor consistently reaches the same back-end server, which helps with session affinity.
	#//9. fair (on its own line, third-party module): assigns requests according to back-end response time, preferring servers with short response times.
	#//10. url_hash (third-party module): assigns requests according to the hash of the requested URL, so the same URL always reaches the same back-end server (useful when the back end caches).
	upstream mysvr {
		#//1. The weight parameter sets the weight: the higher the weight, the greater the probability of being chosen
		server 192.168.8.1:3128 weight=5;
		server 192.168.8.2:80  weight=1;
		server 192.168.8.3:80  weight=6;
		#//1.max_fails (default 1): the number of failed requests allowed for a server; once exceeded, within the fail_timeout period
		#//nginx considers the server temporarily unavailable and stops assigning requests to it
		#//2.fail_timeout: both the window in which max_fails failures are counted and the time the server is then considered unavailable
		#//server 192.168.0.100:8080 weight=2 max_fails=3 fail_timeout=15;
		#//1.backup: a backup machine, used only after all the other servers are down
		#//2.down: marks a server as unavailable.
		#//3.max_conns: limits the maximum number of connections assigned to a server; beyond that number, no new connections are sent to it. The default of 0 means no limit.
		#//server 192.168.0.102:8080 backup;   (or "down", or "max_conns=1000")

	}
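As a small illustration of the ip_hash method mentioned above (addresses are hypothetical):

```nginx
upstream sticky_svr {
    ip_hash;                  # hash the client IP, so a given client always reaches the same back end
    server 192.168.8.1:80;
    server 192.168.8.2:80;
}
```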

	#//1. Configure a virtual host (the relevant parameters of the Nginx server itself). There can be multiple server blocks; each is, in effect, a listener on some port that defines how locations handle the relevant request URIs.
	#//2. Often only one is enabled
	#//3. A server block establishes an access address and listening port for this Nginx server, and decides how different URIs are dispatched to different upstreams (back-end servers) or otherwise handled
	#//4. The http service supports several virtual hosts. Each virtual host corresponds to one server block, which contains the configuration related to that virtual host.
	#//5. Server blocks are distinguished by listening address/port or server name.
	server {
		#//1. Listen on port 80
		listen       80;

		#//1. Define the domain name used for access, here www.xx.com
		#//2. Server names, such as localhost or www.example.com, can also be matched by regular expressions.
		#//3. Multiple names can be written together
		#//4. The server_name configuration also filters out requests from domain names that someone has maliciously pointed at your server.
		server_name  www.xx.com;

 		#//1. Set the access log of this virtual host
 		#//2. main refers to a log_format (log format) defined in the default configuration file
 		#//3. If you do not want to enable the log, use: access_log off;
 		#//4. This is the per-virtual-host (server) log configuration
		access_log  logs/www.xx.com.access.log  main;

		#//1.location matches the URI and dispatches it to a different upstream (back-end server), or applies different handling
		#//2. The location block is used for pattern matching
		#//3. Syntax: location [=|~|~*|^~] /uri/ { … }
		#//(1). = means an exact match
		#//(2). ^~ means the URI begins with the given prefix string, i.e. a prefix match on the URL path.
		#//nginx does not decode the URL, so a request for /static/20%/aa can be matched by the rule "^~ /static/ /aa" (note the space).
		#//(3). ~ means a case-sensitive regular-expression match
		#//(4). ~* means a case-insensitive regular-expression match
		#//(5). / is the universal match; any request will match it.
		#//(6). = is checked first, then ^~, then the regular expressions in the order they appear in the file, and finally the / universal match.
		#//(7). An empty modifier is a plain prefix match, e.g. "location /poechant"
		#//(8). Priority: (location =) > (location full path) > (location ^~ path) > (location ~ / ~* regex) > (location path)
		#//As soon as one rule matches, the rest are ignored.
		#//When a match succeeds, matching stops and the request is handled according to the matching rule.
		#//4. After Nginx receives the HTTP request, it forwards it to the application server configured below, obtains the result returned by the application server, and then answers the client's request
		location / {
			#//1. Define the default web root of the server for requests that match this pattern
			#//2.root: with "root /var/www/image;", a request for a file under /img/ makes nginx look in /var/www/image/img/,
			#//3.alias: with "alias /var/www/image/;", nginx looks directly in /var/www/image/. An alias must end with "/"; for root the trailing slash is optional
			root   /root;

			#//1. Define the index (home page) file names
			#//2. These are the default files served for the path
			#//3. When this URL is visited, the content of the first file found is returned to the client
			index index.php index.html index.htm;

			#//1. Used to hand the request to a FastCGI server for processing via fastcgi_pass
			#//fastcgi_pass  www.xx.com;
			#//fastcgi_param  SCRIPT_FILENAME  $document_root/$fastcgi_script_name;
			#//include /etc/nginx/fastcgi_params;

			#//1. The reverse-proxy target; backend is the name of the upstream (load-balancing server group)
			#//2. The trailing "/" matters: with location "/test/" and "proxy_pass http://backend;" (no trailing slash), a request for http://server/test/test.jsp
			#//is forwarded as "http://backend/test/test.jsp"; with "proxy_pass http://backend/;" (trailing slash), it is forwarded as "http://backend/test.jsp"
			#//3. proxy_pass is used to pass the request to the application server for processing
			#//proxy_pass http://backend;

			#//1. Set the Host header and the client's real address so that the back-end server can obtain the client's real IP
			#//2.X-Forwarded-For: this field indicates who originally initiated the HTTP request
			#//3.proxy_set_header adds fields to the request header so that the next hop can learn the relevant information about the previous one
             		#//proxy_set_header Host $host;
             		#//proxy_set_header X-Real-IP $remote_addr;
			#//proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

             		#//1.proxy_buffering: enables or disables buffering of responses from the back-end server
             		#//2.proxy_buffer_size: used for the first part of the back-end response, which usually contains the (small) response header.
             		#//proxy_buffer_size works regardless of whether proxy_buffering is enabled.
             		#//3. If proxy_buffering is off, nginx does not try to read the whole response from the back-end server before returning it to the client; it passes data to the client as soon as possible,
             		#//and before sending, it buffers at most proxy_buffer_size of data.
             		#//4. If proxy_buffering is on, nginx reads as much of the back-end response into buffers as it can, until all the buffers set by proxy_buffers are full or
             		#//the data has been fully read (EOF); only then does it start sending to the client, transmitting whole buffers at a time. If the response is large, nginx
             		#//spills it to a temporary file, whose size is capped by proxy_max_temp_file_size.
             		#//5.proxy_busy_buffers_size: while buffered data is being sent to the client, the buffers involved are locked and are not released until that transmission completes
			#//proxy_buffering off;
			#//proxy_buffer_size 4k;
			#//proxy_buffers 4 4k;
			#//proxy_busy_buffers_size 8k;
			#//proxy_max_temp_file_size 1024m;
		}

    		#//1. Define how related errors are handled: error_page code ... [=[response]] uri;
    		#//2.error_page 500 502 503 504 /50x.html;  returns the content of /50x.html to the client; the client URL is unchanged (an nginx-internal redirect)
    		#//3.error_page 404 =200 /404.html;  returns the content of /404.html; the client URL is unchanged, and the status code returned to the client changes from 404 to 200 (internal redirect)
    		#//4.error_page 404 = /404.html;  returns the content of /404.html; the client URL is unchanged, and the status code equals whatever accessing /404.html returns (internal redirect)
    		#//5.error_page 404 https://www.baidu.com/;  makes the client redirect to https://www.baidu.com/ and changes the client URL (nginx actually returns 302 and the client follows it)
		error_page 500 502 503 504 /50x.html;


		#//1. The following example goes directly to the file, so it goes to /root/ to find the file
		#//2. Go to /root/ to find the 50x.html file
		location = /50x.html {
			root   /root;
		}

    		#//1. Static files are handled by nginx itself. In the regular-expression pattern ^/(images|javascript|js|css|flash|media|static)/, "(...)" groups alternatives and "|" means "or"
    		location ~ ^/(images|javascript|js|css|flash|media|static)/ {
			root /var/www/virtual/htdocs;

			#//1. Setting an expires time in the server controls browser caching, which effectively reduces bandwidth usage and server load
			#//2.expires sets a response-header field that tells the browser it may serve the resource straight from its cache, without a new request, until the expiration time
			expires 30d;
		}

		#//1. PHP script requests are all forwarded to FastCGI for processing, using the default FastCGI configuration.
		#//2. All .php requests are sent to port 9000 via FastCGI
		location ~ \.php$ {
			root /root;
			fastcgi_pass 127.0.0.1:9000;
			fastcgi_index index.php;
			fastcgi_param SCRIPT_FILENAME /home/www/www$fastcgi_script_name;
			include fastcgi_params;
		}

		#//1. Set the address for viewing Nginx status
		location /NginxStatus {
			stub_status	on;
			access_log	on;
			auth_basic	"NginxStatus";
			auth_basic_user_file	conf/htpasswd;
		}

		#//1. Forbid access to .htxxx files
		#//2.deny all means deny all requests
		location ~ /\.ht {
			deny all;
		}
	}
}
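The location matching rules described in the server block above can be illustrated with a hypothetical set of rules:

```nginx
server {
    listen 80;

    location = / {                 # 1. exact match: only "/" itself
        return 200 "exact";
    }
    location ^~ /static/ {         # 2. prefix match that suppresses regex checks
        return 200 "static prefix";
    }
    location ~* \.(png|jpg)$ {     # 3. regexes, tried in file order
        return 200 "image regex";
    }
    location / {                   # 4. universal prefix match, lowest priority
        return 200 "fallback";
    }
}
# /              -> "exact"
# /static/a.png  -> "static prefix"  (^~ wins; the image regex is never tried)
# /photo.png     -> "image regex"
# /about         -> "fallback"
```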




