Study notes: Proxy server (Nginx)


1. Introduction to Nginx

1. What is Nginx

  Nginx ("engine Nginx is specially developed for performance optimization. Performance is its most important consideration. In terms of strength, it focuses on efficiency and can withstand the test of high load. Reports indicate that Nginx can support up to 50,000 concurrent connections.

2. Reverse proxy

Nginx can not only act as a reverse proxy to achieve load balancing; it can also be used as a forward proxy for Internet access and other functions.

(1) Forward proxy
Forward proxy: if you imagine the Internet outside the LAN as a huge resource library, then a client inside the LAN that wants to access the Internet must do so through a proxy server; this kind of proxying is called a forward proxy. The proxy is configured on the client (browser), and the client accesses the Internet through the proxy server.

(2) Reverse proxy
Reverse proxy: here the client is actually unaware of the proxy, because the client needs no configuration at all. We simply send the request to the reverse proxy server; the reverse proxy server selects a target server, obtains the data, and returns it to the client. At this point, the reverse proxy server and the target server appear to the outside world as a single server: only the proxy server's address is exposed, which hides the real server's IP address.
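A minimal sketch of what this looks like in an Nginx configuration (the domain and the target address below are made-up placeholders, not values from any real setup). A server block inside the http block forwards every request to the hidden target server via proxy_pass:

    server {
        listen       80;                             # the address and port clients actually see
        server_name  www.example.com;                # placeholder domain
        location / {
            proxy_pass http://192.168.17.129:8080;   # placeholder target server; its IP is never exposed to clients
        }
    }
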
(3) The difference between the two.
The difference between a forward proxy and a reverse proxy lies in the object being proxied: a forward proxy proxies the client, while a reverse proxy proxies the server.
A forward proxy is a client-side proxy, acting on behalf of the client; the server does not know which client actually initiated the request.
A reverse proxy is a server-side proxy, acting on behalf of the server; the client does not know which server actually provides the service.

3. Load balancing

  The client sends multiple requests to the server, and the server processes the requests. Some of them may need to interact with the database. After the server completes processing, it returns the results to the client.
  This architecture model suits early systems that were relatively simple and handled relatively few concurrent requests, and its cost is low. However, as traffic and data volumes grow rapidly and business logic becomes more complex, this architecture makes the server respond more and more slowly to client requests, and under particularly heavy concurrency the server can easily crash outright. This is clearly a problem caused by a bottleneck in server performance, so how do we solve it?
  Our first thought may be to upgrade the server's configuration, for example increasing CPU frequency or adding memory, i.e. improving the server's physical performance. However, we know that Moore's Law is increasingly breaking down, and hardware performance improvements can no longer keep up with ever-growing demand. The most obvious example is Tmall's Double Eleven: the instantaneous traffic to a hot-selling product is enormous, and for a system architecture like the one above, even upgrading the machine to the best available physical configuration cannot meet the demand. So what should we do?
  The analysis above rules out raising a single server's physical configuration, i.e. the vertical approach will not work. What about increasing the number of servers horizontally? This is where the concept of a cluster comes in: when a single server cannot solve the problem, we add more servers and distribute the requests across them. Instead of concentrating requests on a single machine, we spread the load over multiple servers. This is what we call load balancing.
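
A sketch of this in Nginx, assuming two backend servers at made-up addresses: an upstream group names the servers, and proxy_pass spreads requests across them (round-robin by default):

    http {
        upstream myserver {                   # placeholder name for the server group
            server 192.168.17.129:8080;       # backend server 1 (made-up address)
            server 192.168.17.131:8080;       # backend server 2 (made-up address)
        }
        server {
            listen 80;
            location / {
                proxy_pass http://myserver;   # each request goes to one server in the group
            }
        }
    }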

4. Dynamic and static separation

Original deployment method: Tomcat is under great pressure.
Dynamic and static separation: to speed up a website's parsing, dynamic pages and static pages can be served by different servers, which speeds up parsing and reduces the pressure on a single server.
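
A sketch of such a split, with made-up paths and a made-up Tomcat address: Nginx serves the static files itself and hands everything else to Tomcat:

    server {
        listen 80;
        location /static/ {                    # static pages: served directly by Nginx
            root /data;                        # made-up path: files live under /data/static/
        }
        location / {                           # dynamic pages: forwarded to Tomcat
            proxy_pass http://127.0.0.1:8080;  # made-up Tomcat address
        }
    }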


2. Basic use of Nginx

1. Commonly used Nginx commands

  • Prerequisite for using the Nginx operation commands: you must first enter the nginx/sbin directory
    /usr/local/nginx/sbin
  • Check the version number of Nginx: ./nginx -v
  • Start Nginx: ./nginx
  • Shut down Nginx: ./nginx -s stop
  • Reload the Nginx configuration file: ./nginx -s reload

2. Nginx configuration file

(1) Nginx configuration file: /usr/local/nginx/conf/nginx.conf
(2) Composition of Nginx configuration file

The Nginx configuration file consists of three parts:

Part 1: Global block
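A sketch of what this part of the file typically looks like (the values follow the stock nginx.conf; the commented-out lines are optional directives):

    worker_processes  1;            # number of worker processes allowed to be generated
    #user   nobody;                 # user (group) that runs the Nginx server
    #pid    logs/nginx.pid;         # process PID storage path
    #error_log  logs/error.log;     # log storage path
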
From the beginning of the configuration file to the events block, this part mainly sets configuration directives that affect the overall operation of the Nginx server, including the user (group) that runs the Nginx server, the number of worker processes allowed to be generated, the process PID storage path, the log storage path and type, the inclusion of other configuration files, and so on.
For example, the first line configured above, worker_processes 1;, is a key setting for how much concurrent processing the Nginx server can handle: the larger the worker_processes value, the more concurrency it can support, although it is constrained by the hardware and software environment.

Part 2: events block

  The directives in the events block mainly affect the network connection between the Nginx server and its users. Common settings include whether to serialize network connections across multiple worker processes, whether to allow multiple network connections to be accepted at the same time, which event-driven model to use for handling connection requests, the maximum number of connections each worker process can support simultaneously, and so on.
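A sketch of a typical events block (the commented-out lines are optional directives):

    events {
        worker_connections  1024;   # max simultaneous connections per worker process
        # use epoll;                # choose the event-driven model (Linux)
        # multi_accept on;          # whether to accept several new connections at once
        # accept_mutex on;          # whether to serialize accepts across worker processes
    }
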
  The example above indicates that the maximum number of connections supported by each worker process is 1024. This part of the configuration has a considerable impact on Nginx's performance and should be tuned flexibly in practice.

Part 3: http block
This is the most frequently configured part of the Nginx server. Most features, such as proxying, caching, and log definitions, as well as the configuration of third-party modules, are set here. Note that the http block itself contains an http global block and server blocks.

① http global block
The http global block's configuration directives include file inclusion, MIME-TYPE definitions, log customization, connection timeouts, the upper limit of requests on a single connection, and so on.
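
A sketch of these directives (the values are illustrative defaults, not recommendations):

    http {
        include       mime.types;              # file inclusion / MIME-TYPE definitions
        default_type  application/octet-stream;
        access_log    logs/access.log;         # log customization
        keepalive_timeout   65;                # connection timeout
        keepalive_requests  100;               # upper limit of requests per connection

        # ... server blocks go here ...
    }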

② server block
This part is closely related to virtual hosts. From the user's perspective, a virtual host is exactly the same as an independent hardware host; the technology was created to save on Internet server hardware costs.
Each http block can include multiple server blocks, and each server block is equivalent to a virtual host.
Each server block is in turn divided into a global server block and can contain multiple location blocks at the same time.


  •   The most common configuration of the global server block is the listening configuration of this virtual host and the name or IP configuration of this virtual host.
  • location block
      A server block can be configured with multiple location blocks.
      The main function of this block is to match the request string received by the Nginx server (such as server_name/uri-string): the part other than the virtual host name (which can also be an IP alias), i.e. the /uri-string above, is matched in order to process a specific request. Functions such as address redirection, data caching, and response control, as well as the configuration of many third-party modules, are also performed here.
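
Putting the pieces together, a sketch of a minimal virtual host (the values follow the stock nginx.conf):

    server {
        listen       80;                   # listening configuration of this virtual host
        server_name  localhost;            # name (or IP) of this virtual host

        location / {                       # matches the /uri-string part of the request
            root   html;                   # serve files from the html directory
            index  index.html index.htm;
        }
    }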
