To learn nginx, reading this article is enough: it covers everything from download to installation.

1. Introduction to nginx

  1. What is nginx and what can it do?
    Nginx is a high-performance HTTP and reverse proxy server. It is very good at handling high concurrency and holds up under heavy load; according to some reports, it can support up to 50,000 concurrent connections.

Its key characteristics are a small memory footprint and strong concurrency handling. In practice, nginx's concurrency does perform better than that of comparable web servers. Well-known sites in mainland China that run on nginx include Baidu, JD.com, Sina, NetEase, Tencent, and Taobao.

2. Nginx as a web server
Nginx can serve static pages as a web server, and it also supports dynamic languages that speak the CGI protocol, such as Perl and PHP. It does not run Java directly; Java applications have to be served in cooperation with Tomcat. Nginx was developed with performance as its primary goal: the implementation is highly efficient and stands up to heavy load, reportedly supporting up to 50,000 concurrent connections.
https://lnmp.org/nginx.html
3. Forward proxy
Nginx can act not only as a reverse proxy for load balancing, but also as a forward proxy, for example to give clients access to the Internet. Forward proxy: if you picture the Internet outside the LAN as a huge resource pool, a client inside the LAN reaches the Internet through a proxy server; this kind of proxying is called a forward proxy.

In short: accessing a server through a proxy configured on the client is a forward proxy.
The client must be configured with the proxy server's address in order to reach the target website.
4. Reverse proxy
With a reverse proxy, the client is unaware of the proxy at all, because no client-side configuration is needed.
The client simply sends its request to the reverse proxy server; the reverse proxy selects a target server, fetches the data, and returns it to the client. To the outside world, the reverse proxy server and the target server appear as a single server: only the proxy's address is exposed, hiding the real server's IP address.
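As a sketch, a minimal reverse-proxy configuration looks like the following; the domain name and the backend address 127.0.0.1:8080 are placeholders, not values from this article:

```nginx
server {
    listen       80;
    server_name  example.com;   # placeholder domain

    location / {
        # Forward every request to the hidden backend;
        # the client only ever sees this nginx server.
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The two `proxy_set_header` lines pass the original host name and client address through, so the backend can still see who actually made the request.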
5. Load balancing
Increase the number of servers and distribute the requests among them: instead of concentrating all requests on a single server, spread them across multiple servers, distributing the load. That is what we call load balancing.

The client sends multiple requests to the server, and the server processes the requests, some of which may need to interact with the database. After the server finishes processing, it returns the results to the client.

   This architecture was a good fit in the early days, when systems were relatively simple and concurrent requests were few, and it kept costs low. But as the amount of information keeps growing, traffic and data volumes explode, and business logic becomes more complex, this architecture makes the server respond to clients ever more slowly; under very high concurrency, the server can simply crash. Clearly this is a bottleneck in server performance, so how do we solve it?

   The first idea that comes to mind is probably to upgrade the server's configuration, for example raising the CPU clock speed or adding memory, improving the machine's physical performance. But with Moore's law increasingly failing, hardware improvements can no longer keep up with ever-growing demand. The clearest example: on Tmall's Double Eleven shopping day, the instantaneous traffic to a hot-selling product is enormous; with an architecture like the one above, even upgrading the machine to today's top physical configuration could not meet the demand. So what then? The analysis above rules out adding physical capacity to a single server; in other words, scaling vertically no longer works, so what about scaling horizontally, increasing the number of servers? This is where the concept of a cluster comes in: when a single server cannot cope, we add more servers and distribute the requests among them, changing the old pattern of concentrating requests on one server into spreading them across many, distributing the load across different servers. That is what we call load balancing.
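A sketch of what this looks like in nginx: an `upstream` group names the pool of backend servers, and requests are distributed among them. The addresses and weights below are made-up examples:

```nginx
http {
    # Pool of backend servers; requests are distributed round-robin by default.
    upstream backend_pool {
        server 192.168.1.11:8080;
        server 192.168.1.12:8080;
        server 192.168.1.13:8080 weight=2;  # receives twice the share of requests
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend_pool;  # hand each request to the pool
        }
    }
}
```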

6. Separation of static and dynamic

To speed up website resolution, dynamic pages and static pages can be served by different servers, which both speeds up parsing and reduces the pressure on what used to be a single server.
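As a sketch, static/dynamic separation is usually expressed with two `location` blocks: static files are served straight from disk, and everything else is proxied to the application server. The paths and backend address here are assumptions for illustration:

```nginx
server {
    listen 80;

    # Static resources: served directly by nginx from the local disk.
    location /static/ {
        root    /data/www;
        expires 30d;           # let browsers cache static assets
    }

    # Everything else is dynamic: hand it to the application server.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```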

2. Nginx installation (Linux: CentOS as an example)
I have prepared all the packages used during the nginx installation, for convenience:
https://download.csdn.net/download/qq_40036754/11891855
I originally wanted to put them on Baidu Cloud, but that was troublesome, so I uploaded them directly to my resources; you can also contact me and I will send them to you directly.

  1. Preparations
    Open the virtual machine and use FinalShell to connect to the Linux operating system.
    Go to the nginx site to download the software:
    http://nginx.org/

Install its dependencies first, then install nginx itself.
Dependencies: pcre-8.37.tar.gz, openssl-1.0.1t.tar.gz, zlib-1.2.8.tar.gz, nginx-1.11.1.tar.gz (also provided in the package above).
The http module of nginx uses pcre to parse regular expressions, so the pcre library must be installed on Linux.
nginx uses zlib to gzip the body of HTTP responses, so the zlib library must be installed on Linux as well.
The openssl library is needed so that nginx can support HTTPS (that is, HTTP over the SSL protocol).
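As a shortcut, on CentOS these three dependencies (plus a compiler) can also come from the package manager instead of source builds. The package names below are the usual CentOS ones and are an assumption of this sketch, not taken from the article:

```shell
# Development headers for nginx's three build dependencies, plus gcc/make.
DEPS="gcc make pcre-devel zlib-devel openssl-devel"
# Run as root; the command is echoed here so it can be reviewed first.
echo yum install -y $DEPS
```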

  1. Start the installation
    There are two ways: one is to download directly, the other is to use the uploaded source packages; the source packages are mostly used here.
    My installation path: /usr/feng/
    For installation on a Mac, see here (it is not much different from the Linux installation): Mac OS nginx installation tutorial (success)
    Install pcre
    Method 1: wget http://downloads.sourceforge.net/project/pcre/pcre/8.37/pcre-8.37.tar.gz
    Method 2: upload the compressed source package, then decompress, compile, and install (the usual trilogy):
    1) decompress the file and enter the pcre directory
    2) run ./configure
    3) execute the command: make && make install
    Install openssl
    OpenSSL download address:
    http://distfiles.macports.org/openssl/
    1) decompress the file and enter the openssl directory
    2) run ./configure
    3) execute the command: make && make install
    Install zlib
    1) decompress the file and enter the zlib directory
    2) run ./configure
    3) execute the command: make && make install
    **Install nginx**
    1) decompress the file and enter the nginx directory
    2) run ./configure
    3) execute the command: make && make install
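All four installs above follow the same decompress / configure / make trilogy, so they can be wrapped in one small helper. This is only a sketch: the package names use the article's versions, and the DRYRUN guard is an addition of this sketch so the steps can be printed for review without touching the system.

```shell
# One helper for the repeated "decompress, configure, make && make install" steps.
build_and_install() {
  pkg="$1"                       # e.g. pcre-8.37 (tarball must be in the current dir)
  if [ -n "$DRYRUN" ]; then      # print the steps instead of running them
    echo "tar -zxvf $pkg.tar.gz && cd $pkg && ./configure && make && make install"
    return 0
  fi
  tar -zxvf "$pkg.tar.gz" && cd "$pkg" && ./configure && make && make install && cd ..
}

# The article's four packages, in dependency order (nginx last).
for pkg in pcre-8.37 openssl-1.0.1t zlib-1.2.8 nginx-1.11.1; do
  DRYRUN=1 build_and_install "$pkg"
done
```

Drop the `DRYRUN=1` prefix (and run as root) to perform the actual builds.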
  2. Running nginx
    After nginx is installed, an nginx folder is generated automatically under /usr/local.
    Enter this directory:
cd /usr/local/nginx

The content of the directory is as follows:

  • Enter the sbin folder; there are two files: nginx and nginx.old.
  • Execute the command ./nginx to start it.
  • Verify the start: ps -ef | grep nginx

  • It is already started.
  • Check nginx's default port (80 by default) and test it from a web page (much like Tomcat).
  • To view the port, open the configuration file: cd /usr/local/nginx/conf and look at nginx.conf. This is also nginx's configuration file; open it with vim:

Enter the IP address with port 80 in the browser, and it will display:

4. Firewall issues

To access nginx on Linux from a Windows system, access fails by default because of the firewall. Either (1) turn off the firewall, or (2) open the port to be accessed, port 80.

View open port numbers

firewall-cmd --list-all 

Set the open port number

firewall-cmd --add-service=http --permanent

firewall-cmd --add-port=80/tcp --permanent

Restart the firewall

firewall-cmd --reload

3. Common commands and configuration files of Nginx

1. Nginx common commands

a. Prerequisite for using nginx commands

The prerequisite for using nginx commands: you must enter the sbin folder under nginx's automatically generated directory.
Nginx has two directories:
the first: the installation directory; I put it in:

/usr/opt/

the second: the automatically generated directory:

/usr/local/nginx/

b. View the version number of nginx

./nginx -v

c. Start nginx

./nginx

d. Stop nginx

./nginx -s stop

e. Reload nginx

Run the command in the directory /usr/local/nginx/sbin; the configuration is reloaded automatically without restarting the server.

./nginx -s reload

2. Nginx configuration file

a. Configuration file location

/usr/local/nginx/conf/nginx.conf

b. Components of nginx

There are many lines starting with # in the configuration file; these are comments. Removing all the commented paragraphs, the simplified content is as follows:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

  • The nginx configuration file consists of three parts

Part 1: Global Blocks

The global block runs from the start of the configuration file to the events block. It sets directives that affect the overall operation of the nginx server, mainly including: the user (and group) that runs the nginx server, the number of worker processes allowed, the path of the process PID file, the path and type of the logs, and the import of other configuration files.
For example, the configuration in the first line above:

  worker_processes  1;

This is a key setting for nginx's concurrent processing: the larger the value of worker_processes, the more concurrency can be supported, but the value is constrained by the hardware and software environment.

The second part: the events block

For example, the above configuration

events {
    
    
    worker_connections  1024;
}

The directives in the events block mainly affect the network connections between the nginx server and users. Common settings include: whether to serialize accepting network connections across multiple worker processes, whether a worker process may accept multiple connections at once, which event-driven model to use for handling connection requests, and the maximum number of connections each worker process may hold simultaneously.
The example above says that each worker process supports at most 1024 connections.
This part of the configuration has a large impact on nginx's performance and should be tuned flexibly in practice.
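A rough capacity estimate follows from the two directives discussed above: the theoretical ceiling on simultaneous clients is about worker_processes × worker_connections (roughly half that for a reverse proxy, since each client then occupies two connections). With the sample values:

```shell
WORKER_PROCESSES=1
WORKER_CONNECTIONS=1024
# Theoretical ceiling on simultaneous clients for this sample configuration.
MAX_CLIENTS=$((WORKER_PROCESSES * WORKER_CONNECTIONS))
echo "$MAX_CLIENTS"   # 1024
```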

The third part: the http block

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

This is the most frequently configured part of the nginx server; most features, such as proxying, caching, log definitions, and third-party modules, are configured here.

Note that the http block itself can be divided into an http global block and server blocks.

http global block
Directives in the http global block cover file imports, MIME type definitions, log customization, connection timeouts, the maximum number of requests per connection, and so on.
server block
This block is closely related to virtual hosts. From the user's perspective, a virtual host behaves exactly like an independent physical host; the technology was created to save on Internet server hardware costs.
An http block can contain multiple server blocks, and each server block is equivalent to one virtual host.
Each server block in turn consists of a global server block and can contain multiple location blocks.
Global server block
The most common settings here are the virtual host's listening configuration and its name or IP configuration.
location block
One server block can contain multiple location blocks.
The main purpose of this block is to match the part of the request string received by the nginx server beyond the virtual host name (for example, the /uri-string part of server_name/uri-string) and process specific requests accordingly. Address redirection, data caching, response control, and the configuration of many third-party modules are also handled here.
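As a sketch of how this matching works, one server block can declare several location blocks with different match rules; the paths and backend address below are made up for illustration:

```nginx
server {
    listen 80;
    server_name localhost;

    # Exact match (=): only the URI /ping hits this block.
    location = /ping {
        return 200 "ok";
    }

    # Prefix match: anything under /images/ is served from disk.
    location /images/ {
        root /data;            # /images/a.png -> /data/images/a.png
    }

    # Default prefix match: everything else goes to the backend.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```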

Origin blog.csdn.net/weixin_52859229/article/details/129697164