[Nginx] Chapter 7 Nginx Principle and Optimization Parameter Configuration

7.1 Principle of Nginx

 

The benefits of the master-worker mechanism

First, because each worker is an independent process, no locks are needed to share resources between workers. This saves the overhead of locking and also makes programming and troubleshooting much easier.

Second, independent processes do not affect one another. If one worker exits, the others keep serving requests and the service is not interrupted, while the master process quickly starts a new worker to replace it.

Of course, an abnormal worker exit usually means there is a bug in the program. The abnormal exit causes the requests currently handled by that worker to fail, but it does not affect requests on the other workers, so the overall risk is reduced.

How many workers should be configured?

Like Redis, Nginx uses an I/O multiplexing mechanism. Each worker is an independent process with a single main thread that handles requests asynchronously and without blocking, so even tens of thousands of requests are not a problem. Each worker can therefore make full use of one CPU core.

Therefore, the most appropriate number of workers equals the number of CPU cores on the server. Fewer workers waste CPU capacity; more workers cause losses from frequent CPU context switching.

#Set the number of worker processes
worker_processes 4;

#Bind workers to CPUs (4 workers bound to 4 CPUs).
worker_cpu_affinity 0001 0010 0100 1000;

#Alternatively, on an 8-CPU machine, bind 8 workers to 8 CPUs (one bitmask per worker).
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;
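As a practical note, on Linux you can check the core count before choosing this value, and newer nginx versions (1.2.5 and later) can also detect it automatically:

#Check the number of CPU cores on Linux:
#    grep -c processor /proc/cpuinfo
#Or let nginx detect the number of cores itself:
#worker_processes auto;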

#Maximum number of connections per worker process (set inside the events block)
worker_connections 1024;

This value is the maximum number of connections each worker process can establish, so the maximum number of connections an nginx instance can hold is worker_connections * worker_processes. Of course, that is the total number of connections. For HTTP requests to local static resources, the maximum supported concurrency is worker_connections * worker_processes; since browsers that support HTTP/1.1 typically open two connections per visit, the maximum concurrency for ordinary static access is worker_connections * worker_processes / 2. If nginx is used as an HTTP reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4.

This is because, as a reverse proxy server, each concurrent request establishes one connection to the client and one connection to the backend service, occupying two connections.

Interview questions:

The first one: when a request is sent, how many connections does a worker occupy?

The second one: an nginx instance has one master and four workers, and each worker supports a maximum of 1024 connections. What is the maximum concurrency it supports?
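A quick worked sketch based on the formulas above, using the values from this chapter (worker_processes 4, worker_connections 1024):

worker_processes  4;
events {
    worker_connections  1024;
}
#Total connections:                                            4 * 1024 = 4096
#Ordinary static access (HTTP/1.1 browsers open 2 connections per visit): 4096 / 2 = 2048 concurrent clients
#Reverse proxy (each client connection also needs a backend connection):  4096 / 4 = 1024 concurrent requests
#So, following the chapter's formulas, a visit costs 2 connections for static content and 4 when proxied,
#and this instance supports at most 1024 concurrent proxied requests.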

7.2 Detailed explanation of Nginx.conf configuration

#For security, it is recommended to run worker processes as nobody rather than root.

#user  nobody;

#The most appropriate number of worker processes equals the number of CPU cores on the server.

worker_processes  2;

#Bind workers to CPUs, one bitmask per worker; use at most one worker_cpu_affinity line.
#Example: 4 workers bound to 4 CPUs
#worker_cpu_affinity 0001 0010 0100 1000;

#Example: 4 workers bound to 4 of 8 CPUs
#worker_cpu_affinity 00000001 00000010 00000100 00001000;

#error_log <path> <level>: path is where the log file is stored, level is the log level.
#The levels are: [ debug | info | notice | warn | error | crit ]
#From left to right the output becomes less detailed: debug is the most verbose and crit the least; the default level is error.

#error_log  logs/error.log;

#error_log  logs/error.log  notice;

#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {

    #This value is the maximum number of connections each worker process can establish,
    #so the maximum number of connections for this nginx instance is worker_connections * worker_processes.
    #For HTTP requests to local static resources, the maximum supported concurrency is worker_connections * worker_processes;
    #since browsers that support HTTP/1.1 typically open two connections per visit,
    #the maximum concurrency for ordinary static access is worker_connections * worker_processes / 2,
    #and when nginx is used as an HTTP reverse proxy the maximum concurrency is worker_connections * worker_processes / 4,
    #because each proxied concurrent request establishes one connection to the client and one to the backend service, occupying two connections.

    worker_connections  1024;  

    #This directive selects which I/O multiplexing mechanism nginx uses.
    #On Linux the usual choice is epoll; on BSD-family systems use kqueue.
    #The Windows build of nginx does not support these multiplexing mechanisms, so this value does not need to be set.
    use epoll;

    #Whether a worker that has accepted one connection should try to accept as many further connections as possible; the default is off.
    multi_accept on;  #Reduces client waiting time under high concurrency.

    #Controls nginx's accept mutex, the lock that workers compete for before accepting new connections
    #(historically on by default; off by default since nginx 1.11.3).
    accept_mutex  off;

}

http {

    #When the web server receives a request for a static resource file, it uses the file extension to look up the corresponding MIME type in the server's MIME configuration file, sets the Content-Type of the HTTP response to that MIME type, and the browser then decides how to handle the file based on the Content-Type value.

    include       mime.types;  #/usr/local/nginx/conf/mime.types

    #If no mapping can be found in mime.types, use the following as the default type (binary stream).

    default_type  application/octet-stream;

     #Access log location

     access_log  logs/host.access.log  main;

     #A typical access log entry:

     #101.226.166.254 - - [21/Oct/2013:20:34:28 +0800] "GET /movie_cat.php?year=2013 HTTP/1.1" 200 5209 "http://www.baidu.com" "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDR; .NET4.0C; .NET4.0E; .NET CLR 1.1.4322; Tablet PC 2.0); 360Spider"

     #1) 101.226.166.254: client IP address
     #2) [21/Oct/2013:20:34:28 +0800]: access time
     #3) GET: HTTP request method, typically GET or POST
     #4) /movie_cat.php?year=2013: the requested page is dynamic; movie_cat.php is the backend endpoint and year=2013 is its parameter
     #5) 200: response status; 200 means OK; other common codes are 301 (permanent redirect), 4XX (client error) and 5XX (server error)
     #6) 5209: number of bytes sent (in bytes)
     #7) "http://www.baidu.com": the referer, i.e. the page the request came from
     #8) "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; MDDR; .NET4.0C; .NET4.0E; .NET CLR 1.1.4322; Tablet PC 2.0); 360Spider": the user-agent field, which usually records the operating system, browser version, browser engine, etc.

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '

                       '$status $body_bytes_sent "$http_referer" '

                      '"$http_user_agent" "$http_x_forwarded_for"';

    #Enable sendfile, which transfers files directly from disk to the network socket; useful for large file uploads/downloads and improves I/O efficiency.
    sendfile        on;  #Optimizes large-file transfer.

   

    #How long to keep the connection open after a request completes; a value of 0 closes the connection as soon as the request finishes.

    #keepalive_timeout  0;

    keepalive_timeout  65;

    #Enable or disable the gzip module
    #gzip  on;  #Compress files before transfer to improve efficiency.

    #Minimum response size to compress; the size is taken from the Content-Length response header.
    #gzip_min_length 1k;  #Responses larger than this are compressed; smaller ones are sent as-is.

    #gzip compression level: 1 is the lowest ratio but fastest, 9 is the highest ratio but slowest (faster transfer, more CPU).
    #gzip_comp_level 4;

    #MIME types to compress; "text/html" is always compressed whether or not it is listed.
    #gzip_types text/plain text/css application/json application/x-javascript text/xml;

    #Static/dynamic separation

    #Server-side cache for static resources: maximum number of cached entries (open file descriptors and metadata) and the inactivity timeout.
    open_file_cache max=655350 inactive=20s;

    #Minimum number of uses within the inactivity period for an entry to stay in the cache; otherwise it is considered inactive and removed.
    open_file_cache_min_uses 2;

    #Interval at which cache entries are checked for validity.
    open_file_cache_valid 30s;

    

upstream myserver {

    #1. Round-robin (the default)
    #   Requests are distributed to the backend servers one by one in order; if a backend goes down, it is removed automatically.

    #2. Weight (weight)
    #   Sets the polling probability; weight is proportional to the share of requests and is used when backend servers have uneven performance.

    #3. IP binding (ip_hash)
    #   Each request is assigned according to a hash of the client IP, so every visitor always reaches the same backend server, which solves the session problem.

    #4. Backup servers (backup)
    #   A server marked backup is not used in normal operation; traffic goes to it only when all non-backup servers are down, and switches back once a non-backup server comes up again. For example:
    #     server 192.168.161.132:8080 weight=1;
    #     server 192.168.161.132:8081 weight=1 backup;

    #5. fair (third party, requires an additional module)
    #   Requests are assigned according to backend response time; servers with shorter response times are preferred.

    #6. url_hash (third party)
    #   Requests are assigned according to a hash of the requested URL, so each URL always reaches the same backend server; useful when the backends act as caches.

    # ip_hash;
    server 192.168.161.132:8080 weight=1;
    server 192.168.161.132:8081 weight=1;

    #fair;
    #hash $request_uri;
    #hash_method crc32;
}
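#As a small illustration of method 2 (weights) above, here is a sketch that is not part of the original configuration:
#a hypothetical upstream, named myserver_weighted for this example, in which the first backend receives roughly
#twice as many requests as the second (the addresses reuse the example hosts from this chapter).
#upstream myserver_weighted {
#    server 192.168.161.132:8080 weight=2;
#    server 192.168.161.132:8081 weight=1;
#}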

    server {

        #Listening port
        listen       80;

        #Server name
        server_name  192.168.161.130;

        #Character set
        #charset utf-8;

        #location [=|~|~*|^~] /uri/ { … }
        # =   exact match
        # ~   regex match, case-sensitive
        # ~*  regex match, case-insensitive
        # ^~  prefix match that disables regex matching
        #Matching rules (a short trace of these rules follows the location examples below):
        # 1. Matching happens in two phases: ordinary (prefix) matching first, then regex matching.
        # 2. Ordinary matching first looks for an exact location via "=".
        #   2.1 If there is no exact match, the location with the longest matching prefix is selected.
        #   2.2 If that location is marked with ^~, it is the final result; otherwise the result is remembered and regex matching continues.
        # 3. Regex matching tries the ~ and ~* locations from top to bottom; the first one that matches wins and no further regex locations are tested.
        # 4. If no regex matches, the previously remembered ordinary match is used.
        #Locations whose modifier does not start with a tilde are ordinary (prefix) matches.

        location / {   #Matches any request, since every URI begins with /. However, regex matches and longer prefix matches take precedence.

   

            #Default document root for this server
            root   html;  #Relative path (leading ./ omitted), i.e. /usr/local/nginx/html

            #Default index file names
            index  index.html index.htm;

            #Reverse proxy target (the upstream defined above)
            proxy_pass http://myserver;

            #Timeout for establishing a connection to the proxied server
            proxy_connect_timeout 10;

            proxy_redirect default;

         }

        #Ordinary (prefix) match
        location  /images/ {
            root images;
        }

        #Prefix match with ^~ (disables regex matching)
        location ^~ /images/jpg/ {  #Matches any request beginning with /images/jpg/ and stops searching; regular expressions are not tested.
            root images/jpg/;
        }

        #Regex match
        location ~* \.(gif|jpg|jpeg)$ {
            #Serve all static image files directly from disk
            root pic;

            #expires sets the browser cache lifetime to 3 days; if the static content rarely changes it can be set longer,
            #which saves bandwidth and reduces load on the server.
            expires 3d;  #Cache for 3 days
        }
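        #A short worked trace of the matching rules above, using the locations defined in this server block (an illustration, not from the original):
        #  /50x.html         -> the exact match "location = /50x.html" below wins immediately.
        #  /images/jpg/a.jpg -> the longest matching prefix is "^~ /images/jpg/"; because of ^~, regex locations are skipped and it wins.
        #  /images/a.gif     -> the longest matching prefix is "/images/" (no ^~); the regex ~* \.(gif|jpg|jpeg)$ also matches, so the regex location wins.
        #  /index.html       -> no regex matches, so the remembered prefix match "location /" is used.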

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html

        #

        error_page   500 502 503 504  /50x.html;

        location = /50x.html {

            root   html;

        }

    }

}
