Nginx from Beginner to Practice: Basics


First, the environment

Server OS version: CentOS 7.2

To avoid running into strange problems during the learning phase, please confirm the following four points (experienced readers can skip this):

  1. Confirm the system has network access

  2. Confirm yum is available

  3. Confirm iptables is stopped

  4. Confirm SELinux is disabled

# Check firewall status
systemctl status firewalld.service
# Stop the firewall (temporarily)
systemctl stop firewalld.service
# Check SELinux status
getenforce
# Disable SELinux temporarily
setenforce 0

Install some basic tools; the system normally ships with them already (install them just in case):

yum -y install gcc gcc-c++ autoconf pcre pcre-devel make automake
yum -y install wget httpd-tools vim

Second, what is Nginx?

Nginx is an open-source, high-performance, reliable HTTP middleware and proxy service.
Other well-known HTTP servers include:

  1. HTTPD - Apache Software Foundation

  2. IIS - Microsoft

  3. GWS - Google (not open to the public)

In recent years Nginx's market share has kept climbing. Why the surge? Read on and you will find out.

 

Third, why do we choose Nginx?

1. IO multiplexing (epoll)

How should we understand this? Here is an example.
Teachers A, B, and C face the same task: helping a class of students finish an in-class assignment.
Teacher A starts at the first row and answers questions student by student, one at a time. A wastes a lot of time, some students have not even finished their work when he reaches them, and the constant back-and-forth is extremely slow.
Teacher B is a ninja. Seeing that A's method does not work, he uses a shadow-clone technique to split into several copies of himself and helps several students at the same time. Before all the questions are answered, teacher B has burned so much energy that he is exhausted.
Teacher C is the shrewd one. He tells the students to raise their hands once they have finished the assignment, and only goes over to guide the students who have raised their hands. By letting the students actively signal him, he frees himself from the "concurrency" problem.
Teacher C is Nginx.
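In nginx itself this event model is chosen in the events block of the main configuration. A minimal sketch, assuming a Linux server where epoll is available; the connection count is only an illustrative value:

events {
    use epoll;                   # use the epoll event model on Linux (nginx picks a suitable model automatically if omitted)
    worker_connections 10240;    # maximum number of connections per worker process
}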

2. Lightweight

  • Few functional modules - Nginx keeps only the modules required for HTTP; everything else can be added later as plug-in modules

  • Modular code - well suited to secondary development, e.g. Alibaba's Tengine

3. CPU affinity

Nginx can bind its worker processes to CPU cores, so that each worker process always runs on a fixed CPU. This reduces the cache misses caused by switching CPUs and improves performance.
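As a sketch, on an assumed 4-core machine the binding could look like this (each bit mask pins one worker to one core):

worker_processes  4;                        # one worker per CPU core
worker_cpu_affinity 0001 0010 0100 1000;    # pin workers 1-4 to cores 1-4 respectively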

 

Fourth, the installation directory

I used the LNMP one-click installation package from https://lnmp.org; it is simple and convenient - recommended!

# Run the command below and follow the prompts; it installs nginx, php, and mysql.
# See the lnmp official site for a more detailed walkthrough.
# Default installation directory: /usr/local
wget -c http://soft.vpser.net/lnmp/lnmp1.4.tar.gz && tar zxf lnmp1.4.tar.gz && cd lnmp1.4 && ./install.sh lnmp

Fifth, the basic configuration

# Open the main configuration file (path when installed with the lnmp package)
vim /usr/local/nginx/conf/nginx.conf
----------------------------------------
user                    # system user the nginx service runs as
worker_processes        # number of worker processes; usually kept equal to the number of CPU cores
error_log               # nginx error log
pid                     # pid file written when nginx starts

events {
    worker_connections  # maximum number of connections allowed per worker process
    use                 # kernel event model nginx uses
}
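A filled-in sketch of this global section, with purely illustrative values (the user name and log paths are assumptions, not lnmp defaults):

user  www;
worker_processes  4;
error_log  /usr/local/nginx/logs/error.log  warn;
pid        /usr/local/nginx/logs/nginx.pid;

events {
    worker_connections  10240;
    use epoll;
}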
 

We use nginx's HTTP service. Inside the http block of nginx.conf there can be many server blocks, each corresponding to one virtual host or domain name.

http {
    ... ...                                # the http-level options are covered in detail later
    server {
        listen 80;                         # listening port
        server_name localhost;             # host/domain name
        location / {                       # path for the home page
            root /xxx/xxx;                 # default directory
            index index.html index.htm;    # default files
        }
        error_page  500 504  /50x.html;    # redirect to 50x.html when these status codes occur
        location = /50x.html {             # when /50x.html is requested
            root /xxx/xxx/html;            # directory where 50x.html lives
        }
    }
    server {
        ... ...
    }
}

A server block can contain multiple location blocks, so we can configure different behavior for different request paths.
Now let's look at the http-level configuration details:

http {
    sendfile  on;                               # efficient file transfer mode; should be enabled
    keepalive_timeout   65;                     # keep-alive timeout between client and server
    log_format  main   XXX;                     # define a log format named "main"
    access_log  /usr/local/access.log  main;    # access log path, using the "main" format
}

Sixth, modules

You can view which modules nginx was compiled with. There are too many modules to cover in this post, so look up the ones you need yourself.

 

# Uppercase V prints the version together with all compiled-in modules; lowercase v prints only the version
nginx -V
# Check whether this configuration file contains syntax errors
nginx -tc /usr/local/nginx/conf/nginx.conf
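A few related commands that are often used right after the syntax check (standard nginx command-line options):

# Reload the configuration without dropping existing connections (run after nginx -t passes)
nginx -s reload
# Stop gracefully / stop immediately
nginx -s quit
nginx -s stop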

 

 

Nginx from Beginner to Practice: Scenarios

First, static resource web service

1. Static resource types

Files that are not dynamically generated by the server at run time; in other words, the file that corresponds to the request can be found directly on the server:

  1. Browser-rendered files: HTML, CSS, JS

  2. Pictures: JPEG, GIF, PNG

  3. Video: FLV, MPEG

  4. Files: TXT, any downloadable file

2. Static resource service scenario: CDN

What is a CDN? Take Beijing as an example: a user requests a file, but the file is stored in a resource center in Xinjiang. Requesting it directly from Xinjiang means a long distance and high latency. Using nginx for static resources and origin fetches, a distributed resource center can be placed in Beijing, and the Beijing user's request is dynamically routed to that nearby center, so the transfer delay is minimized.

Nginx static resource configuration

 

Configuration contexts: http, server, location

# Efficient file reads
http {
     sendfile   on;
}

# With sendfile enabled, tcp_nopush improves the efficiency of network packet transmission.
# tcp_nopush sends the file to the client in one batch: imagine you have ten parcels and the courier
# delivers one per trip, making ten trips; with tcp_nopush on, the courier waits until all ten parcels
# are ready for dispatch and delivers them to you in a single trip.
http {
     sendfile   on;
     tcp_nopush on;
}

# tcp_nodelay enables real-time transmission, the opposite approach to tcp_nopush: it favors low
# latency, but it only takes effect on keep-alive connections.
http {
     sendfile   on;
     tcp_nopush on;
     tcp_nodelay on;
}

 

 

 

# Compress files before transmission (smaller payloads, faster transfers)
# Applies when the requested resource ends in .gif or .jpg
location ~ .*\.(gif|jpg)$ {
    gzip on;                  # enable gzip
    gzip_http_version 1.1;    # HTTP version used for transmission
    gzip_comp_level 2;        # compression level; higher compresses more but costs more server CPU
    gzip_types text/plain application/javascript application/x-javascript text/javascript text/css application/xml application/xml+rss image/jpeg image/gif image/png;    # MIME types to compress
    root /opt/app/code;       # directory in which to look up the requested file
}
# Serve files that are already compressed
# When the request path starts with /download, e.g. www.baidu.com/download/test.img,
# look for the file test.img.gz under /opt/app/code; what reaches the front end is a viewable img file
location ~ ^/download {
    gzip_static on;    # enable serving of pre-compressed .gz files
    tcp_nopush on;
    root /opt/app/code;
}

Second, browser caching

The HTTP protocol defines caching mechanisms (e.g. Expires, Cache-Control)
to reduce load on the server and lower latency.

1. Without a browser cache

Browser request -> no cache -> request the web server -> server responds -> render

During the render stage the browser builds its cache according to the caching settings in the response.

2. With a browser cache

Browser request -> cache exists -> check whether the local cache has expired -> not expired -> render

If the cache has expired, a new request is sent to the web server.

3. Configuration Syntax

 

location ~ .*\.(html|htm)$ {
    expires 12h;    # cache for 12 hours
}

 

 

When a static file server responds, the headers also carry two validator values, Last-Modified and ETag. On the next request the browser sends these two tags back (as If-Modified-Since and If-None-Match); the server checks whether the file has changed, and if not it returns only the ETag and Last-Modified headers with status code 304. The browser then knows the content has not changed and uses its local cache directly. This flow still hits the server, but very little content is transferred.
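Both validators can be controlled from nginx. A small sketch, noting that etag is already on by default in current nginx versions:

location ~ .*\.(html|htm)$ {
    etag on;                    # send an ETag with the response (the default)
    if_modified_since exact;    # honor If-Modified-Since with an exact timestamp comparison
    expires 12h;
}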

Third, cross-origin access

Configuring cross-origin access in nginx:

 

location ~ .*\.(html|htm)$ {
     add_header Access-Control-Allow-Origin *;
     add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
     #add_header Access-Control-Allow-Credentials true;    # allow cookies to be sent cross-origin
}

 

 

When Access-Control-Allow-Credentials is set to true in the response, Access-Control-Allow-Origin cannot be *; a specific domain must be specified instead.
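A sketch of that combination; www.example.com is just a placeholder for the front-end origin you actually want to allow:

location ~ .*\.(html|htm)$ {
     add_header Access-Control-Allow-Origin http://www.example.com;    # a specific origin instead of *
     add_header Access-Control-Allow-Credentials true;                 # cookies may now be sent cross-origin
     add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;
}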

For related content, you can look at how cross-origin requests are handled in code with Laravel middleware; the nginx cross-origin configuration works on the same principle.

Fourth, anti-hotlinking

Prevent the server's static resources from being used by other websites.
Only the referer-based approach in nginx is described here; more thorough approaches will be covered in later articles.

First, we need to understand one nginx variable:

 

$http_referer    # the address of the page that issued the current request. In other words, when you
                 # open the www.baidu.com home page, that is the first visit, so $http_referer is empty;
                 # but the page then needs a home-page image, and when that image is requested,
                 # $http_referer is www.baidu.com

 

 

Then add the configuration:

 

location ~ .*\.(jpg|gif)$ {
    # valid_referers lists which $http_referer values are allowed to access the resource
    # none    means requests without any $http_referer, e.g. a first visit, when $http_referer is empty
    # blocked means $http_referer is not a standard address, an abnormal domain, etc.
    # only this IP is allowed
    valid_referers none blocked 127.xxx.xxx.xx;
    if ($invalid_referer) {    # set to 1 when the referer does not satisfy the rules above
        return 403;
    }
}

Fifth, HTTP proxy service

Nginx can act as a proxy for a variety of protocols:

  • HTTP

  • ICMP, POP, IMAP

  • HTTPS

  • RTMP

1. Forward vs. reverse proxy

The difference lies in what is being proxied:

A forward proxy proxies the client.
A reverse proxy proxies the server.
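For completeness, a rough sketch of the forward-proxy side (nginx only supports this for plain HTTP; the listen port and DNS resolver here are assumptions):

server {
    listen 8888;
    resolver 8.8.8.8;                                # DNS server used to resolve whatever host the client asks for
    location / {
        proxy_pass http://$http_host$request_uri;    # forward the request to the host the client requested
    }
}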

2. Reverse Proxy

 

Syntax:   proxy_pass URL
Default:  --
Context:  location

# Proxying a port
# Scenario: port 80 is open to the outside, port 8080 is closed externally, and the client needs to reach 8080
# When configuring a proxy_pass forward in nginx, a trailing / on the URL means an absolute root path;
# without the /, the path is relative and the matched location prefix is passed along to the upstream as well
server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:8080/;
        proxy_redirect default;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;    # pass on the client's real IP
        proxy_connect_timeout 30;                   # timeouts
        proxy_send_timeout 60;
        proxy_read_timeout 60;
        proxy_buffer_size 32k;
        proxy_buffering on;                         # enable buffering to reduce disk IO
        proxy_buffers 4 128k;
        proxy_busy_buffers_size 256k;
        proxy_max_temp_file_size 256k;              # when the in-memory buffer limit is exceeded, spill to temp files
    }
}

 

 

Nginx from Beginner to Practice: Load Balancing and Caching

First, load balancing

Load balancing is built on the reverse proxying introduced in the previous chapter: client requests are distributed by nginx (the reverse proxy) across a group of different back-end servers.

This group is called the server pool (upstream server). Each server in the pool is a unit, and the pool serves requests across its units in rotation, which is load balancing.

 

Configuration syntax:  upstream name { ... }
Default:  --
Context:  http

upstream custom_name {              # custom group name
    server x1.baidu.com;            # can be a domain name
    server x2.baidu.com;
    # server x3.baidu.com;
    #     down            do not participate in load balancing
    #     weight=5;       weight; the higher the value, the more requests are assigned
    #     backup;         reserved backup server
    #     max_fails       number of failures allowed
    #     fail_timeout    how long the server is paused once max_fails is exceeded
    #     max_conns       limit on the maximum number of accepted connections
    #     choose parameters to suit each server's capability
    # server 106.xx.xx.xxx;         can be an IP
    # server 106.xx.xx.xxx:8080;    can include a port
    # server unix:/tmp/xxx;         unix sockets are also supported
}

 

Suppose we have three servers with the following (assumed) IP addresses: front-end load-balancing server A (127.0.0.1), back-end server B (127.0.0.2), and back-end server C (127.0.0.3).

Create a new file, proxy.conf, containing the reverse-proxy configuration described in the previous chapter:

 

proxy_redirect default;
proxy_set_header Host $http_host;
proxy_set_header X-Real-IP $remote_addr;
proxy_connect_timeout 30;
proxy_send_timeout 60;
proxy_read_timeout 60;
proxy_buffer_size 32k;
proxy_buffering on;
proxy_buffers 4 128k;
proxy_busy_buffers_size 256k;
proxy_max_temp_file_size 256k;

# Configuration on server A
http {
    ...
    upstream xxx {
        server 127.0.0.2;
        server 127.0.0.3;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://xxx;    # the custom upstream name defined above
            include proxy.conf;
        }
    }
}

# Configuration on server B and server C
server {
    listen 80;
    server_name localhost;
    location / {
        index  index.html;
    }
}

 

Scheduling algorithms

  • Round robin: requests are assigned to the back-end servers one by one, in the order they arrive

  • Weighted round robin: the larger the weight value, the higher the probability of receiving requests (see the sketch after this list)

  • ip_hash: requests are assigned by hashing the client IP, so requests from the same IP always land on the same back-end server

  • least_conn: least connections; new requests go to the machine with the fewest active connections

  • url_hash: requests are assigned by hashing the requested URL, so the same URL is always directed to the same back-end server

  • hash key: hash on a custom key
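A small sketch combining several of the per-server parameters mentioned above (all values are illustrative):

upstream xxx {
    server 127.0.0.2 weight=5;                        # receives roughly five times the traffic of a weight=1 server
    server 127.0.0.3 max_fails=3 fail_timeout=30s;    # taken out of rotation for 30s after 3 failures
    server 127.0.0.4 backup;                          # only used when the other servers are unavailable
}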

ip_hash Configuration

 

upstream xxx {
    ip_hash;
    server 127.0.0.2;
    server 127.0.0.3;
}

 

ip_hash has a flaw: when there is another proxy layer in front of the current server, the user's real IP cannot be obtained; the IP obtained is that of the previous front-end server. For this reason nginx 1.7.2 introduced url_hash.

url_hash Configuration

 

upstream xxx {
    hash $request_uri;
    server 127.0.0.2;
    server 127.0.0.3;
}

Second, the caching service

1. Cache types

  • Server-side cache: data cached on the back-end server, e.g. in redis or memcache

  • Proxy cache: data cached on the proxy server or middleware; the content comes from the back-end server but is stored locally on the proxy

  • Client cache: data cached in the browser

2. nginx proxy cache
The client requests nginx; nginx checks whether it holds the data in its local cache. If so, it returns the data to the client directly; if not, it requests it from the back-end server.

 

http {
    proxy_cache_path    /var/www/cache              # cache directory
                        levels=1:2                  # directory hierarchy levels
                        keys_zone=test_cache:10m    # name of the keys zone : its size (1m can hold about 8000 keys)
                        max_size=10g                # maximum size of the cache directory (least-used entries are evicted beyond this)
                        inactive=60m                # entries not accessed within 60 minutes are cleaned up
                        use_temp_path=off;          # whether to use a separate temporary-file directory; off stores temp files in the cache directory
    server {
        ...
        location / {
            proxy_cache test_cache;                            # enable the cache, using the name given in keys_zone
            proxy_cache_valid 200 304 12h;                     # cache responses with status 200 or 304 for 12 hours
            proxy_cache_valid any 10m;                         # cache other statuses for 10 minutes
            proxy_cache_key $host$uri$is_args$args;            # cache key
            add_header Nginx-Cache "$upstream_cache_status";
        }
    }
}

 

 

When there are specific requests that we do not want cached, add the following to the configuration above:

 

server {
    ...
    if ($request_uri ~ ^/(login|register)) {    # when the request path starts with login or register
        set $nocache 1;                         # set a custom variable to true
    }
    location / {
        proxy_no_cache $nocache $arg_nocache $arg_comment;
        proxy_no_cache $http_pragma $http_authorization;
    }
}

 

3. Slice requests

Early versions of nginx did not support caching for sliced requests of large files; the slice module added in version 1.9 implements this feature.
The front end makes a request and nginx obtains the size of the requested file. If it exceeds the slice size we define, the request is split into several smaller requests to the back end, and each piece is cached on the nginx side as a separate file.

Advantage: the data received for each sub-request forms its own file, so if one request is interrupted the other requests are unaffected. Without slicing, an interrupted request means the whole file has to be fetched again from the beginning; with slicing enabled, only the small files not yet received need to be fetched.

Disadvantage: when the file is very large or the slice size is very small, file descriptors may be exhausted, among other issues.

 

Syntax:   slice size;    # for large-file requests, size is the size of each small slice
Default:  slice 0;
Context:  http, server, location
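A sketch that ties slicing to the proxy cache defined earlier; the 1m slice size and the upstream address are assumptions, and $slice_range has to appear both in the cache key and in the Range header sent upstream:

location / {
    slice 1m;                                         # split large responses into 1 MB pieces
    proxy_cache test_cache;
    proxy_cache_key $uri$is_args$args$slice_range;    # cache every slice under its own key
    proxy_set_header Range $slice_range;              # request only this byte range from the back end
    proxy_cache_valid 200 206 12h;                    # sliced responses come back as 206 Partial Content
    proxy_pass http://127.0.0.1:8080;
}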

Nginx from Beginner to Practice: Frequently Asked Questions

First, priority among multiple virtual hosts with the same server_name

 

# When two virtual hosts use the same domain name, nginx prints a warning on restart,
# but this does not stop nginx from running
server {
    listen 80;
    server_name www.baidu.com;
    ...
}
server {
    listen 80;
    server_name www.baidu.com;
    ...
}
...
The configuration read first takes priority. When several files are pulled in via include, the earlier a file sorts, the earlier it is read.

 

Second, location match priority

=     # exact match on a plain string; the match must be complete
^~    # plain string match; here it means a prefix match
~ ~*  # perform a regular-expression match (~* is case-insensitive)

# With an exact match, matching stops as soon as the match succeeds
# A plain prefix match does not stop the search: nginx keeps looking for an even more precise (regex) match
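A sketch that puts the priorities side by side (paths and bodies are placeholders):

location = /logo.png         { ... }    # 1. exact match, wins immediately
location ^~ /static/         { ... }    # 2. prefix match that also suppresses regex checking
location ~ \.(gif|jpg)$      { ... }    # 3. regex match, preferred over plain prefix matches
location /                   { ... }    # 4. generic prefix match, used when nothing above applies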

Third, using try_files

try_files is used to check whether a file exists:

 

location / {
    try_files $uri $uri/ /index.php;
}
# First check whether a file exists at $uri; if so, return it to the user directly
# If no file exists at $uri, check whether anything exists at the path $uri/
# If there is still nothing, hand the request over to /index.php

Example:
location / {
    root /test/index.html;
    try_files $uri @test;
}
location @test {
    proxy_pass http://127.0.0.1:9090;
}
# When / is requested, check whether the file /test/index.html exists
# If it does not, let the program listening on port 9090 handle the request

 

Fourth, the difference between alias and root

location /request_path/image/ {
    root /local_path/image/;
}
# When we visit http://xxx.com/request_path/image/cat.png
# the file looked up on disk is /local_path/image/request_path/image/cat.png (root appends the full request path)

location /request_path/image/ {
    alias /local_path/image/;
}
# When we visit http://xxx.com/request_path/image/cat.png
# the file looked up on disk is /local_path/image/cat.png (alias replaces the matched location prefix)

 

Fifth, obtaining the user's real IP

When a request passes through multiple proxy servers, the user's IP gets overwritten by the proxy's IP.

 

# Set on the first proxy server
    set x_real_ip=$remote_addr
# Read on the last proxy server
    $x_real_ip = IP1 (the original client IP)
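In practice this is usually carried in request headers instead. A sketch using the conventional X-Real-IP and X-Forwarded-For headers (the upstream address is an assumption):

# On the first (outermost) proxy server
location / {
    proxy_set_header X-Real-IP       $remote_addr;                  # the original client IP
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;    # appends each hop's address
    proxy_pass http://127.0.0.2;
}
# On the final server the values are available as $http_x_real_ip and $http_x_forwarded_for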

 

Sixth, common Nginx error codes

 

413 Request Entity Too Large    # uploaded file too large; adjust client_max_body_size
502 Bad Gateway                 # the back-end service is not responding
504 Gateway Time-out            # the back-end service timed out
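For the 413 case, a sketch of the relevant directive (the 50m limit is an arbitrary example; the default is 1m):

http {
    client_max_body_size 50m;    # allow request bodies (uploads) of up to 50 MB
}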


Origin www.cnblogs.com/geass-jango/p/11588712.html