Learning Nginx: this one article is enough. An essential collection for operations and maintenance (overcoming all obstacles, passing every trial)

Table of Contents

Foreword

One, Introduction to Nginx

1. What is Nginx and what can it do

2. Nginx as a web server

3. Forward Proxy

4. Reverse Proxy

5. Load Balancing

6. Separation of static and dynamic content

Two, Nginx installation (Linux: CentOS as an example)

1. Preparations

2. Installation steps

3. Run nginx

4. Firewall issues

Three, Common Nginx commands and the configuration file

1. Common Nginx commands

a. Prerequisite for using Nginx commands

b. Check the version number of nginx

c. Start nginx

d. Stop nginx

e. Reload nginx

2. Nginx configuration file

a. Configuration file location

b. The parts of the Nginx configuration file

Four, Nginx reverse proxy configuration example 1.1

1. The effect to achieve

2. Preparation


Foreword

One, Introduction to Nginx

1. What is Nginx and what can it do

  • Nginx is a high-performance HTTP and reverse proxy server. Its ability to handle high concurrency is very strong, and it withstands high load well; it has been reported to support up to 50,000 concurrent connections.
  • Its hallmarks are low memory usage and high concurrency, and Nginx's concurrency performance does in fact beat other web servers of the same type. Websites in mainland China that use Nginx include Baidu, JD, Sina, NetEase, Tencent, and Taobao.

2. Nginx as a web server

  • Nginx can act as a web server for static pages, and it also supports dynamic languages via the CGI protocol, such as Perl and PHP. It does not support Java, however; Java applications can only be served in cooperation with Tomcat. Nginx was designed and developed specifically for performance: performance is its foremost consideration, implementation efficiency is taken very seriously, it withstands high load well, and it has been reported to support up to 50,000 concurrent connections. https://lnmp.org/nginx.html

3. Forward Proxy

Nginx can not only act as a reverse proxy and load balancer, but can also serve as a forward proxy. Forward proxy: think of the Internet outside the LAN as a huge resource library. For a client inside the LAN to access the Internet, it must go through a proxy server; this kind of proxy service is called a forward proxy.

  • Put simply: reaching a server by going through a proxy server that acts on the client's behalf is a forward proxy.
  • With a forward proxy, we configure a proxy server on the client so it can reach the designated sites.
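As an illustration (not from the original article), a minimal plain-HTTP forward-proxy server block might look like the following sketch; the listen port and resolver address are assumptions, and proxying HTTPS traffic would require additional modules:

```nginx
server {
    listen 8888;                 # hypothetical port the LAN clients point at
    resolver 8.8.8.8;            # DNS server the proxy uses to resolve target hosts

    location / {
        # Forward the request to whichever host the client asked for.
        proxy_pass http://$host$request_uri;
    }
}
```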

4. Reverse Proxy

  • With a reverse proxy, the client is actually unaware of the proxy, because the client needs no configuration at all.
  • We only need to send the request to the reverse proxy server. The reverse proxy server selects a target server, fetches the data, and returns it to the client. In this case the reverse proxy server and the target server appear to the outside as one server: only the proxy server's address is exposed, and the real server's IP address is hidden.
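A minimal reverse-proxy sketch (the domain and backend address below are hypothetical): clients only ever see the proxy on port 80, while the real server on port 8080 stays hidden:

```nginx
server {
    listen 80;
    server_name example.com;                    # the only address clients see

    location / {
        proxy_pass http://127.0.0.1:8080;       # hidden target server
        proxy_set_header Host $host;            # pass the original Host header through
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```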

5. Load Balancing

  • Increase the number of servers and distribute requests across them: instead of requests concentrating on a single server, they are spread over multiple servers and the load is distributed. This is what we call load balancing.
  • In the original model, the client sends its requests to one server; the server processes each request, possibly interacting with a database, and when it has finished, returns the result to the client.

 This architectural model suited early systems, when business was relatively simple, concurrent requests were relatively few, and cost was low. But as the amount of information grows and traffic, data volume, and system complexity increase rapidly, this architecture makes the server's responses to clients increasingly slow, and under especially high concurrency it can even cause the server to crash outright. This is clearly a problem caused by the server's performance bottleneck, so how do we resolve this situation?

Our first thought may be to upgrade the server's configuration, for example raising the CPU frequency or adding memory to improve the machine's physical capability. But we know that Moore's Law is increasingly breaking down, and hardware performance can no longer keep up with ever-growing demand. The most obvious example: on Tmall's Double Eleven shopping day, the instantaneous traffic to a hot product is so enormous that an architecture like the one above, even on machines upgraded to the top-tier physical configuration, cannot meet the demand. So what do we do? The analysis above rules out increasing the server's physical configuration; that is to say, a vertical approach does not solve the problem. What about increasing the number of servers horizontally? This is where the concept of a cluster comes in: since a single server cannot cope, we increase the number of servers and distribute requests across them, so that instead of requests concentrating on a single server they are spread over multiple servers and the load is distributed. This is what we call load balancing.
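The clustering idea above maps directly onto Nginx's upstream block; the pool name and server addresses below are hypothetical:

```nginx
http {
    # Pool of backend servers; requests are distributed round-robin by default.
    upstream myservers {
        server 192.168.17.129:8080;
        server 192.168.17.131:8080;
        # server 192.168.17.132:8080 weight=2;   # optionally bias traffic by weight
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myservers;   # hand each request to the pool
        }
    }
}
```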

6. Separation of static and dynamic content

To speed up how quickly a site is served, dynamic pages and static pages can be served by different servers. This speeds up responses and reduces the pressure on the original single server.
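A sketch of such a split in Nginx (the paths and backend address are assumptions): static files are served straight from disk, while everything else is proxied to the application server:

```nginx
server {
    listen 80;

    # Static resources: served directly from the filesystem by Nginx.
    location /static/ {
        root /data;          # hypothetical directory holding /data/static/...
        expires 3d;          # let browsers cache static files for three days
    }

    # Dynamic requests: handed off to the application server.
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
```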

Two, Nginx installation (Linux: CentOS as an example)

To install Nginx, go to the official website and download the latest version for CentOS.

1. Preparations

  • Start the virtual machine and connect to the Linux system with FinalShell.
  • Download the Nginx software:
    http://nginx.org/

  • First install the software it depends on, and finally install Nginx itself.
  • Dependencies: pcre-8.37.tar.gz, openssl-1.0.1t.tar.gz, zlib-1.2.8.tar.gz, nginx-1.11.1.tar.gz. I provide the downloads below.

2. Installation steps

  • There are two methods: the first is to download directly, the second is to upload the source archive and extract it. The upload-and-extract approach is used for most of the steps here.
  • My installation path: /usr/feng/
  1. Install pcre

Method 1: wget http://downloads.sourceforge.net/project/pcre/pcre/8.37/pcre-8.37.tar.gz
Method 2: upload the source archive, then do the three-step routine of extract, compile, install:
1) Extract the file and enter the pcre directory.
2) Run ./configure; once it finishes,
3) run the command: make && make install

  2. Install openssl

Download address for OpenSSL:
http://distfiles.macports.org/openssl/
1) Extract the file and enter the openssl directory.
2) Run ./configure; once it finishes,
3) run the command: make && make install

  3. Install zlib

1) Extract the file and enter the zlib directory.
2) Run ./configure; once it finishes,
3) run the command: make && make install

  4. Install nginx

1) Extract the file and enter the nginx directory.
2) Run ./configure; once it finishes,
3) run the command: make && make install

3. Run nginx

  • After installing Nginx, an nginx folder is generated automatically under the path /usr/local.
  • Enter this directory:

cd /usr/local/nginx

The directory contents are as follows:

  • Enter the sbin folder; it contains two files: nginx and nginx.old.
  • Run the command ./nginx to start it.
  • Check that it started: ps -ef | grep nginx

It is now running.

  • Check Nginx's default port (80 by default) and test it from a browser, just like with Tomcat.
  • To check the port, go to cd /usr/local/nginx/conf and open the file nginx.conf with vim. This file is also Nginx's configuration file.
  • Enter IP:80 in a browser, and the Nginx welcome page is displayed.

4. Firewall issues

When accessing Nginx on Linux from a Windows system, it cannot be reached by default because of the firewall. Either (1) turn off the firewall, or (2) open the port to be accessed, port 80.

Check the currently open ports:

firewall-cmd --list-all

Open the ports:

firewall-cmd --add-service=http --permanent

firewall-cmd --add-port=80/tcp --permanent

Reload the firewall:

firewall-cmd --reload

Three, Common Nginx commands and the configuration file

1. Common Nginx commands

a. Prerequisite for using Nginx commands

Prerequisite for using the Nginx commands: you must first enter the /sbin folder inside Nginx's automatically generated directory.

Nginx has two directories:

The first: the installation directory, which I placed at:

/usr/feng/

The second: the automatically generated directory:

/usr/local/nginx/

b. Check the version number of nginx

./nginx -v

c. Start nginx

./nginx

d. Stop nginx

./nginx -s stop

e. Reload nginx

Run this command in the directory /usr/local/nginx/sbin. The server does not need to be restarted; the configuration is reloaded automatically.

./nginx -s reload

2. Nginx configuration file

a. Configuration file location

/usr/local/nginx/conf/nginx.conf

 

b. The parts of the Nginx configuration file

Many lines in the configuration file begin with #; these are comments. After removing all segments that begin with #, the simplified content is as follows:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

  • The Nginx configuration file consists of three parts

Part one: the global block

This part runs from the start of the configuration file to the events block. It mainly sets directives that affect the overall operation of the Nginx server, including the user (and group) that runs the server, the number of worker processes allowed to be generated, the PID file path, log paths and types, and the inclusion of other configuration files.
For example, the first line configured above:

worker_processes 1;

This is the key setting for how many requests the Nginx server can process concurrently: the larger the worker_processes value, the more concurrency it can support, though it is constrained by the machine's hardware, software, and other resources.

Part two: the events block

For example, the configuration above:

events {
    worker_connections 1024;
}

The directives in the events block mainly affect the network connections between the Nginx server and its users. Common settings include whether to serialize the acceptance of network connections across multiple worker processes, whether a worker may accept several network connections at once, which event-driven model to use for handling connection requests, the maximum number of connections each worker process can support simultaneously, and so on.
The example above means that each worker process supports a maximum of 1024 connections.
This part of the configuration has a large impact on Nginx's performance and should be tuned flexibly in practice.
Part three: the http block

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

This is the most frequently configured part of an Nginx server: proxying, caching, log definitions, and the configuration of the vast majority of other features and third-party modules all go here.

Note that the http block can itself contain an http global block and server blocks.

  • The http global block

Directives configured in the http global block include file includes, MIME-type definitions, log customization, connection timeouts, the cap on requests per connection, and so on.

  • The server block

This block is closely related to virtual hosts. From the user's point of view, a virtual host is exactly the same as an independent hardware host; the technology exists to save on Internet server hardware costs.
Each http block can contain multiple server blocks, and each server block is equivalent to one virtual host.
Each server block is in turn divided into a global server block plus one or more location blocks.
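For instance, two virtual hosts can share one Nginx instance by defining two server blocks; the server names and root paths below are hypothetical:

```nginx
http {
    server {
        listen 80;
        server_name site-a.example.com;   # first virtual host
        location / {
            root  /data/site-a;
            index index.html;
        }
    }

    server {
        listen 80;
        server_name site-b.example.com;   # second virtual host, same port
        location / {
            root  /data/site-b;
            index index.html;
        }
    }
}
```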

  1. The global server block

The most common configurations here are this virtual host's listening settings and the virtual host's name or IP.

  2. The location block

A server block can contain multiple location blocks.
The main job of this block is to match against the part of the request string received by the Nginx server (for example server_name/uri-string) that comes after the virtual-host name (which may also be an IP alias), i.e. the /uri-string part, and to process specific requests accordingly. Address redirection, data caching, response control, and the configuration of many third-party modules are also done here.

Four, Nginx reverse proxy configuration example 1.1

1. The effect to achieve

  • Open a browser, enter the address www.123.com in the address bar, and be taken to the Tomcat home page on the Linux system.
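Assuming Tomcat listens on its default port 8080 and that www.123.com is pointed at the Linux host (for example via a hosts-file entry), the effect above could be achieved with a server block like this sketch:

```nginx
server {
    listen 80;
    server_name www.123.com;              # must resolve to the Linux host

    location / {
        proxy_pass http://127.0.0.1:8080; # Tomcat's default port
    }
}
```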

2. Preparation

Reposted from: https://blog.csdn.net/qq_40036754/article/details/102463099



Origin: blog.csdn.net/k_love1219/article/details/103972313