Nginx installation, reverse proxy, load balancing, and project deployment

Nginx

1. Introduction to Nginx

Nginx is commonly used as a load balancer or as a static resource server (html, css, js, images).

Nginx (pronounced "engine x") is a very lightweight HTTP server. It is a high-performance HTTP and reverse-proxy server, as well as an IMAP/POP3/SMTP proxy server. Nginx was developed by Igor Sysoev for Rambler.ru, the second most-visited site in Russia, where it ran for more than two and a half years. Igor Sysoev released the project under a BSD-like license. Since its release, Nginx has been known for its stability, rich feature set, simple configuration, and low system-resource consumption.

Application scenarios

  • HTTP server: Nginx can provide HTTP service on its own, for example as a static web server.

  • Virtual hosting: multiple websites can be hosted virtually on a single server, e.g. the virtual hosts used for personal websites.

  • Reverse proxy / load balancing: when traffic grows to the point where a single server cannot satisfy user requests, a cluster of servers is needed, with Nginx acting as a reverse proxy. Nginx can spread the load evenly across the servers, avoiding the situation where one server sits idle while another is overloaded.

Chinese companies using the Nginx server:

Websites using Nginx in China: Sina, NetEase, Tencent, CSDN, Ku6, Shuimu Community, Douban, Liufang, Xiaomi, etc.

Technical forums: iteye, csdn, 51cto, Cnblogs...

2. Nginx installation and configuration

1. Installation

My Nginx configuration directory is /etc/nginx. Start Nginx with the binary: ./nginx -c /etc/nginx/nginx.conf

 # Update yum
yum update 

 # Install the nginx repository
yum localinstall http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
 
 # Install nginx
 yum install nginx
 
 # Check the version number (cd into /usr/sbin first)
 ./nginx -v
  
 # Start
 ./nginx -c /etc/nginx/nginx.conf

 # Stop
./nginx -s stop
 
 # Reload (not a restart: re-reads the configuration after you change it)
 ./nginx -s reload

After installation, check the nginx version


3. Nginx reverse proxy

A forward proxy proxies the client (a VPN is a common example).

A reverse proxy proxies the server.

The reverse proxy server decides which backend server handles each request.

The reverse proxy server does not serve content itself; it only forwards requests.

Example: access two Tomcat servers through an Nginx reverse proxy

  • Configure the first tomcat

Unzip and rename to tomcat8080

Modify tomcat8080/webapps/ROOT/index.jsp

  • Configure the second tomcat

Unzip and rename to tomcat8081

  • Modify tomcat8081/webapps/ROOT/index.jsp


  • Modify tomcat8081 configuration


    Modify server.xml:
    <Server port="8006" shutdown="SHUTDOWN">
    <!-- change the shutdown port to an unused one, e.g. 8006 -->
    <Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
    <!-- change the connector port to an unused one, e.g. 8081 -->
    

    Start tomcat8080 and tomcat8081 respectively

  • configure nginx

    Configure upstream and proxy_pass in nginx.conf (/etc/nginx/conf.d/default.conf)

    	upstream tomcat {
    		server 127.0.0.1:8080;
    		server 127.0.0.1:8081;
    	 }
        server {
            listen       80;
            server_name  localhost;
            location / {
    			proxy_pass http://tomcat/;
            }
        }
    

4. Nginx load balancing

If a service is provided by multiple servers, the load needs to be distributed to different servers for processing, which requires load balancing.

Load-balancing strategy: the rule by which a request is forwarded to a specific server. Nginx's upstream currently supports five distribution methods:

1) Round robin (default): each request is assigned to a different backend server in order; if a backend server goes down, it is removed automatically.
2) weight: sets the polling probability; the weight is proportional to the share of traffic, useful when backend servers differ in capacity. The higher the weight, the more requests a server receives; the default weight is 1 for every server.
3) ip_hash: each request is assigned by the hash of the client IP, so each visitor always reaches the same backend server, which solves the session problem.
4) fair (third party): requests are assigned by backend response time; servers with shorter response times are preferred.
5) url_hash (third party): requests are assigned by the hash of the requested URL, so each URL always goes to the same backend server; effective when the backends are caches.

In daily use, round robin or weight is the most common.

1. Round robin

Modify /etc/nginx/conf.d/default.conf

Add inside the http block:
# define the load-balanced servers and their states
upstream myServer {            # the servers to proxy
	server 127.0.0.1:8080;
	server 127.0.0.1:9090;
}
server {
	listen 80;                 # port the Nginx server listens on
	server_name localhost;     # Nginx server address
	
	location / {
		proxy_pass http://myServer/;  # address of the proxied upstream
	}
}

There are 2 points to note in the above configuration:

  • The upstream configuration item lives inside the http block but outside the server block; the overall structure of the three is as follows (don't put it in the wrong place):

    http {
        # server and upstream are siblings
        server { ... }
        upstream { ... }
    }
    
  • The upstream name is up to you, but do not put an underscore (_) in it, otherwise it will conflict with Tomcat (Tomcat rejects Host headers containing an underscore).

    If you keep visiting http://127.0.0.1, you will find the page content alternates between the 8080 port and the 9090 port.

2. Weighted Round Robin

Weighted round robin adds a weight to each node on top of plain round robin; the higher a node's weight, the larger the share of traffic it bears.

upstream myServer {  # the servers to proxy
	server 127.0.0.1:8080 weight=1;
	server 127.0.0.1:9090 weight=2;
}

With the configuration above, the 9090 port service bears 2/3 of the traffic and the 8080 port bears 1/3.

After changing to the configuration above and restarting Nginx, keep visiting http://127.0.0.1: the 8080 port and the 9090 port now alternate in a 1-2-1-2-... pattern.
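The 1/3 vs 2/3 split can be sanity-checked with a short simulation. This is only a sketch of the long-run ratio: Nginx actually uses a "smooth" weighted round-robin algorithm, but the overall distribution matches this simple weight expansion.

```python
from itertools import cycle

# Backends and their weights, matching the upstream block above.
servers = {"127.0.0.1:8080": 1, "127.0.0.1:9090": 2}

# Expand each server by its weight and cycle through the pool.
pool = [s for s, w in servers.items() for _ in range(w)]
rr = cycle(pool)

hits = {s: 0 for s in servers}
for _ in range(300):  # simulate 300 requests
    hits[next(rr)] += 1

print(hits)  # 8080 serves 1/3 (100), 9090 serves 2/3 (200)
```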

In addition to weight, common state parameters are:

Parameter      Description
max_fails      Number of allowed failed requests; default 1. Usually used together with fail_timeout.
fail_timeout   How long to take the server out of rotation after max_fails failures; during this time Nginx will not send requests to this server.
backup         A reserved backup machine; it takes load only when the non-backup machines fail or are busy.
down           Marks the server as not participating in load balancing.

For example:

upstream myServer {
    server 192.168.78.128 weight=1 max_fails=1 fail_timeout=30s;
    server 192.168.78.200 weight=2 max_fails=1 fail_timeout=30s;
    server 192.168.78.201 backup;
    server 192.168.78.210 down;
}

3. ip_hash load balancing

With ip_hash load balancing, each request is assigned by the hash of the client IP, so requests from the same IP always reach the same web server, which solves the session-sharing problem.

upstream myServer {
    ip_hash;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}

With the configuration above, no matter how many times you request http://127.0.0.1, you always see the same one of the 8080 and 9090 ports.
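The stickiness can be illustrated with a tiny sketch. This is not Nginx's actual hash function; Nginx's ip_hash keys on the first three octets of an IPv4 address, which this mimics with a stable crc32.

```python
import zlib

# Backends matching the upstream block above.
backends = ["127.0.0.1:8080", "127.0.0.1:9090"]

def pick_backend(client_ip: str) -> str:
    # Like Nginx's ip_hash for IPv4, key on the first three octets,
    # so a whole /24 network maps to the same backend.
    key = ".".join(client_ip.split(".")[:3])
    # crc32 is stable across runs (unlike Python's built-in hash()).
    return backends[zlib.crc32(key.encode()) % len(backends)]

# The same client (in fact the same /24) always gets the same backend:
assert pick_backend("192.168.1.50") == pick_backend("192.168.1.99")
```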

5. Deploy the project to Nginx

1. Package the vue scaffolding project

Run the npm run build command to package the Vue CLI project.

The packaged output is placed in the project's dist directory; copy the dist directory into the Nginx root directory.

2. Modify nginx

After starting nginx, you can access the front-end project

When publishing to Nginx, remove the proxy setting from vue.config.js.
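A minimal server block for serving the built dist directory might look like the following sketch. The root path assumes dist was copied under Nginx's default html directory; the try_files line is only needed if vue-router runs in history mode.

```nginx
server {
    listen      80;
    server_name localhost;

    location / {
        root  /usr/share/nginx/html/dist;   # where the dist directory was copied
        index index.html;
        # With vue-router history mode, fall back to index.html
        # so deep links still work on refresh:
        try_files $uri $uri/ /index.html;
    }
}
```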

3. Package the server project as a jar and upload it to Linux


Run: java -jar <project-name>.jar to start the server program.

4. Use nginx to solve cross-domain problems

  • Modify the nginx server configuration where the front end is located
 server {
        listen       80;
        server_name  localhost;

        location / {
            root   dist;  # root directory of the vue project
            index  index.html;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
  • Add the /api reverse proxy to the Nginx server configuration file

    
    server {
        listen       80;
        server_name  localhost;
    
         location / {
             root   dist;
             index  index.html;
         }
    
         location /api {
    	    	proxy_pass http://127.0.0.1:8088;
    	    	rewrite "^/api/(.*)$" /$1 break;
         }
     
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   /usr/share/nginx/html;
        }
    
     
    }
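The rewrite line strips the /api prefix before the request is passed upstream, so the backend never sees /api. A quick way to sanity-check the regex (a sketch using Python's re module, not Nginx itself):

```python
import re

# The same pattern and replacement as the Nginx rewrite rule:
#   rewrite "^/api/(.*)$" /$1 break;   ($1 becomes \1 in Python)
def strip_api_prefix(uri: str) -> str:
    return re.sub(r"^/api/(.*)$", r"/\1", uri)

print(strip_api_prefix("/api/departments/9527"))  # -> /departments/9527
```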
    

5. proxy_pass configuration notes

  • All client URLs starting with http://localhost:80 will reach Nginx.

    This is similar to how we use Tomcat. For example:

    When you visit http://localhost:80/department/main.html, you are requesting an html page from Nginx, and (combined with other related configuration) if it exists on the Nginx server, Nginx replies with that html page.

  • Among all the requests the client sends to Nginx, those whose URI starts with /api are received by Nginx "on behalf of" another server.

    All requests whose URI starts with /api are forwarded by Nginx to the address http://127.0.0.1:8080/api, and Nginx waits for its reply.

    For example, the request you send to http://localhost:80/api/departments/9527 will be forwarded by Nginx to http://127.0.0.1:8080/api/departments/9527.

Nginx forwarding configuration rules

  1. Whatever "content" (URI part) you configure in proxy_pass will always be included, in full, at the start of the forwarded destination path.
  2. The forwarding rule depends only on whether proxy_pass has any "content" after http://ip:port. The smallest possible "content" is a single /.
    • If there is "content" (even just a /), the forwarded path is proxy_pass + (request path - location)
    • If there is no "content", the forwarded path is proxy_pass + request path
  3. Whether location ends with / or not does not matter, because Nginx treats that / as (part of) the location's content.
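The rules above can be illustrated with two alternative configurations (the ports and paths here are illustrative, not taken from the project above):

```nginx
# Variant 1: proxy_pass HAS a URI part (here a bare "/"), so the matched
# location prefix is cut off: a request to /api/list is forwarded to
# http://127.0.0.1:8080/list   (proxy_pass + (path - location)).
location /api/ {
    proxy_pass http://127.0.0.1:8080/;
}

# Variant 2: proxy_pass has NO URI part, so the full original path is
# kept: a request to /api/list is forwarded to
# http://127.0.0.1:8080/api/list   (proxy_pass + path).
location /api/ {
    proxy_pass http://127.0.0.1:8080;
}
```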




Origin blog.csdn.net/lps12345666/article/details/130067856