Software architecture: nginx detailed explanation

Today I'm talking about nginx. Some of you may find that odd: isn't nginx the job of operations and maintenance? Indeed, in most cases, if the company is fairly large and has an ops team, this work lands on ops. But there is now a trend called DevOps. As I have said before, integrating development and operations means developers need a certain amount of ops knowledge. In many Internet start-ups that are just getting going, the division of technical work is not so clear: environment setup, operations, framework building, and development all have to be covered by the same people.
Source code: https://github.com/limingios/netFuture/tree/master/nginx

Nginx service construction and basic demonstration (1)

Nginx

  • Official website

http://nginx.org/

  • Introduction

Nginx is a lightweight HTTP and reverse proxy server with high performance, high stability, and good concurrency. Because of these characteristics, it is very widely used.

  • History

Developed by the Russian programmer Igor Sysoev, it was originally used by Rambler (Russian: Рамблер), a large Russian portal and search engine. Its hallmarks are low memory usage and strong concurrency; in practice, nginx's concurrency does perform better than comparable web servers. Users of nginx in mainland China include well-known sites such as Sina, NetEase, and Tencent; the microblogging site Plurk also uses nginx.

  • Understanding the concept of a proxy

1. Forward proxy: the proxy acts on behalf of the client (the user) to access the server; the user has to configure the proxy server's IP address and port manually.
2. Reverse proxy: the proxy acts on behalf of the server side, standing in for the target servers we want to visit. The proxy server accepts the request, forwards it to a server on the internal network (the cluster), and returns that server's result to the client. To the outside world, the proxy server itself appears to be the server.
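
A minimal reverse-proxy sketch (the port and backend address are illustrative, reusing one of the Tomcat machines from the table below): clients talk to nginx, and nginx forwards the request to an internal server and relays the answer back.

server {
    listen 80;                                   # what external clients see
    location / {
        proxy_pass http://192.168.66.111:8080;   # internal (target) server being proxied
    }
}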

  • Machine configuration used for the source code

| System type | IP address | Node role | CPU | Memory | Hostname |
| --- | --- | --- | --- | --- | --- |
| CentOS 7 | 192.168.66.110 | nginx | 1 | 2G | nginx |
| CentOS 7 | 192.168.66.111 | tomcat | 1 | 2G | tomcat1 |
| CentOS 7 | 192.168.66.112 | tomcat | 1 | 2G | tomcat2 |

  • Preparation on all three machines

yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel

  • Install the PCRE package on the nginx host (192.168.66.110)

https://sourceforge.net/projects/pcre/files/pcre/
nginx's rewrite module depends on the PCRE library, so you need to compile and install PCRE on the Linux system first.

wget https://nchc.dl.sourceforge.net/project/pcre/pcre/8.41/pcre-8.41.tar.gz
tar zxvf pcre-8.41.tar.gz
cd pcre-8.41
./configure
make && make install
pcre-config --prefix

  • Install nginx
cd ..
wget http://nginx.org/download/nginx-1.13.10.tar.gz
tar zxvf nginx-1.13.10.tar.gz
mkdir  nginx
cd nginx-1.13.10
./configure --prefix=/root/nginx --with-http_stub_status_module --with-http_ssl_module --with-pcre=/root/pcre-8.41
make && make install 
cd ~
cd nginx/sbin/
./nginx -v
./nginx -t

  • Start nginx
cd ~/nginx/sbin/
./nginx

If you try to access the pages now, you will find there is actually no permission to serve them.

Modify the configuration file:

vi ~/nginx/conf/nginx.conf
# change the user directive to: user root
# save and quit with :wq
# reload nginx with the new configuration
~/nginx/sbin/nginx -s reload
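
For reference, after this change the top of nginx.conf looks roughly like the sketch below (in a fresh build the default user is typically nobody, which is what causes the permission problem above):

# nginx.conf (top of file): run the worker processes as root so they have permission to read the content files
user  root;
worker_processes  1;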

  • Install Tomcat on the two machines 192.168.66.111 and 192.168.66.112
java -version
wget http://mirrors.hust.edu.cn/apache/tomcat/tomcat-8/v8.5.37/bin/apache-tomcat-8.5.37.tar.gz

tar zxvf apache-tomcat-8.5.37.tar.gz 
cd apache-tomcat-8.5.37
cd bin
./startup.sh
curl 127.0.0.1:8080

  • On 111 and 112, add an index.jsp file as a way to see which node responded
cd /root/apache-tomcat-8.5.37/webapps/ROOT
>index.jsp
vi index.jsp
cat index.jsp

Upstream and location module parameters and examples (2)

In fact, software is increasingly modular: you assemble the pieces and the system is complete.

upstream
  • Official documentation

http://nginx.org/en/docs/

  • upstream examples

http://nginx.org/en/docs/http/ngx_http_upstream_module.html

  • upstream parameters

| Parameter | Description |
| --- | --- |
| server | Address (and port) of a backend server to proxy to |
| weight | Weight of that server |
| max_fails | How many failed attempts before the host is considered down and removed. The default is 1. Enterprises generally configure 2 to 3; e-commerce sites put more emphasis on user experience, so they use 1, provided the remaining machines have enough capacity. |
| fail_timeout | How long to wait before probing the server again after it has been removed |
| backup | Marks a backup server |
| max_conns | Maximum number of allowed connections |
| slow_start | When a node recovers, do not send it full traffic immediately |
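
As a hedged sketch of how several of these parameters fit together (the upstream name and the third, backup server are hypothetical; the first two addresses are the Tomcat machines above):

upstream tomcat_pool {
    server 192.168.66.111:8080 weight=2 max_fails=2 fail_timeout=30s;
    server 192.168.66.112:8080 weight=1 max_fails=2 fail_timeout=30s;
    server 192.168.66.113:8080 backup;    # hypothetical standby, used only when the others are down
    # max_conns=100 can be added per server; slow_start is only available in the commercial NGINX Plus
}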

Modify the nginx configuration file on 192.168.66.110:

vi /root/nginx/conf/nginx.conf
cat /root/nginx/conf/nginx.conf

Because the weights are the same, requests are rotated between the two Tomcats on 111 and 112.
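
The configuration itself is not reproduced in the text; a minimal sketch of the relevant part of the http block for this equal-weight round-robin setup (the upstream name is a placeholder) might be:

upstream web_servers {
    server 192.168.66.111:8080 weight=1;
    server 192.168.66.112:8080 weight=1;    # equal weights, so requests alternate between tomcat1 and tomcat2
}

server {
    listen       80;
    server_name  localhost;

    location / {
        proxy_pass http://web_servers;      # each request is handed to the upstream group
    }
}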



  • Load-balancing algorithms
1. round-robin + weight

The default algorithm: requests are distributed across the servers according to their weights.

2. ip_hash

Hashes on the client IP. Typical scenario: keeping sessions sticky, so the node that served the first request keeps serving that client: hash(ip) % 3 = index. Disadvantage: where a community or school shares a single egress IP, the load on one node can become very large and that node becomes a hot spot; with ip_hash, the configured weights also become ineffective. (A configuration sketch follows this list of algorithms.)

3. url_hash

(Third-party module.) Typical scenario: caching static resources, which saves storage and speeds up responses.

4. least_conn

Fewest connections: new requests go to the server with the fewest active connections.

5. least_time

Smallest response time: calculate each node's average response time, then give the fastest-responding node a higher weight.

6. keepalive

Number of idle connections to keep open to the upstream. Memory consumption is higher, but responses are faster because the socket connections are kept alive.
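
Switching algorithms is a one-line change inside the upstream block. A hedged sketch (addresses reused from above; least_time is left out because it is only available in the commercial NGINX Plus):

# Session stickiness by client IP
upstream web_servers {
    ip_hash;                      # the same client IP always lands on the same backend
    server 192.168.66.111:8080;
    server 192.168.66.112:8080;
}

# Fewest active connections, with a small pool of kept-alive upstream sockets
upstream web_servers_lc {
    least_conn;
    server 192.168.66.111:8080;
    server 192.168.66.112:8080;
    keepalive 16;                 # keep up to 16 idle connections open per worker process
}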

  • location
1. root

The root directive maps the path entered in the browser onto a directory on disk.

2. index

Specifies the default (initial) page of the site; useful when the front end and back end are separated.

3. proxy_set_header

Used to redefine or add request headers sent to the backend server.
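
A common, hedged example is passing the original host and client address through to the backend (the upstream name web_servers follows the sketch above):

location / {
    proxy_set_header Host            $host;                          # host the client actually requested
    proxy_set_header X-Real-IP       $remote_addr;                   # real client address, not the proxy's
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;     # append to any existing forwarded chain
    proxy_pass http://web_servers;
}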

4. proxy_pass

If the URL after proxy_pass ends with a /, it is treated as an absolute root path: the part of the path matched by the location is stripped before forwarding. Without the /, it is treated as a relative path and the matched part of the path is also passed on to the proxied server.
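
A sketch of the difference, using a hypothetical /api/ prefix (the two locations below are alternatives, not meant to coexist in one server block):

# Request from the browser: /api/user/list

location /api/ {
    proxy_pass http://192.168.66.111:8080/;   # trailing slash: the matched /api/ is stripped, backend sees /user/list
}

location /api/ {
    proxy_pass http://192.168.66.111:8080;    # no URI part: the path is passed unchanged, backend sees /api/user/list
}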

Dynamic/static separation

There are two ways to separate them:

  1. Put static files into nginx
  2. The static files are put into the designated server, and the server is distinguished by the request address.
  • Put static files into nginx
# Idea: dynamic and static files are matched to different locations; when a gif or jpeg is requested, it is served directly from e:wwwroot

server {
    listen       80;
    server_name  localhost;

    location / {
        root   e:wwwroot;
        index  index.html;
    }

    # All static requests are handled by nginx itself, served from the local directory
    location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
        root    e:wwwroot;
    }

    # All dynamic requests are forwarded to tomcat
    # ("test" must be defined as an upstream elsewhere in the configuration)
    location ~ \.(jsp|do)$ {
        proxy_pass  http://test;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   e:wwwroot;
    }
}
  • Static server

Use multiple upstreams and multiple locations, with a different path for each location (a sketch follows below).
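
A hedged sketch of this second approach, with one upstream for dynamic requests and one for a dedicated static file server (the static host 192.168.66.113 is hypothetical):

upstream dynamic_servers {
    server 192.168.66.111:8080;
    server 192.168.66.112:8080;
}

upstream static_servers {
    server 192.168.66.113:80;                 # hypothetical machine that only serves static files
}

server {
    listen 80;

    # dynamic requests go to the tomcat cluster
    location ~ \.(jsp|do)$ {
        proxy_pass http://dynamic_servers;
    }

    # static requests go to the static host
    location ~ \.(gif|jpg|jpeg|png|css|js)$ {
        proxy_pass http://static_servers;
    }
}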

PS: Finally, a popular analogy: you are the king, nginx is your eunuch, and an upstream is a harem. There can be several harems, each grouped by a different criterion: everyone in harem A weighs more than 120 jin, everyone in harem B weighs less than 100 jin, and each harem has its favoured concubines (the weights). At night the king has a need and tells the eunuch; based on the situation in the harem, the eunuch uses some algorithm to decide which concubine gets the visit and tells the king, and the king goes straight to her.

Origin: blog.csdn.net/zhugeaming2018/article/details/111149859