Nginx ---- Request Distribution Center

To understand Nginx as a request distribution center, we first need to clear up a few basic questions: what is a request, what is a distribution center, and why do we need a distribution center at all?

  1. What is a request?

Nginx is a high-performance HTTP server, so the requests in question are of course HTTP requests sent by clients. An HTTP request (most commonly a GET or a POST) locates its target through a URL, and the core of a URL is an IP address plus a port number. Several concepts surface here: what is an IP address, what is a port number, and what are GET and POST?

 

An IP address can be loosely understood as the identity card of a network device: given an IP, we can uniquely locate one device on the network (the locating itself involves routing, protocol parsing, packet forwarding and other complications that we will not dig into here). Nowadays we rarely visit a website by its IP directly; we use a domain name instead. Domain names (such as www.mi.com) were introduced to solve two problems. First, an IP is poor at conveying meaning: users cannot tell what a site does from its IP. Second, the IP behind a site may change, for example when business growth forces the server cluster to expand and the IP to be replaced. A domain name does not change the fact that the URL ultimately locates the server by IP; it merely shifts the burden of remembering the IP from the user to the domain name servers. When a company registers a domain name, it submits the binding between the domain and its server IP to the domain administrator, after which the domain name is automatically resolved to the corresponding server IP.

 

Port number: this refers to a protocol port. A single IP address has 65536 ports (0-65535). A port number can be pictured as a room number in a hotel: when we request a resource from another device, we have to specify the port so that the corresponding room is opened and the service can be reached; a room that does not implement the HTTP protocol simply never hands over the key, which means it is inaccessible over HTTP. The default port for the HTTP protocol is 80.

 

GET and POST request types: GET and POST are two request formats defined by the HTTP protocol. At the bottom, both are transferred over the TCP protocol; the differences between them come mainly from the conventions that browsers and the HTTP standard apply when using each of them. For a detailed discussion, see this post: https://www.cnblogs.com/logsharing/p/8448446.html
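
To make the difference concrete, here is a rough sketch of what the two request types look like on the wire (the host, path and parameter are made-up placeholders). A GET carries its parameters in the URL's query string:

```http
GET /search?keyword=nginx HTTP/1.1
Host: www.example.com
```

The same parameter sent as a POST moves into the request body:

```http
POST /search HTTP/1.1
Host: www.example.com
Content-Type: application/x-www-form-urlencoded
Content-Length: 13

keyword=nginx
```

Both are plain text carried over a TCP connection; the usual claims about URL length limits and other differences come from browser and server conventions rather than from the transport layer.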

 

This brings up another topic: the TCP transport protocol.

TCP (Transmission Control Protocol) is a connection-oriented protocol: before any data can be sent or received, a reliable connection must be established between the two ends. Its counterpart is UDP (User Datagram Protocol), which is connectionless: no connection is set up with the destination beforehand; when UDP wants to transmit, it simply grabs the data from the application and throws it onto the network as fast as it can. The main difference between the two is whether a connection matters: TCP only transfers data once the connection has been confirmed, in order to guarantee reliable delivery, while UDP trades reliability for efficiency. Such trade-offs run through all software systems: reliability versus efficiency, time cost versus space cost.

A classic topic in TCP is the three-way handshake that establishes a connection and the four-way handshake (the four-way wave) that releases it.

Tearing a connection down takes one more exchange than setting it up because the passive side B, on receiving the close request, may still have data left to send; only after that data has been transmitted can it shut down its own sending direction. So B cannot combine everything into one reply: it first acknowledges the other side's close request and later sends its own close request, which takes two segments and therefore one round more than connection establishment. The TCP/IP suite contains many more protocols, which we will not cover here.

 

  2. Back to the original questions: what is the distribution center, and why do we need one?

The role of the distribution center can be compared to a railway station: different people (requests) need to go to different places, and the station (the distribution center) routes them by destination so that everyone boards the right train. Each train (each server) has limited capacity, so to satisfy more travel demand (more requests), additional trains have to be scheduled. The reason the distribution center exists is precisely this scheduling and distribution.

 

In an Internet architecture, this distribution center is known as a reverse proxy server.

Reverse proxy (Reverse Proxy) means that a proxy server accepts connection requests coming from the Internet, forwards them to servers on the internal network, and returns the results obtained from those servers to the clients on the Internet that made the requests. To the outside, the proxy itself appears to be the server, i.e. a reverse proxy server.

Take Tomcat as an example: a single instance is usually reckoned to handle at most about 1000 concurrent requests, which cannot satisfy business scenarios with extremely high concurrency. In that case Nginx is needed to forward requests, spreading them evenly across multiple servers.
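
A minimal sketch of such a configuration (only the relevant http block is shown; the three Tomcat addresses and the upstream name tomcat_cluster are made-up placeholders):

```nginx
http {
    # Back-end pool: requests are distributed round-robin by default
    upstream tomcat_cluster {
        server 192.168.0.11:8080;
        server 192.168.0.12:8080;
        server 192.168.0.13:8080;
    }

    server {
        listen 80;                              # the entry point exposed to clients

        location / {
            proxy_pass http://tomcat_cluster;   # hand the request to one back-end
            proxy_set_header Host $host;        # keep the original Host header
        }
    }
}
```

When the back-ends differ in capacity, or when session stickiness is needed, other strategies such as weight parameters or ip_hash can be declared inside the upstream block.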

Where there is a reverse proxy there is naturally also a forward proxy. A forward proxy acts on behalf of the client, forwarding the client's requests out to the Internet; it is mostly used in scenarios of network access control, and the proxy servers people use to get around network blocks are the example most of us have run into. A reverse proxy, in contrast, proxies requests coming in from the Internet on behalf of the servers.

 

The above briefly described the important role Nginx plays as a reverse proxy. Next, let's look at what Nginx is and what else it can do.

Nginx is a high-performance HTTP server / reverse proxy server as well as an email (IMAP/POP3) proxy server. It was developed by the Russian programmer Igor Sysoev. According to official tests, Nginx can sustain 50,000 concurrent connections while keeping CPU and memory consumption very low and running very stably.

 

Typical application scenarios of Nginx:

1. HTTP server. Nginx is an HTTP server and can provide HTTP service on its own, for example as a static web server: deploy the static web resources directly on the Nginx server and users can fetch the pages straight from Nginx.

2. Virtual hosts. Several websites can be hosted virtually on a single server, as with the virtual hosts used by personal sites. The underlying mechanism is still Nginx's forwarding: the mapping between the domain names and the IP must be configured (in the visiting machine's hosts file or in DNS) so that HTTP requests can locate the machine, and Nginx then dispatches each request according to the host it was addressed to. A configuration sketch covering scenarios 1 and 2 follows this list.

3. Reverse proxy and load balancing. When a site's traffic grows to the point where a single server can no longer serve the user requests, a cluster of multiple servers is needed and Nginx can act as the reverse proxy in front of it. The load is then shared evenly across the servers, avoiding the situation where one server crashes under high load while another sits idle. This is exactly the upstream configuration sketched in the reverse proxy section above.
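
As a rough illustration of scenarios 1 and 2, the sketch below defines two server blocks on the same machine and port, one serving static files and one acting as a separate virtual host; the domain names and directory paths are placeholders:

```nginx
http {
    # Scenario 1: static web server, pages are served straight from disk
    server {
        listen      80;
        server_name static.example.com;

        location / {
            root  /usr/share/nginx/html;   # directory holding the static pages
            index index.html;
        }
    }

    # Scenario 2: a second virtual host on the same IP and port,
    # chosen purely by the Host header carried in the request
    server {
        listen      80;
        server_name blog.example.com;

        location / {
            root  /data/blog;
        }
    }
}
```

Nginx matches the incoming Host header against server_name to decide which site handles the request, which is how one machine can serve many domains.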

 

As can be seen, Nginx as a reverse proxy suits the scenario of a service cluster composed of many servers. In that role Nginx is the entry point for all requests, so it must itself be highly available in order to keep the cluster's services stable.

Nginx can be made highly available with keepalived. keepalived is a piece of software used in cluster management to keep a cluster highly available and to prevent single points of failure.

The job of Keepalived is to monitor the state of the web servers. If a web server dies or starts malfunctioning, Keepalived detects it and removes the faulty server from the system; once the server is working normally again, Keepalived automatically adds it back to the pool. All of this happens automatically, without human intervention; the only manual work left is repairing the failed web server.

keepalived is implemented on top of the VRRP protocol. VRRP stands for Virtual Router Redundancy Protocol.

VRRP, the Virtual Router Redundancy Protocol, can be thought of as a protocol for making routers highly available: N routers providing the same function are grouped into a router group with one master and several backups. The master holds a VIP (Virtual IP Address) through which the service is exposed (the other machines on that LAN use this VIP as their default route) and keeps sending VRRP multicast packets. When the backups stop receiving those packets, they assume the master is down and elect a new master from among themselves according to VRRP priority. In this way the router stays highly available.
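
A minimal keepalived.conf sketch of such a VRRP group; the interface name eth0 and the VIP 192.168.1.100 are placeholder assumptions to be replaced with real values, and the backup node would use state BACKUP with a lower priority:

```conf
vrrp_instance VI_1 {
    state MASTER            # this node starts as the master; the peer uses BACKUP
    interface eth0          # assumption: the NIC that will carry the VIP
    virtual_router_id 51    # must be identical on master and backups
    priority 100            # highest priority wins the election (e.g. 90 on the backup)
    advert_int 1            # send a VRRP advertisement every second

    virtual_ipaddress {
        192.168.1.100       # the VIP that clients use to reach Nginx
    }
}
```

Clients and DNS point at the VIP rather than at any single Nginx machine, so when the master stops advertising, the elected backup takes over the VIP and traffic keeps flowing.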
