Introduction to Nginx: understanding reverse proxy, load balancing, and static/dynamic separation

Scenario

Nginx Overview

Nginx ( "engine x") is a high-performance HTTP server and reverse proxy feature is the possession of less memory, concurrent capable
and strong, in fact, the ability to do concurrent nginx in the same type of web server performance is better, mainland China use nginx
web site users are: Baidu, Jingdong, Sina, Netease, Tencent, Taobao.

Nginx can serve static pages as a web server, and it also supports dynamic languages through the CGI protocol, such as Perl and PHP. It does not run Java directly; Java programs can only be served by pairing Nginx with Tomcat. Nginx was developed specifically with performance in mind: efficiency is its most important design consideration, and it stands up well under heavy load, reportedly supporting up to 50,000 concurrent connections.


Implementation

Reverse Proxy

Before understanding what a reverse proxy is, first understand what a forward proxy is.

Forward proxy:

Think of the Internet outside the local area network as a huge resource library. For a client on the LAN to access the Internet, it must go through a proxy server; this kind of proxy service is called a forward proxy.

A forward proxy sits on the user's side. For example, to access certain foreign websites, we may need to purchase a VPN.

The VPN is configured on the end user's side, in the browser (not on the remote server).

The browser sends the request to the VPN's address, the VPN forwards the request to the target site, and the result is finally returned back to the browser.
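
As a rough illustration of the idea, Nginx itself can act as a simple forward proxy for plain HTTP traffic (HTTPS forward proxying needs the CONNECT method, which stock Nginx does not support). A minimal sketch, assuming clients point their HTTP proxy setting at this machine on port 3128 and that 8.8.8.8 is an acceptable DNS resolver for your network:

    server {
        listen 3128;
        # Needed because the target host is only known at request time
        resolver 8.8.8.8;

        location / {
            # Forward the request to whatever host the client originally asked for
            proxy_pass http://$http_host$request_uri;
        }
    }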

Reverse Proxy:

A reverse proxy, by contrast, is transparent to the client: the client needs no configuration and simply sends its request to the reverse proxy server. The reverse proxy server chooses a target server, fetches the data, and returns it to the client. To the outside world, the reverse proxy server and the target server appear as one server; only the proxy server's
address is exposed, hiding the real IP address of the target server.
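
A minimal reverse proxy sketch in Nginx, assuming a backend application (for example a Tomcat instance) listening on 127.0.0.1:8080; the server name and port are placeholders:

    server {
        listen 80;
        server_name example.com;

        location / {
            # Clients only ever see this server; the backend's real address stays hidden
            proxy_pass http://127.0.0.1:8080;
            # Pass the original host name and client IP on to the backend
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }

The client talks only to port 80 on the proxy; which machine actually produced the response is invisible to it.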


Load Balancing

In a traditional setup, the client sends its requests to a single server; the server processes each request, possibly interacting with a database, and then returns the result to the client.

This architecture suits earlier, simpler systems with relatively few concurrent requests, and it is cheap. But as the volume of information grows, traffic and data increase rapidly, and business logic becomes more complex, this architecture makes the server respond to clients more and more slowly; under high concurrency it may even cause the server to crash outright. Clearly this is a problem caused by the server's performance bottleneck, so how do we solve it?

The first idea that comes to mind is to upgrade the server's configuration, for example raising CPU frequency or adding memory, improving the physical capability of the machine. But we know that Moore's Law is increasingly breaking down, and hardware alone cannot keep up with ever-growing demand. The most obvious example: on Tmall's Double Eleven shopping day, the instantaneous traffic for a hot item is enormous, and with an architecture like the one above, even a machine upgraded to the top physical configuration cannot meet the demand. So what can be done?

The analysis above rules out increasing a single server's physical configuration, that is, scaling vertically does not solve the problem. What about scaling horizontally by adding more servers? This is where the concept of a cluster comes in: when a single server cannot cope, we add more servers and distribute requests among them. Instead of concentrating all requests on a single server, requests are spread across multiple servers and the load is distributed among them; this is what we call load balancing.
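
A minimal load-balancing sketch in Nginx, assuming two backend application servers at the placeholder addresses 192.168.1.11:8080 and 192.168.1.12:8080. By default requests are distributed round-robin; weights or other strategies such as ip_hash or least_conn can be configured on the upstream block:

    # The group of servers that share the load (addresses are placeholders)
    upstream backend_cluster {
        server 192.168.1.11:8080 weight=1;
        server 192.168.1.12:8080 weight=1;
        # ip_hash;   # uncomment to pin each client IP to one backend
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            # Each incoming request is handed to one server in the upstream group
            proxy_pass http://backend_cluster;
        }
    }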

For load balancing at the application layer, here are examples implemented with Ribbon and Feign:

SpringCloud: creating a service consumer with Feign (with code download):

https://blog.csdn.net/BADAO_LIUMANG_QIZHI/article/details/102595895

SpringCloud: creating a service consumer with Ribbon (with code download):

https://blog.csdn.net/BADAO_LIUMANG_QIZHI/article/details/102558080

Static and dynamic separation

To speed up the site, dynamic pages and static pages can be served by different servers, which speeds up parsing and reduces the pressure on the original single server.
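
A minimal static/dynamic separation sketch in Nginx, assuming static resources live under the placeholder directory /data/static and dynamic requests go to a backend (for example Tomcat) at 127.0.0.1:8080:

    server {
        listen 80;
        server_name example.com;

        # Static resources are served directly by Nginx from disk
        location /static/ {
            root /data;       # /static/a.css is read from /data/static/a.css
            expires 3d;       # let browsers cache static files for a few days
        }

        # Everything else is treated as dynamic and forwarded to the backend
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }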

