Web - Load Balancing and High Concurrency

Load Balancing

Load balancing is built on top of the existing network infrastructure. It provides a cheap, effective, and transparent way to extend the bandwidth of network devices and servers, increase throughput, strengthen data-processing capability, and improve the flexibility and availability of the network.

It means spreading work across multiple operating units, such as Web servers, FTP servers, and other mission-critical application servers, so that they cooperate to complete tasks.

 

 

 

[Figure: load balancing schematic]

Load balancing algorithms

Round robin: requests are distributed to the servers in turn; when a server goes down, it is automatically removed from rotation and traffic switches to the remaining healthy servers.

Weighted: requests are distributed to servers in proportion to their configured weights; the higher a server's weight, the greater its chance of receiving a request.

IP binding (ip_hash): the target server is chosen by hashing the client's IP address, so requests from a given IP always reach the same back-end server; this also addresses the session problem for dynamic pages.
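These three strategies correspond directly to Nginx's upstream configuration. A minimal sketch (the group names and server addresses below are hypothetical):

```nginx
# Round robin (the default): requests rotate across the listed servers.
upstream backend_rr {
    server 192.168.0.11:8080;
    server 192.168.0.12:8080;
}

# Weighted: the first server receives roughly twice as many requests.
upstream backend_weighted {
    server 192.168.0.11:8080 weight=2;
    server 192.168.0.12:8080 weight=1;
}

# ip_hash: requests from the same client IP always go to the same server.
upstream backend_iphash {
    ip_hash;
    server 192.168.0.11:8080;
    server 192.168.0.12:8080;
}
```

A location block would then reference one of these groups via proxy_pass, e.g. `proxy_pass http://backend_rr;`.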

High concurrency

How high-concurrency architectures evolve

Highly concurrent systems differ from one another: for example, a middleware system with concurrency on the order of a million per second, a gateway system serving tens of billions of requests per day, or a flash-sale promotion system absorbing hundreds of thousands of requests per second at its instantaneous peak.

When dealing with high concurrency, each system has its own characteristics, so the architectures used to handle it are not the same.

The typical evolution path:

1. The simplest system architecture

2. Cluster deployment of the system

3. Database sharding (splitting databases and tables) plus read/write separation

4. Introduction of cache clusters

5. Introduction of message-middleware clusters

The topic of high concurrency is itself very complex and cannot be fully explained in a few articles. Its essence is that the architecture of a highly concurrent system that truly supports complex business scenarios is genuinely complicated.

Examples include middleware systems with concurrency on the order of a million per second, gateway systems with tens of billions of daily requests, flash-sale promotion systems handling hundreds of thousands of requests per second at their instantaneous peak, and large business platforms supporting hundreds of millions of users.

The complexity of such architectures is far beyond what most engineers who have never worked on them can imagine.

1. Processes, threads, coroutines, asynchronous and non-blocking I/O: learn to use them.

2. MySQL, Redis, MongoDB: understand and use them.

Interview questions on high-concurrency solutions

DNS domain-name resolution

DNS resolution translates a domain name into an IP address so that no explicit port needs to be shown (for web domains the port is normally 80). Resolution generally checks the local hosts file first; if a mapping is configured there, the name resolves to that IP and the corresponding server is accessed. If the hosts file has no entry, the network operator's DNS is queried to obtain the IP address for the domain.

Nginx

Nginx is an advanced, lightweight web server developed in Russia, with the following advantages:

1. Small memory footprint and strong concurrency: it supports large numbers of concurrent connections efficiently.

2. It can act as a load-balancing server and as a proxy server (with built-in support for Rails and PHP). Written in C, it has low overhead and low CPU usage.

3. Simple to install and start, with concise configuration and few bugs; it typically runs for months without needing a restart and without downtime, giving good stability and security.

Roles: reverse proxy, load balancing, primary/backup Tomcat configuration, and separation of static and dynamic content.

Application scenarios: HTTP server, reverse proxy server, and static-resource server.

Reverse proxy: receives network requests in place of the real servers, then forwards them to the real servers.

Benefits of a reverse proxy: it hides the real servers, which become reachable only from the internal network and are thus protected from attack; it enables load balancing, reducing the pressure on any single real server; and it supports primary/backup configuration, keeping the service running stably.

How to configure a reverse proxy in Nginx:

First perform domain resolution at the DNS server (on a LAN, configure the IP-to-domain mapping in the hosts file). Then edit Nginx's nginx.conf: set server_name to the domain that points at the Nginx server, and use location to intercept requests. For local Nginx resources, configure root; to reverse-proxy to a real server, configure proxy_pass with that server's address.
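The steps above can be sketched as an nginx.conf fragment (the domain name, paths, and back-end address are hypothetical):

```nginx
server {
    listen      80;
    server_name www.example.com;        # domain resolving to this Nginx host

    # Local Nginx resources: serve static files directly from disk.
    location /static/ {
        root /usr/share/nginx/html;
    }

    # Everything else: reverse-proxy to the real server.
    location / {
        proxy_pass http://192.168.0.11:8080;
    }
}
```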

Common Nginx configuration directives:

upstream: load-balancing configuration

server [IP] [weight] [backup]: configures the Tomcat cluster

proxy_connect_timeout, proxy_read_timeout, proxy_send_timeout: the connection timeout, the real server's response timeout, and the timeout for returning the result, respectively

location: matches the URL of the user's request

root: configures the path to local resources

proxy_pass: configures the real server's address
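Putting these directives together, a sketch of a Tomcat-cluster configuration (the addresses and timeout values are illustrative, not recommendations):

```nginx
upstream tomcat_cluster {
    server 192.168.0.11:8080 weight=2;
    server 192.168.0.12:8080 weight=1;
    server 192.168.0.13:8080 backup;    # used only when the others are down
}

server {
    listen      80;
    server_name www.example.com;

    location / {
        proxy_pass            http://tomcat_cluster;
        proxy_connect_timeout 5s;    # time allowed to connect to the back end
        proxy_read_timeout    60s;   # time allowed waiting for the response
        proxy_send_timeout    60s;   # time allowed sending data to the back end
    }
}
```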

Draw a diagram of the reverse-proxy flow:

[Figure: client → Nginx reverse proxy → real server, with the response returned along the same path]

Differences between LVS and Nginx

LVS is a layer-4 reverse proxy based on TCP and UDP; it can be used to manage an Nginx cluster and has strong load capacity. Nginx is a layer-7 reverse proxy based on HTTP, used to manage the cluster of real servers.

Purpose of location:

It matches the URL of the user's request and forwards different requests to different servers.

How to configure load balancing in Nginx:

Configure multiple server entries inside an upstream block, and set proxy_pass in the location to "http://" followed by the upstream name.
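As a minimal sketch (the upstream name and addresses are hypothetical):

```nginx
upstream my_servers {
    server 192.168.0.11:8080;
    server 192.168.0.12:8080;
}

server {
    listen 80;
    location / {
        # "http://" + the upstream name
        proxy_pass http://my_servers;
    }
}
```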

 

 


Origin: www.cnblogs.com/qingaoaoo/p/12340917.html