What is high concurrency?

 
I. What is high concurrency?
High concurrency is one of the factors that must be considered in the architecture design of distributed Internet systems. It usually means designing the system so that it is guaranteed to handle many requests in parallel at the same time.
Some commonly used metrics related to high concurrency are response time (Response Time), throughput (Throughput), queries per second (QPS, Queries Per Second), and the number of concurrent users.
Response time: the time the system takes to respond to a request. For example, if a system needs 200 ms to process an HTTP request, its response time is 200 ms.
Throughput: the number of requests processed per unit time.
QPS: the number of requests responded to per second. In the Internet field, the distinction between this metric and throughput is not so clear.
Concurrent users: the number of users simultaneously using the system's functions normally. For example, in an instant-messaging system, the number of simultaneously online users reflects the system's number of concurrent users to some extent.
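The metrics above are related by simple arithmetic. As a back-of-the-envelope sketch (the formula and numbers below are our illustration, not from the original text): if each request takes `response_time` seconds and the system can work on `concurrency` requests at once, sustainable QPS is roughly the ratio of the two.

```python
# Illustrative back-of-the-envelope relation between the metrics above.
# Assumption (not from the source text): requests are uniform and the
# system is fully utilized.

def estimate_qps(response_time_s: float, concurrency: int) -> float:
    """Rough sustainable queries-per-second for the given capacity."""
    return concurrency / response_time_s

if __name__ == "__main__":
    # A system that handles each HTTP request in 200 ms with 10 concurrent
    # workers can sustain about 50 QPS.
    print(estimate_qps(0.2, 10))  # 50.0
```

This is only an estimate for capacity planning; real systems rarely behave this uniformly.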

II. How to improve a system's concurrency capacity
There are two main methodological ways to improve a system's concurrency capacity in distributed Internet architecture design: vertical scaling (Scale Up) and horizontal scaling (Scale Out).
Vertical scaling: improving the processing capability of a single machine. There are two ways to scale vertically:
(1) Improve single-machine hardware performance, for example: increase the number of CPU cores (e.g. to 32 cores), upgrade to a faster network card, upgrade to a better hard disk such as an SSD, expand disk capacity (e.g. to 2 TB), expand system memory (e.g. to 128 GB);
(2) Improve single-machine architecture performance, for example: use caches to reduce the number of I/O operations, use asynchrony to increase the throughput of a single service, use lock-free data structures to reduce response time;
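As a minimal sketch of point (2), "using asynchrony to increase the throughput of a single service": ten simulated 10 ms I/O calls run concurrently complete in roughly the time of one call, instead of ten times that when run serially. The delay and request count are illustrative placeholders.

```python
# A minimal sketch of raising single-service throughput with asynchrony.
# fake_io() stands in for a network or disk round trip; values are illustrative.
import asyncio
import time

async def fake_io(delay: float = 0.01) -> int:
    await asyncio.sleep(delay)  # simulated 10 ms I/O wait
    return 1

async def serve_concurrently(n: int) -> int:
    # All n "requests" wait on I/O at the same time instead of one by one.
    results = await asyncio.gather(*(fake_io() for _ in range(n)))
    return sum(results)

if __name__ == "__main__":
    start = time.perf_counter()
    handled = asyncio.run(serve_concurrently(10))
    elapsed = time.perf_counter() - start
    # 10 requests complete in roughly 0.01 s rather than roughly 0.1 s.
    print(handled, round(elapsed, 3))
```

The same idea applies whether the waiting is on sockets, disks, or downstream services: overlapping the waits raises throughput without faster hardware.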

In the early days of a fast-growing Internet business, if budget is not a problem, we strongly recommend the "improve single-machine hardware performance" approach to increase the system's concurrency capacity, because at this stage the company's business strategy is often a race against time, and improving single-machine hardware performance is often the fastest way.
Whether you improve the hardware performance or the architecture performance of a single machine, there is a fatal shortcoming: single-machine performance always has a limit. So the ultimate solution for high concurrency in distributed Internet architecture design is horizontal scaling.

Horizontal scaling: as long as you increase the number of servers, you can linearly expand system performance. Horizontal scaling places requirements on the system's architecture design. How to design each layer of the architecture so that it can scale horizontally, and the common horizontal-scaling practices at each layer of Internet companies' architectures, are the focus of this article.

III. Common layered Internet architecture
A common distributed Internet architecture is divided into:
(1) Client layer: the typical caller is a browser or a mobile app;
(2) Reverse proxy layer: the entry point of the system; reverse proxying;
(3) Site application layer: implements the core application logic and returns HTML or JSON;
(4) Service layer: if the architecture is service-oriented, this layer exists;
(5) Data layer - cache: caches speed up data access;
(6) Data layer - database: the database persists data;

How is horizontal scaling implemented at each layer of the system?

IV. Horizontal-scaling practice in layered architecture
Horizontal scaling of the reverse proxy layer
Horizontal scaling of the reverse proxy layer is achieved through "DNS round robin": the dns-server is configured with multiple IPs for one domain name, and each time a resolution request reaches the dns-server, it returns these IPs in rotation.
When nginx becomes the bottleneck, as long as you increase the number of servers, deploy new nginx services, and add new public IPs, you can expand the performance of the reverse proxy layer, making it theoretically infinitely scalable for high concurrency.
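The DNS round-robin behavior described above can be sketched in a few lines. This is a simulation of the routing idea, not a real DNS server; the domain and IPs are illustrative placeholders.

```python
# A sketch of "DNS round robin": the dns-server holds several IPs for one
# domain name and hands them out in turn, spreading clients across the
# reverse proxies. Domain and IPs below are illustrative placeholders.
from itertools import cycle

class RoundRobinDNS:
    def __init__(self, records: dict):
        # One rotation per domain name.
        self._cycles = {name: cycle(ips) for name, ips in records.items()}

    def resolve(self, name: str) -> str:
        """Each resolution request returns the next IP in rotation."""
        return next(self._cycles[name])

if __name__ == "__main__":
    dns = RoundRobinDNS({"www.example.com": ["10.0.0.1", "10.0.0.2", "10.0.0.3"]})
    print([dns.resolve("www.example.com") for _ in range(4)])
    # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Scaling out the reverse proxy layer then amounts to deploying one more nginx and appending its IP to the record list.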

Horizontal scaling of the site layer
Horizontal scaling of the site layer is achieved through "nginx". By modifying nginx.conf, multiple web backends can be configured.
When the web backends become the bottleneck, as long as you increase the number of servers, deploy new web services, and configure the new web backends in the nginx configuration, you can expand the performance of the site layer, making it theoretically infinitely scalable for high concurrency.
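As an illustration of modifying nginx.conf (the upstream name, hosts, and ports below are placeholders, not real configuration), adding a web backend is a one-line change:

```nginx
# Hypothetical nginx.conf fragment: multiple web backends behind one upstream.
upstream web_backend {
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
    server 192.168.0.3:8080;   # scaling out = deploy a server, add a line here
}

server {
    listen 80;
    location / {
        proxy_pass http://web_backend;
    }
}
```

By default nginx also distributes requests across the listed backends in round-robin fashion.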

Horizontal scaling of the service layer
Horizontal scaling of the service layer is achieved through the "service connection pool".
When the site layer calls the downstream service layer's RPC-server through an RPC-client, the connection pool in the RPC-client establishes connections with multiple downstream service instances. When a service becomes the bottleneck, as long as you increase the number of servers, deploy new service instances, and establish connections to them in the RPC-client, you can expand the performance of the service layer, making it theoretically infinitely scalable for high concurrency. If the service layer is to scale automatically without human intervention, a service registry with automatic service discovery may be required.
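The connection-pool idea above can be sketched as follows. The endpoints and the call itself are simulated placeholders; a real RPC-client would hold live connections and send serialized requests over them.

```python
# A sketch of the "service connection pool": the RPC-client keeps one
# connection per downstream service instance and spreads calls across them.
# Endpoints are illustrative placeholders.
import itertools

class RpcClient:
    def __init__(self, endpoints: list):
        self._pool = list(endpoints)          # one "connection" per instance
        self._next = itertools.cycle(self._pool)

    def add_endpoint(self, endpoint: str) -> None:
        """Scaling out the service layer: register a newly deployed instance."""
        self._pool.append(endpoint)
        self._next = itertools.cycle(self._pool)

    def call(self, method: str) -> str:
        endpoint = next(self._next)           # pick a connection from the pool
        return f"{method} -> {endpoint}"      # a real client sends the RPC here

if __name__ == "__main__":
    client = RpcClient(["svc-1:9090", "svc-2:9090"])
    print(client.call("get_user"))            # get_user -> svc-1:9090
    client.add_endpoint("svc-3:9090")         # new instance, callers unchanged
```

With a service registry, `add_endpoint` would be driven by service discovery instead of being called by hand.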

Horizontal scaling of the data layer
When the amount of data is large, the data layer (cache, database) involves horizontal scaling of the data: the data originally stored on one server (one cache, one database) is split horizontally across different servers, so as to expand system performance.

The Internet data layer is commonly split horizontally in a few ways; take the database as an example:

Horizontal split by range
(1) Simple rules: the service can route to the corresponding storage server just by determining which range the uid falls in;
(2) Good data balance;
(3) Relatively easy to scale: you can add a server for the uid range [20 million, 30 million] at any time;

Shortcomings:
(1) The request load is not necessarily balanced. In general, newly registered users are more active than old users, so the server for the larger uid range bears greater request pressure;
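Routing by range is a simple interval lookup, as the sketch below shows. The shard names and range boundaries are illustrative placeholders.

```python
# A sketch of routing by uid range. Each shard owns a half-open uid
# interval; boundaries and shard names are illustrative.
RANGES = [
    (0, 10_000_000, "db-0"),           # uid in [0, 10 million)
    (10_000_000, 20_000_000, "db-1"),  # uid in [10 million, 20 million)
    (20_000_000, 30_000_000, "db-2"),  # the newly added shard
]

def route_by_range(uid: int) -> str:
    for low, high, shard in RANGES:
        if low <= uid < high:
            return shard
    raise ValueError(f"uid {uid} outside all configured ranges")

if __name__ == "__main__":
    print(route_by_range(15_000_000))  # db-1
```

Scaling out only means appending a new interval; existing rows never move, which is why range splitting is easy to expand.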

Horizontal split by hash
(1) Simple rules: the service can route to the corresponding storage server just by hashing the uid;
(2) Good data balance;
(3) Good request uniformity;

Shortcomings:
(1) Not easy to scale: when a data server is added, the hash method changes, which may require data migration;
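The sketch below shows both the routing rule and why scaling out is painful: with a naive `uid % n` hash, changing n remaps most uids to a different shard, so their rows must migrate. Shard names and counts are illustrative.

```python
# A sketch of routing by hash, and of the migration cost when n changes.
# With n shards the shard is uid % n, so most uids move when n changes.
def route_by_hash(uid: int, n_shards: int) -> str:
    return f"db-{uid % n_shards}"

if __name__ == "__main__":
    uids = range(100)
    moved = sum(1 for uid in uids
                if route_by_hash(uid, 3) != route_by_hash(uid, 4))
    # Going from 3 to 4 shards relocates 73 of the 100 sample uids.
    print(moved)  # 73
```

Techniques such as consistent hashing reduce this migration cost, at the price of more complex routing rules.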

It should be noted that expanding database performance through horizontal splitting is fundamentally different from expanding database performance through master-slave synchronization with read-write separation.
Expanding database performance through horizontal splitting:
(1) The amount of data stored on each server is 1/n of the total, so single-machine performance is also improved;
(2) The data on the n servers has no intersection, and the union of the data on all servers is the full data set;
(3) With the data split horizontally onto n servers, read performance is theoretically expanded n times, and write performance is also expanded n times (in fact far more than n times, because each machine's data volume becomes 1/n of the original);

Expanding database performance through master-slave synchronization with read-write separation:
(1) The amount of data stored on each server is the same as the total;
(2) The data on the n servers is identical; each copy is the full data set;
(3) Read performance is theoretically expanded n times, but writing remains a single point, so write performance is unchanged;

V. Summary
High concurrency is one of the factors that must be considered in the architecture design of distributed Internet systems. It generally means designing the system so that it is guaranteed to process many requests in parallel.
There are two main methodological ways to improve system concurrency: vertical scaling (Scale Up) and horizontal scaling (Scale Out). The former improves concurrency by improving single-machine hardware performance or single-machine architecture performance, but single-machine performance always has a limit, so the ultimate solution for high concurrency in distributed Internet architecture design is the latter: horizontal scaling.

In a layered Internet architecture, each layer has its own horizontal-scaling practice:
(1) The reverse proxy layer can scale horizontally through "DNS round robin";
(2) The site layer can scale horizontally through nginx;
(3) The service layer can scale horizontally through the service connection pool;
(4) The data layer can scale horizontally by splitting data by range or by hash;

With horizontal scaling implemented at each layer, system performance can be improved by increasing the number of servers, so performance is theoretically unlimited.



Finally, I hope the line of thought in this article is clear and that you now have a systematic grasp of the concepts and practices of high concurrency. Combined with the earlier article "What exactly is the 'high availability' of Internet architecture", is distributed Internet architecture no longer a mystery?

For more technical information, follow: gzitcast


Origin www.cnblogs.com/heimaguangzhou/p/11550154.html