High concurrency analysis


Concept

A highly concurrent system is one whose services can handle many simultaneous requests.

Related concepts:

  • QPS (TPS): queries (or transactions) per second; the number of requests (e.g. HTTP requests) the system responds to each second.
  • Throughput: the number of requests processed per unit of time (usually determined by QPS and concurrency).
  • Response time: the average time the system takes to respond to a request. For example, if the system needs 200 ms to handle an HTTP request, then 200 ms is the system's response time (arguably this should cover only processing time, ignoring network transmission).
  • Concurrency: the number of accesses arriving at the same moment; the ability to handle multiple tasks, though not necessarily at the same instant.
  • Parallelism: multiple events happening at the same instant on multiple CPU cores, as opposed to a single core alternating between events over time.

QPS = concurrency / response time

The 80/20 rule: 80% of the traffic arrives in 20% of the time.

Scenario

Assume the total number of concurrent requests is 10000, each request takes t seconds to process, and the server can process n requests at a time. Then the total time T needed to process all requests is:

T = (10000 / n ) * t

Conversely, the number of requests processed per second is:

QPS = (1 / t ) * n
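These two formulas can be sketched directly; the values of t and n below are illustrative, not from the article:

```python
# Illustrative numbers: each request takes t seconds to process,
# and the server handles n requests at a time.
t = 0.2   # processing time per request, in seconds
n = 10    # requests processed simultaneously

total_requests = 10000
T = (total_requests / n) * t   # time to drain all 10000 requests
qps = (1 / t) * n              # requests handled per second

print(T)    # 200.0 seconds
print(qps)  # 50.0
```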

Suppose 80% of each day's visits are concentrated in 20% of the time (peak hours). If there are 3 million PV per day and our single machine handles 58 QPS, running on a single machine (which, of course, often goes down), derive an optimization plan from the performance data above.

qps = ( 3,000,000 x 0.8 ) / ( 24 x 3600 x 0.2 ) ≈ 139
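The peak-QPS calculation above can be reproduced as follows:

```python
pv_per_day = 3_000_000      # 3 million PV per day
peak_traffic_share = 0.8    # 80% of the traffic...
peak_time_share = 0.2       # ...arrives in 20% of the day

peak_seconds = 24 * 3600 * peak_time_share
peak_qps = pv_per_day * peak_traffic_share / peak_seconds
print(round(peak_qps))  # 139
```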

Option One: add more machines

Since one machine cannot handle the load, we add a few more. This involves master/slave databases with read/write splitting, and load-balancing techniques.

The principle is to spread out the pressure that was previously concentrated in one place. The change is quick and flexible, and fast to put into practice.

Machines needed: 139 / 58 ≈ 2.4, so round up to three machines.
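A quick sanity check on the machine count (139 peak QPS against 58 QPS per machine):

```python
import math

peak_qps = 139
single_machine_qps = 58

# Round up: a fractional machine still requires a whole one.
machines = math.ceil(peak_qps / single_machine_qps)
print(machines)  # 3
```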

Option Two: improve single-machine performance

How far single-machine performance can be pushed depends on the machine's configuration, and also on how complex your service really is.

Common examples: upgrade the machine's CPU and memory (increasing the number of requests the system can handle at once); enable PHP's opcache; use various forms of data caching; optimize code performance; optimize the database; use resident-memory techniques.
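As one concrete instance of "data caching" on a single machine, here is a minimal in-process cache sketch. The article's stack is PHP; this Python version (and the stand-in `load_product` function) only illustrates the idea of trading memory for repeated expensive work:

```python
import functools
import time

@functools.lru_cache(maxsize=1024)
def load_product(product_id: int) -> dict:
    # Stand-in for an expensive database query (hypothetical).
    time.sleep(0.01)
    return {"id": product_id, "name": f"product-{product_id}"}

# The first call pays the full cost; repeats are served from memory.
first = load_product(42)
second = load_product(42)
print(first == second)                    # True
print(load_product.cache_info().hits)     # at least 1 cache hit
```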

Analysis

Now suppose you need to design a system to analyze the current QPS of each interface of a product, as well as the overall QPS of the system.

qps = number of requests in a period / length of the period (in seconds)

Suppose we aggregate per hour or per minute. First, each request writes its response time and response result (each request carries a unique identifier) to a log file. A separate process reads the log files and sends the data to a queue; another process consumes the queue and persists the data to a database or NoSQL store.

Then analyze these data; computing QPS from them is not difficult. This can also be built into a company-wide logging system that lets each business line query its own logs, and it can serve monitoring and alerting (for example, on request timeouts or abnormal traffic surges). Why not write directly to the database and analyze there? Because under heavy traffic that approach would cause performance problems.
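A minimal sketch of the aggregation step. The log format and field names here are assumptions for illustration, not the article's:

```python
from collections import Counter

# Assumed log line format: "timestamp_second interface response_ms status"
log_lines = [
    "1609459200 /api/user 120 200",
    "1609459200 /api/user 80 200",
    "1609459200 /api/order 300 200",
    "1609459201 /api/user 95 200",
]

# Count requests per (interface, second); the count for a given
# second is that interface's QPS at that moment.
per_second = Counter()
for line in log_lines:
    ts, interface, _resp_ms, _status = line.split()
    per_second[(interface, int(ts))] += 1

print(per_second[("/api/user", 1609459200)])   # 2
print(per_second[("/api/order", 1609459200)])  # 1
```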

[fseek: position the file pointer at a given offset; fgets: read one line of content; feof: test whether the end of the file has been reached]
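The fseek/fgets/feof pattern hinted at above (resume reading a growing log from the last saved position) can be sketched in Python, using `seek`/`readline`/empty-string-at-EOF as rough equivalents; the log content below is illustrative:

```python
import os
import tempfile

# Create a sample log file (content is illustrative).
path = os.path.join(tempfile.mkdtemp(), "app.log")
with open(path, "w") as f:
    f.write("req-1 120ms\nreq-2 80ms\n")

def read_new_lines(path, offset):
    """Read lines appended since `offset`; return (lines, new_offset)."""
    lines = []
    with open(path) as f:
        f.seek(offset)            # like fseek: jump to the saved position
        while True:
            line = f.readline()   # like fgets: read one line
            if not line:          # like feof: empty string means end of file
                break
            lines.append(line.rstrip("\n"))
        return lines, f.tell()

offset = 0  # position persisted from the previous run
lines, offset = read_new_lines(path, offset)
print(lines)  # ['req-1 120ms', 'req-2 80ms']

# Append more lines, then read only the new part.
with open(path, "a") as f:
    f.write("req-3 95ms\n")
new_lines, offset = read_new_lines(path, offset)
print(new_lines)  # ['req-3 95ms']
```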


Origin www.cnblogs.com/followyou/p/10991462.html