Some basic concepts and principles of performance

Reference: "Thinking Clearly about Performance" (Cary Millsap)

 

Two indicators

Response time: the time it takes to execute a single task

Throughput: the number of tasks completed per unit of time

 

 

Response time and throughput are only roughly reciprocal

(the real relationship is more complicated)

> A throughput of 1000 tasks per second does not imply an average response time of 0.001 seconds: the system may have 1000 parallel channels, each executing one task with a response time of a full second.

 

> Conversely, a response time of 0.001 seconds does not guarantee a throughput of even 100 tasks per second: requests come from different users, do not arrive perfectly back to back, and may contend for shared resources, among other reasons.

 

So both must be measured (see the sketch below).
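
A minimal sketch of measuring both in one run (the harness and its names are illustrative, not from the paper):

```python
import time

def run_benchmark(task, window=5.0):
    """Drive `task` repeatedly for `window` seconds and report both
    throughput and average response time from the same run."""
    response_times = []
    deadline = time.perf_counter() + window
    while time.perf_counter() < deadline:
        start = time.perf_counter()
        task()
        response_times.append(time.perf_counter() - start)
    throughput = len(response_times) / window
    avg_response = sum(response_times) / len(response_times)
    return throughput, avg_response

# A 2 ms task: throughput is close to, but not exactly, 1/avg_response.
tput, rt = run_benchmark(lambda: time.sleep(0.002))
print(f"{tput:.0f} tasks/s, average response {rt * 1000:.2f} ms")
```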

 

 

Average response time doesn't tell the whole story

With the same average:

> If individual response times vary widely (high variance), the business may be unable to tolerate the calls that run much longer than the average.

 

> But it is also possible that the business demands extremely short response times for certain tasks while tolerating individual tasks that take somewhat longer.
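
A small illustration with invented numbers: two workloads share the same average but have very different worst cases.

```python
import statistics

# Two hypothetical workloads, both averaging 1.0 s per task:
steady = [1.0] * 10                 # low variance
skewed = [0.2] * 9 + [8.2]          # high variance: one very slow call

for name, samples in (("steady", steady), ("skewed", skewed)):
    print(f"{name}: mean={statistics.mean(samples):.1f}s "
          f"worst={max(samples):.1f}s")
# Same mean, but a user who hits the 8.2 s call experiences a very
# different system than the average suggests.
```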

 

Problem diagnosis

First, determine the specific performance goal you expect to reach, and make sure this goal is what users actually understand and expect.

> Show the process with sequence diagrams

 

> Use a professional measurement tool (a profiler) to find out where the time goes and which parts are acceptable. Based on the profiler's results, estimate the performance goals that should be achievable.
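
For instance, Python's standard-library profiler can produce such a breakdown (the profiled functions below are placeholders):

```python
import cProfile
import pstats

def slow_part():
    return sum(i * i for i in range(200_000))

def fast_part():
    return sum(range(1_000))

def main():
    for _ in range(10):
        slow_part()
        fast_part()

# Profile a run, then list the calls that dominate cumulative time;
# those are the candidates worth setting goals against.
profiler = cProfile.Profile()
profiler.runcall(main)
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```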

- Amdahl's Law

The performance improvement achievable by making one component of a system faster depends on how heavily that component is used, that is, on its share of total execution time.
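
In formula form, accelerating a fraction p of the execution time by a factor s yields an overall speedup of 1 / ((1 - p) + p / s). A quick sketch:

```python
def amdahl_speedup(p, s):
    """Overall speedup when a fraction p of execution time is made
    s times faster (Amdahl's Law)."""
    return 1.0 / ((1.0 - p) + p / s)

# A 10x improvement to a component worth 20% of execution time
# barely helps; the same improvement to a 90% component is dramatic.
print(f"{amdahl_speedup(0.2, 10):.2f}x")   # ~1.22x
print(f"{amdahl_speedup(0.9, 10):.2f}x")   # ~5.26x
```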

 

- Assess the cost of each improvement. The estimate may be inaccurate, and overlooked details may break the original design and cause performance problems elsewhere.

 

- Record the history of improvements, including the expected effect and the actual result of each change.

 

- The cost of a single subroutine is not evenly distributed across its calls. Optimizing a frequently called subroutine may not deliver the expected gain: even if one call becomes twice as fast, individual calls can vary so much that the total time spent in that subroutine does not drop by half (see the sketch below).
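
A toy example of that skew, with invented numbers:

```python
# Ten calls to the same subroutine: nine are cheap, one dominates.
durations = [0.1] * 9 + [9.1]                # ~10.0 s in total

# Halving the cost of the nine cheap calls barely moves the total:
optimized = [d / 2 if d < 1.0 else d for d in durations]
print(f"{sum(durations):.2f} s -> {sum(optimized):.2f} s")  # 10.00 -> 9.55
```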

 

- Effectiveness

* Focus on where your business needs to improve the most.

* Remove redundant operations/calls without sacrificing business functionality.

* Improve the environment around the program (fix unreasonable designs, adjust hardware configuration, etc.), reducing the system's workload if necessary.

- Load

(one of the reasons developers cannot catch every performance issue before production)

* Queuing Delay

Every piece of hardware (CPU, memory, disk, etc.) has its own performance inflection point: past a certain utilization, queuing delay climbs steeply (see the first sketch after this list).

* Coherency Delay

Delays caused by maintaining consistency, including locks of all kinds. (It is hard to guarantee that performance tests have verified that such delays will not affect the final service; see the second sketch below.)
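
Two tiny sketches of these effects, both with invented numbers and stand-in workloads. First, queuing delay: under the textbook M/M/1 model (an assumption brought in here, not from the paper), response time is R = S / (1 - u) for service time S and utilization u, so delay explodes past the knee:

```python
S = 0.010                                    # 10 ms of pure service time
for u in (0.50, 0.80, 0.90, 0.95, 0.99):
    print(f"utilization {u:.0%}: response time {S / (1 - u) * 1000:.0f} ms")
# 20, 50, 100, 200, 1000 ms: the same task slows 50x near saturation.
```

Second, coherency delay: a shared lock serializes otherwise parallel workers, so adding threads does not add throughput:

```python
import threading
import time

lock = threading.Lock()

def task():
    with lock:             # the shared lock admits one worker at a time
        time.sleep(0.01)   # 10 ms of "work" held under the lock

threads = [threading.Thread(target=task) for _ in range(10)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{time.perf_counter() - start:.2f} s")  # ~0.10 s, not ~0.01 s
```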

 

 

Measurement methods

(Throughput is comparatively easy to measure; response time is harder to measure accurately.)

It is better to measure what is actually needed than surrogate data that is merely easy to collect (see the sketch below).
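
One common surrogate is CPU time, which is easy to collect but hides time spent waiting. A sketch of the difference (the handler is a stand-in):

```python
import time

def handle(request):
    time.sleep(0.02)       # stand-in for I/O wait: uses almost no CPU

start_wall = time.perf_counter()
start_cpu = time.process_time()
handle("req-1")
wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(f"wall {wall * 1000:.1f} ms vs CPU {cpu * 1000:.1f} ms")
# ~20 ms of user-visible response time, ~0 ms of CPU time: the easy
# surrogate would report that nothing is wrong.
```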

 

 

Performance can only be truly observed in the production environment

Earlier stages should therefore plan for measurability: make performance testing easy, and add code that collects performance-related data (a sketch follows).
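
A minimal sketch of such instrumentation as a hypothetical decorator (in a real system the timings would feed a metrics pipeline rather than stdout):

```python
import functools
import time

def instrumented(fn):
    """Record each call's response time so production behaviour can be
    analysed later."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            print(f"{fn.__name__}: {elapsed * 1000:.2f} ms")
    return wrapper

@instrumented
def handle_request():
    time.sleep(0.005)      # stand-in for real work

handle_request()
```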
