Common Performance Test Indicators

TPS

TPS (Transactions Per Second) measures how many transactions a system processes per second. In transaction processing systems (OLTP), each transaction must both execute efficiently and preserve the correctness of the data: whenever a new record is generated, the information must be saved correctly.
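As a minimal sketch of what this metric means, the loop below runs a transaction repeatedly for a fixed interval and reports the completed-transactions-per-second rate. The function name and parameters are illustrative, not from the original post.

```python
import time

def measure_tps(transaction, seconds=1.0):
    """Illustrative sketch: run `transaction` repeatedly for `seconds`
    and report completed transactions per second."""
    deadline = time.monotonic() + seconds
    count = 0
    while time.monotonic() < deadline:
        transaction()  # one transaction; a real test would hit the system under test
        count += 1
    return count / seconds

# Example: a trivial in-process "transaction"
print(measure_tps(lambda: sum(range(100)), seconds=0.5))
```

A real load tool (LoadRunner, JMeter, etc.) does essentially this, but with many concurrent virtual users.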

Response Time

Response time is a concept used in several fields of computing: displays, networks, and sensors. For a sensor, response time refers to how the output responds to a step change in the measured load; it is usually defined as the time required for the sensor's output to reach 90% of its final value after the step change. Overall network response time is influenced by several different mechanisms. In the display field, response time is the speed at which each pixel of an LCD responds to the input signal, i.e., the time a pixel needs to go from dark to bright or from bright to dark (the principle is that a voltage applied to the liquid crystal molecules twists them and then lets them recover). The commonly quoted 25 ms or 16 ms figures refer to this reaction time; the shorter it is, the less trailing shadow the user will notice when watching a moving image. Reaction time is generally divided into two parts, rise time and fall time, and the quoted value is the sum of the two.
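The 90%-of-final-value definition above can be sketched as a small function: given timestamped samples of a signal after a step change, find the first time it reaches 90% of its final value. The names and sample data are illustrative.

```python
def step_response_time(samples, final_value, threshold=0.9):
    """Return the time at which the signal first reaches `threshold`
    (e.g. 90%) of its final value after a step change.

    `samples` is a list of (time, value) pairs in time order.
    """
    target = threshold * final_value
    for t, v in samples:
        if v >= target:
            return t
    return None  # the signal never reached the threshold

# A sensor settling toward a final value of 10.0:
samples = [(0.0, 0.0), (0.1, 5.0), (0.2, 8.0), (0.3, 9.2), (0.4, 9.9)]
print(step_response_time(samples, final_value=10.0))  # → 0.3
```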

Connections per Second

Connections per Second counts the new connections established between clients and the server, which helps you understand how many connections the server creates each second. The more simultaneous connections there are, the larger the server's connection pool needs to be. When the load keeps rising and the connection pool fills up, the server usually starts returning 504 errors; at that point you need to raise the server's maximum connection limit to resolve the problem.
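A simple way to picture this metric is a sliding one-second window over connection events. The sketch below counts how many new connections fell inside the last second; the class and its names are illustrative, not from any particular tool.

```python
import time
from collections import deque

class ConnectionRateMeter:
    """Illustrative sketch: count new connections seen in the last `window` seconds."""

    def __init__(self, window=1.0):
        self.window = window
        self.events = deque()  # timestamps of new connections

    def record(self, now=None):
        """Call once per new connection."""
        self.events.append(time.monotonic() if now is None else now)

    def rate(self, now=None):
        """Connections seen within the last `window` seconds."""
        now = time.monotonic() if now is None else now
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()  # drop events that fell out of the window
        return len(self.events)

m = ConnectionRateMeter()
for t in (0.0, 0.2, 0.5, 0.9, 1.5):
    m.record(now=t)
print(m.rate(now=1.6))  # → 2  (only the 0.9 and 1.5 events are within the last second)
```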

CPU Utilization

CPU utilization is easy to understand: utilization above 75% is generally considered high (some say 80% or higher). Besides this indicator, you should also look at Load Average and Context Switch Rate, since a high CPU utilization may be caused by either of those two indicators being high.

In general, Load Average is related to the number of cores on the machine. Taking a single-core machine as an example, load = 0.5 means the CPU still has half its resources free to handle additional thread requests; load = 1 means all CPU resources are busy handling requests, with nothing left over; and load = 2 means the CPU is overloaded, with as many threads again waiting to be processed as are running. So on a single-core machine the ideal state is a Load Average below 1; similarly, for a dual-core machine, below 2. The conclusion: on a multi-core machine, Load Average should not exceed the total number of processor cores.

Load Average is harder to judge. After searching the internet for a while, I did not find a definitive explanation. In a test with 100 concurrent users, the two values were 77.534% and 6.108: CPU utilization was high, and Load Average also looked a little high. Later I found the following two blog posts. "Understanding Load Average for good stress testing" says: "Load Average is the CPU load; the information it contains is not CPU usage, but a statistic of the number of processes the CPU is handling and waiting for over a period of time, i.e., a statistic of the CPU run-queue length." It explains the basic principles of multi-process and multi-thread programs. "Understanding the Linux processor load average (translation)" sums it up in one sentence:

Load Average < number of CPUs * cores per CPU * 0.7

For a single-core CPU, Load Average should be < 1 * 1 * 0.7 = 0.7; for one 4-core CPU, Load Average must be < 1 * 4 * 0.7 = 2.8.
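The rule of thumb above is easy to encode. A minimal sketch, with illustrative names (`load_is_healthy`, the `factor` parameter for the 0.7 margin):

```python
import os

def load_is_healthy(load_avg, cpu_count, factor=0.7):
    """Rule of thumb from the post: load should stay below cores * 0.7."""
    return load_avg < cpu_count * factor

# 4-core machine: the threshold is 1 * 4 * 0.7 = 2.8
print(load_is_healthy(6.108, 4))  # → False (the 6.108 observed in the test above)
print(load_is_healthy(2.5, 4))    # → True

# On Unix you can check the live values:
load1, load5, load15 = os.getloadavg()
print(load_is_healthy(load1, os.cpu_count()))
```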

View CPU information: grep 'model name' /proc/cpuinfo

Context Switch Rate measures process (thread) switches. If switching happens too often, the CPU becomes busy with the switching itself, which can hurt throughput. Section 2 of the article "High-Performance Server Architecture" discusses exactly this issue. How much is too much? After a lot of googling, there is no precise answer. Context switches generally come from two sources: interrupts and process (thread) switches. An interrupt causes a switch, and creating or activating a process (thread) can also cause one. The CS value is also related to TPS (Transactions Per Second): assuming each call causes N switches, we can derive

Context Switch Rate = Interrupt Rate + TPS * N

Subtracting the Interrupt Rate from the CSR leaves the process/thread switches. If the main process hands a request to a worker thread and the worker hands the result back to the main process, that is two switches. You can also plug measured CSR, IR, and TPS values into the formula to derive how many switches each transaction causes. Therefore, to reduce the CSR you must work on the switches each transaction causes; only when N goes down can the CSR come down. Ideally N = 0, but in any case if N >= 4 you should take a careful look. Some sources online say CSR < 5000 is acceptable, but I don't think the standard is that simple.
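Rearranging the formula above gives N = (CSR - IR) / TPS, the number of context switches each transaction causes. The numbers below are made up for illustration:

```python
def switches_per_transaction(csr, interrupt_rate, tps):
    """From: Context Switch Rate = Interrupt Rate + TPS * N
    =>  N = (CSR - IR) / TPS.  All inputs are per-second rates."""
    return (csr - interrupt_rate) / tps

# Hypothetical measurements: 6000 switches/s, 1000 interrupts/s, 1000 TPS
n = switches_per_transaction(csr=6000, interrupt_rate=1000, tps=1000)
print(n)  # → 5.0, i.e. N >= 4, worth a careful look per the rule of thumb above
```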

Other information:

These three indicators can be monitored in LoadRunner; in addition, on Linux you can use vmstat.

Origin www.cnblogs.com/wyf0518/p/11303201.html