Performance Testing Learning Path (3): Common JMeter Performance Metrics (Aggregate Report && Graph Results && Summary Report && Server Performance Monitoring Configuration)

1 Purposes of performance testing

The purpose of performance testing is to verify that the software system can achieve the performance levels the users expect, and to find the performance bottlenecks that exist in the system so the software can be optimized.

Ultimately the goal is to optimize the system. The purposes of performance testing include the following aspects:

1. Assess the system's capability: the load and response-time data obtained from the test can be used to validate the expected capability model and help make decisions.

2. Identify the system's weaknesses: a controlled load can be increased to an extreme level to break the system, so that bottlenecks and weak points can be found and fixed.

3. Tune the system: run the tests repeatedly while adjusting the system, verify whether the adjustments produce the expected results, and thereby improve performance.

4. Detect software problems: long test runs can cause failures due to memory leaks, revealing latent problems or conflicts in the program.

5. Verify stability (resilience) and reliability: running the test under a production-like load for a certain period of time is the only way to evaluate whether the system's stability and reliability meet requirements.

2 Common classifications of performance tests

Performance tests include load testing, stress testing, and capacity testing.

2.1 Load Test (Load Testing)

Load testing examines system performance under conditions of resource overload in order to discover errors or verify the load capacity of the system design.

The goal of load testing is to determine and ensure that the system still operates normally when the maximum expected workload is exceeded. Load testing also evaluates performance characteristics such as response times and transaction rates.

2.2 Stress Test (Stress Testing)

A stress test applies constant pressure to the system, identifying bottlenecks or points where performance becomes unacceptable in order to determine the maximum service level the system can provide.

For example, testing a web site under heavy load to see when the system's responses degrade or fail.

2.3 Capacity Test (Volume Testing)

A capacity test determines the maximum number of concurrent users the system can handle.

An example to illustrate load testing, stress testing, and capacity testing:
Example: a person carries X pounds on his back.
Load test: carrying 200 pounds, can he hold out for five minutes?
Stress test: carrying 200, 300, 400 pounds and so on, how does he perform? At what point does he fail? How does he perform after failing, and is he back to normal when carrying 200 pounds again?
Capacity test: if he must hold out for five minutes, what is the maximum number of pounds he can carry?

3 Common performance test indicators

Software under test commonly uses one of two architectures: B/S (browser/server) and C/S (client/server).

3.1 B/S architecture: common performance indicators

For software with a B/S architecture, testing usually focuses on the following web server performance indicators:

Avg RPS: average number of responses per second = total requests / elapsed seconds
Avg Time to Last Byte per Iteration (msec): average number of iterations of the business script per second
Successful Rounds: successful requests
Failed Rounds: failed requests
Successful Hits: successful hits
Failed Hits: failed hits
Hits Per Second: hits per second
Successful Hits Per Second: successful hits per second
Failed Hits Per Second: failed hits per second
Attempted Connections: number of connection attempts
Throughput: throughput
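To make the formulas above concrete, here is a minimal sketch that computes average responses per second and throughput in KB/sec from an invented request log; it only illustrates the definitions above and is not JMeter code.

    // Illustration of the indicator formulas above, using a made-up
    // request log. Each entry is {timestampMs, bytesTransferred}.
    public class BsMetricsDemo {
        public static void main(String[] args) {
            long[][] requests = {
                    {0, 2048}, {150, 1024}, {400, 4096},
                    {900, 512}, {1300, 2048}, {1800, 1024},
            };

            double elapsedSec =
                    (requests[requests.length - 1][0] - requests[0][0]) / 1000.0;

            long totalBytes = 0;
            for (long[] r : requests) totalBytes += r[1];

            // Avg RPS = total requests / elapsed seconds.
            double avgRps = requests.length / elapsedSec;
            // Throughput expressed as KB transferred per second.
            double kbPerSec = (totalBytes / 1024.0) / elapsedSec;

            System.out.printf("Avg RPS = %.2f, Throughput = %.2f KB/sec%n",
                    avgRps, kbPerSec);
        }
    }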

 

3.2 C/S architecture: common performance indicators

For programs with a C/S architecture, the back end is usually a database, so testing pays more attention to database indicators.

Besides the concepts in the table below, other common indicators include CPU usage, memory usage, and the database connection pool.

User Connections: the number of user connections, i.e. the number of connections to the database
Number of Deadlocks: the number of database deadlocks
Buffer Cache Hit: the hit ratio of the database buffer cache
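As a rough, generic illustration of the buffer cache hit indicator (each database computes its published ratio from its own counters; the numbers here are invented), the underlying idea is hits divided by total lookups:

    // Generic cache hit ratio: hits / (hits + misses). This is only the
    // idea behind the indicator, with invented numbers; real databases
    // expose their own counters for it.
    public class CacheHitRatio {
        public static void main(String[] args) {
            long cacheHits = 9_500;  // reads served from the buffer cache
            long cacheMisses = 500;  // reads that had to go to disk

            double hitRatio = (double) cacheHits / (cacheHits + cacheMisses);
            // A low ratio suggests the cache is too small or queries scan
            // too much data.
            System.out.printf("buffer cache hit ratio = %.1f%%%n",
                    hitRatio * 100);
        }
    }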

4 Analyzing performance test results

4.1 How to analyze performance test results

1. Analyze whether the test environment was stable and normal for the duration of the performance test run.

For example: during a test run the JMeter machine's CPU usage frequently reaches 100% (or memory usage is high); network congestion delays responses and skews the results; the system under test has misconfigured parameters (JDBC connection pool size, etc.).

2. Check that the JMeter test script parameters are set reasonably and that JMeter is run in a reasonable mode.

For example, if the thread group's Ramp-Up Period (in seconds) is set to 0 or 1, JMeter starts all virtual users in the thread group almost instantly, which can put enormous pressure on the server under test. At best this leads to long response times; at worst some virtual users time out while waiting for a response.
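JMeter spreads thread starts evenly across the ramp-up period, so the gap between consecutive starts is the ramp-up time divided by the thread count. A small sketch of that schedule (the counts are invented; this only simulates the arithmetic, it does not run JMeter):

    // How a ramp-up period spreads out thread starts: the gap between
    // consecutive starts is rampUpSeconds / threadCount.
    public class RampUpDemo {
        public static void main(String[] args) {
            int threads = 10;
            int rampUpSeconds = 100; // set to 0 to start every thread at once

            double gap = (double) rampUpSeconds / threads;
            for (int i = 0; i < threads; i++) {
                System.out.printf("virtual user %2d starts at t = %5.1f s%n",
                        i + 1, i * gap);
            }
            // With rampUpSeconds = 0 (or 1), all starts collapse to (nearly)
            // t = 0, which is what slams the server in the scenario above.
        }
    }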

3. Check whether the test results reveal a system bottleneck.

The principle of performance measurement and analysis: work from the outside in, from the surface to the core, peeling away layer by layer.

Response time consists of two parts: server response time (application server plus database server response time) and network transmission time.

4.2 How to use listeners to discover performance deficiencies

4.2.1 Graph Results

Number of samples: the total number of requests sent to the server
Latest sample (black): a time value; the server's response time for the most recent request (the current sample's response time)
Throughput (green): the actual number of requests the server processes per minute
Average (blue): the average sample response time (the sum of sample response times divided by the number of samples)
Median (purple): a time value; half of the server's response times fall below this value and half above it
Deviation (red): a measure of how widely the server response times are dispersed, in other words, the spread of the data distribution

Testers can increase or decrease the number of concurrent threads and the script delays to find the maximum throughput the system supports.

The fields in Graph Results with the most reference value are: average, deviation, and throughput.

4.2.1.1 Average

As the concurrent load increases, the system's response time changes over time. Under normal circumstances, the curve of the average sample response time should be smooth and roughly parallel to the bottom edge of the graph.

Average-value patterns that may indicate performance problems:

The average jumps up in the initial stage and then gradually levels off. Possible causes:

First, the system has a performance deficiency in the initial stage and needs further optimization, such as a slow database query.

Second, the system has a caching mechanism and the performance test data did not vary during the test; with identical data, responses are naturally slow only in the initial stage. This is a problem with the preparation of test data, not a performance deficiency, and the test should be rerun after adjusting the data.

Third, the behavior is inherent to the system architecture; for example, on receiving the first request the application server establishes a connection to the database and does not release it for some time afterwards.

The average keeps increasing and the curve becomes steeper and steeper. Possible cause:

First, there may be a memory leak. In this case, monitor the system logs and the application server's status (a common method) to locate the problem.

The average suddenly jumps during the performance test and then returns to normal. Possible causes:

First, the system may have a performance deficiency.

Second, the test environment may be unstable (check the application server's CPU and memory consumption, or check whether the test environment's network is congested).

4.2.1.2 Deviation

Looking at the standard deviation of the sample response times shows whether the data are evenly distributed. Ideally the deviation curve is smooth and flat.
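As an illustration, the sketch below computes the standard deviation of a set of response times, which is what the deviation field summarizes; the sample values are invented, and the outlier shows how a single slow response inflates the figure.

    import java.util.List;

    // Standard deviation of sampled response times (not JMeter's source,
    // just the underlying statistic).
    public class DeviationDemo {
        public static void main(String[] args) {
            List<Long> responseTimesMs = List.of(120L, 130L, 125L, 900L, 118L);

            double mean = responseTimesMs.stream()
                    .mapToLong(Long::longValue).average().orElse(0);

            double variance = responseTimesMs.stream()
                    .mapToDouble(t -> (t - mean) * (t - mean))
                    .average().orElse(0);

            double stdDev = Math.sqrt(variance);
            // A std dev that is large relative to the mean (here the 900 ms
            // outlier inflates it) signals unevenly distributed times.
            System.out.printf("mean = %.1f ms, std dev = %.1f ms%n",
                    mean, stdDev);
        }
    }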

4.2.1.3 Throughput

The actual number of samples the server processes per minute. First, increase or decrease the number of concurrent threads and script delays to find the maximum throughput the system can support. Then compare the system's actual maximum throughput with the expected throughput to verify that the system's performance meets user needs.
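That search can be scripted. The sketch below is not part of JMeter: it steps the thread count up until throughput stops improving, and measureThroughput is a hypothetical stand-in for running one test at a given concurrency (for example, via jmeter -n with a property-driven thread count) and reading back the requests per second.

    // Hypothetical illustration of "increase concurrency until throughput
    // plateaus". measureThroughput() stands in for a real test run.
    public class MaxThroughputSearch {
        public static void main(String[] args) {
            double best = 0;
            int bestThreads = 0;
            for (int threads = 10; threads <= 500; threads += 10) {
                double rps = measureThroughput(threads);
                System.out.printf("%d threads -> %.1f req/s%n", threads, rps);
                if (rps > best) {
                    best = rps;
                    bestThreads = threads;
                } else if (rps < best * 0.95) {
                    break; // throughput has clearly passed its peak
                }
            }
            System.out.printf("max throughput ~ %.1f req/s at %d threads%n",
                    best, bestThreads);
        }

        // Placeholder: in practice this would launch a test run (for
        // example "jmeter -n -t plan.jmx -Jthreads=<n>") and parse the
        // summary output. The curve below is faked for the demo.
        static double measureThroughput(int threads) {
            return Math.min(threads * 2.0, 400) - threads * 0.1;
        }
    }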

4.2.2 Assertion Results

The Assertion Results listener shows the label of each sample that carries assertions; if an assertion fails, the failure is displayed as well.

4.2.3 View Results Tree

The View Results Tree listener shows the response results of all samples in a tree, and testers can inspect any sample's response through it.

View Results Tree is generally used for debugging performance test scripts. It can also search within a result and, using regular expressions, extract data from the response.
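As an illustration of that kind of extraction, this sketch uses a plain Java regular expression to pull a value out of a response body, similar in spirit to what a Regular Expression Extractor is configured to do; the response text and pattern are invented.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Extracting a value from a response body with a regular expression.
    // Capture group 1 holds the extracted value.
    public class RegexExtractDemo {
        public static void main(String[] args) {
            String responseBody =
                    "{\"status\":\"ok\",\"sessionId\":\"abc123\",\"user\":\"wendy\"}";

            Pattern p = Pattern.compile("\"sessionId\":\"(\\w+)\"");
            Matcher m = p.matcher(responseBody);

            if (m.find()) {
                System.out.println("extracted sessionId = " + m.group(1));
            } else {
                System.out.println("pattern not found in response");
            }
        }
    }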

4.2.4 View Results in Table

This listener creates one row per sample result, but it consumes more memory.

In View Results in Table, testers can see each sample's start time, the system's response time, and the number of bytes, and can determine exactly when a problem sample occurred.

4.2.5 Aggregate Report

Some of its concepts are as follows:

Label: describes the request type, such as HTTP Request or FTP Request
#Samples: the same as the number of samples in Graph Results; the total number of samples sent to the server
Average: the same as the average in Graph Results; the average response time (the sum of sample response times divided by the number of samples)
Median: the same as the median in Graph Results; a time value such that half of the server response times fall below it and half above it
90% Line: the value that 90% of the requests' response times fall below
Min: a time value; the shortest server response time
Max: a time value; the longest server response time
Error%: the percentage of requests that resulted in errors
Throughput: the same as in Graph Results; the number of requests the server handles per unit time (note whether the unit shown is seconds or minutes)
KB/sec: the number of kilobytes requested per second (the volume of data transferred)

Use the Aggregate Report to judge comprehensively whether system performance meets the requirements. The statistics of most interest include the average response time, the 90% Line threshold, the throughput (requests completed per second), and the error rate.

PS: definition of the 90% Line:

Sort a set of numbers in ascending order and find the number at the 90% position (say it is 12); then 90% of the numbers in the set are 12 or less.

Applied to response times in performance testing this is very meaningful: it means that 90% of request response times do not exceed 12 seconds.
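A minimal sketch of that calculation (the data are invented, and the exact index-rounding convention may differ slightly from JMeter's):

    import java.util.Arrays;

    // The 90% Line: sort the response times and take the value at the
    // 90% position.
    public class NinetyPercentLine {
        public static void main(String[] args) {
            long[] responseTimesSec = {3, 5, 2, 8, 12, 4, 6, 7, 12, 1};
            Arrays.sort(responseTimesSec);
            // Sorted: 1 2 3 4 5 6 7 8 12 12

            // ceil(0.9 * n) as a 1-based rank, converted to a 0-based index.
            int rank = (int) Math.ceil(0.9 * responseTimesSec.length);
            long ninetyLine = responseTimesSec[rank - 1];

            // 90% of the requests completed in 12 seconds or less.
            System.out.println("90% line = " + ninetyLine + " s");
        }
    }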

4.2.6 Aggregate Graph

The Aggregate Graph is consistent with the Aggregate Report; the difference is that the Aggregate Graph also generates charts.

 

4.2.7 Summary Report

Label: the sample's label
#Samples: the total number of samples with the same label
Average: the average response time of the requests (transactions)
Min: the smallest response time among samples with the same label
Max: the largest response time among samples with the same label
Std.Dev.: the standard deviation of the sample response times
Error%: the percentage of transactions that resulted in errors
Throughput: the throughput measured in samples per second/minute/hour (TPS)
KB/sec: the throughput measured in KB per second (data traffic per second)
Avg.Bytes: the average size of the sample responses in bytes (average data traffic)

 

4.2.8 Server performance monitoring configuration

JMeter uses plugins for server performance monitoring, covering the server's CPU, memory, swap, disk I/O, and network I/O. Add the PerfMon Metrics Collector listener on the client side and start the agent on the server side; then, while HTTP requests are running, server resource usage can be monitored.

4.2.8.1 Plugin downloads

Visit https://jmeter-plugins.org/downloads/old/ and download three files; among them are the client-side plugins JMeterPlugins-Standard and JMeterPlugins-Extras (the version I currently use is 1.4.0).

 

Visit https://jmeter-plugins.org/wiki/Start/ and download the server-side plugin ServerAgent.

 

4.2.8.2 Client-side plugin configuration

Unzip the two client files, enter the path JMeterPlugins-Extras(Standard)-1.3.1\lib\ext, copy the two jar files JmeterPlugins-Extras.jar and JmeterPlugins-Standard.jar into the JMeter client's lib/ext folder, and open JMeter. If the PerfMon Metrics Collector appears among the listeners, the client-side configuration has succeeded.

 

4.2.8.3 Server-side plugin configuration

Upload ServerAgent-2.2.1.jar to the server under test, unzip it, and enter the directory. In a Windows environment, double-click ServerAgent.bat to start it; in a Linux environment, execute ServerAgent.sh. It listens on port 4444 by default. If the expected startup output appears, the server-side configuration has succeeded.

4.2.8.4 Running monitoring with the PerfMon Metrics Collector

 


Origin www.cnblogs.com/wendyw/p/11626915.html