JMeter result analysis (Listeners)

Result Analysis (Listener):

1. Aggregate Report

Aggregate Report is one of the most commonly used Listeners in JMeter. A colleague asked me again today what the data in this report means, so I am writing it up here for everyone to refer to.

If we are doing performance testing of a Web application with, for example, only a single login request, the Aggregate Report will show one row of data with a total of 10 fields, whose meanings are as follows (a small computation sketch follows the list).

Label: every JMeter element (e.g. an HTTP Request) has a Name attribute; what is shown here is the value of that Name attribute.

#Samples: the total number of requests issued in this test; if you simulate 10 users with 10 iterations each, 100 is shown here.

Average: the average response time. By default this is the average response time of a single Request; when a Transaction Controller is used, it can instead show the average response time per Transaction.

Median: the median response time; 50% of users got a response within this time.

90% Line: the response time within which 90% of users got a response.

Note: for what the 50% and 90% figures mean with respect to the number of concurrent users, please refer to:

http://www.cnblogs.com/jackei/archive/2006/11/11/557972.html

Min: minimum response time

Max: maximum response time

Error%: the number of requests that produced errors in this test divided by the total number of requests.

Throughput: by default, the number of requests completed per second (Requests per Second); when a Transaction Controller is used, it can also represent something similar to LoadRunner's Transactions per Second.

KB/Sec: the amount of data received from the server per second, equivalent to LoadRunner's Throughput/Sec.
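To make these fields concrete, here is a minimal sketch (my own illustration, not JMeter's actual code) that derives the main Aggregate Report columns from made-up sample data:

```java
import java.util.Arrays;

public class AggregateReportSketch {
    public static void main(String[] args) {
        // Made-up data: elapsed times (ms) and response sizes (bytes) for one Label
        long[] elapsed = {120, 95, 210, 150, 300, 180, 90, 400, 130, 160};
        long[] bytes   = {2048, 2048, 4096, 2048, 8192, 2048, 2048, 4096, 2048, 2048};
        int errors = 1;               // samples whose response failed
        double durationSec = 5.0;     // wall-clock length of the test

        long sum = 0, totalBytes = 0;
        for (int i = 0; i < elapsed.length; i++) {
            sum += elapsed[i];
            totalBytes += bytes[i];
        }

        long[] sorted = elapsed.clone();
        Arrays.sort(sorted);
        double average = (double) sum / elapsed.length;
        long median = sorted[sorted.length / 2];                        // 50% finish within this
        long line90 = sorted[(int) Math.ceil(sorted.length * 0.9) - 1]; // 90% Line (nearest rank)

        System.out.printf("#Samples=%d Average=%.1f Median=%d 90%%Line=%d Min=%d Max=%d%n",
                elapsed.length, average, median, line90, sorted[0], sorted[sorted.length - 1]);
        System.out.printf("Error%%=%.1f%% Throughput=%.1f/s KB/sec=%.1f%n",
                100.0 * errors / elapsed.length,
                elapsed.length / durationSec,
                totalBytes / 1024.0 / durationSec);
    }
}
```

(JMeter's own percentile calculation may differ slightly; the nearest-rank form above is only for illustration.)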


2. Graph Results

(screenshot: Graph Results)

The parameters at the bottom of the graph have the following meanings:
No. of Samples: the total number of requests sent to the server.
Latest Sample: the server's response time for the most recent request.
Throughput: the number of requests the server handles per minute.
Average: the total run time divided by the number of requests sent to the server.
Median: a time value such that half of the server's response times fall below it and the other half above it.
Deviation: the deviation of the server response times, which measures how dispersed the values are; in other words, the distribution of the data (a short sketch follows).
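As a quick illustration with made-up numbers (my sketch, not JMeter's code), the Deviation here is the standard deviation of the sample times:

```java
import java.util.Arrays;

public class DeviationDemo {
    public static void main(String[] args) {
        // Made-up sample times in milliseconds
        long[] elapsed = {120, 95, 210, 150, 300, 180, 90, 400, 130, 160};
        double mean = Arrays.stream(elapsed).average().orElse(0);
        double variance = Arrays.stream(elapsed)
                .mapToDouble(t -> (t - mean) * (t - mean))
                .average().orElse(0);
        // Larger deviation = more widely dispersed response times
        System.out.printf("Deviation = %.1f ms%n", Math.sqrt(variance));
    }
}
```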

3. Monitor Results

" Monitor results " (Monitor Result) Tomcat 5 is designed to reflect the performance of real-time server, if you are not AppServer Tomcat 5, use the "monitor the results" can not get a result, but any servlet container can transplant status servelet and use this monitor, if required for other use of the Monitor's AppServer, you need to transplant the Tomcat status servelet 5.

JMeter's "Monitor Result" makes use of a feature of Tomcat itself: it directly accesses /manager/status on the Tomcat server to obtain the corresponding status data and presents it. So, the steps to add "Monitor Results" in JMeter to monitor server status are as follows:

1. Add an HTTP Request Sampler;

2. Select the new HTTP Request and modify its properties:

Set "Name" to "Server Status" (optional);

Set "Path" to /manager/status, and fill in the server's IP address and Port if necessary;

Add a parameter whose name is XML in uppercase and whose value is true in lowercase;

Check "Use as Monitor" at the bottom.

As shown below:

(screenshot: HTTP Request configuration)

3. Add an "HTTP Authorization Manager", because accessing /manager/status on the Tomcat application server requires a user name and password. As shown below:

(screenshot: HTTP Authorization Manager configuration)

4. Add a "Monitor Results" node.

After execution, the server's performance is shown in the Monitor Results graphs. The health indicator (healthy / warning / inactive) is derived from the number of threads in use on the server relative to the maximum number of threads available, and Load measures how much pressure the application server is under.

(screenshot: Monitor Results)
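Under the hood, the monitor's request is essentially the GET sketched below. This is my own illustration, not JMeter's code, and the host, port, and tomcat:tomcat credentials are placeholders you would replace with your own manager account:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TomcatStatusProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port -- the same Path and XML=true parameter configured above
        URL url = new URL("http://localhost:8080/manager/status?XML=true");
        // Placeholder credentials, matching the HTTP Authorization Manager entry
        String auth = Base64.getEncoder()
                .encodeToString("tomcat:tomcat".getBytes(StandardCharsets.UTF_8));

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", "Basic " + auth);

        // The response is an XML document describing JVM memory and connector thread usage
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```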


Comment: I know "View Results Tree" and "View Results in Table" are used a lot; there just isn't much online about this Monitor Results listener, and most articles only cover the Aggregate Report.

 

Other notes:

1. The Error% in the Aggregate Report represents the no-response rate.

2. Don't go straight to 1000 users in a test; increase the load step by step, e.g. 10, 100, 200, 500, 1000, and watch the error rate as the load increases.

3. Run each case three times and take the average, to rule out external interference.

4. Judge system performance from metrics such as throughput, response time, CPU load, and memory usage.

5. Stress Testing determines the maximum service level a system can provide, by identifying the system's bottlenecks or the points at which it can no longer accept load.

6. The purpose of concurrent performance testing shows in three aspects: selecting representative, critical business operations based on the real business and designing test cases around them, in order to evaluate the system's current performance;

7. At the same time, record the time of each transaction, and the middleware server and database status at peak load.

8. The main test indicators include transaction processing performance indicators and UNIX resource monitoring. The transaction processing performance indicators include: transaction result, transactions per minute, transaction response time (Min: minimum server response time; Mean: average server response time; Max: maximum server response time; StdDev: the deviation of the server response times for the transaction, the larger the value the greater the deviation; Median: median response time; 90%: the server response time within which 90% of transactions completed), and the number of concurrent virtual users.

9. The monitored test indicators include transaction processing performance and the resources of UNIX (Linux), Oracle, and Apache.

10. Is a fatigue test needed?

11. A benchmark collects reproducible results in a relatively short time. The best way to benchmark is to change only one parameter per test. For example, if you want to know whether increasing JVM memory affects your application's performance, increase the JVM memory in increments (for example, from 1024 MB to 1224 MB, then 1524 MB, and finally 2024 MB), collect the environment data and record the information at each stage, and then move on to the next stage. This keeps things traceable when you analyze the test results. In the next section I will explain what a benchmark is and the best parameters for running one.
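A small aside of mine for such a memory benchmark: at each stage, verify that the JVM really received the heap you configured, for example:

```java
public class HeapCheck {
    public static void main(String[] args) {
        // Reports (approximately) the max heap the JVM was started with via -Xmx
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + " MB");
    }
}
```

Run it with each setting, e.g. java -Xmx1224m HeapCheck, and record the output alongside that stage's test results.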

12. The key to a benchmark is obtaining consistent, reproducible results. Notice that throughput climbs at a steady rate and then stabilizes at a certain point. Once all the threads on the server are in use, incoming requests are no longer processed immediately; they are placed in a queue and handled when a thread becomes idle. When the system reaches this saturation point, server throughput stays flat: it has hit the upper limit for the system under the given conditions. Notice also that as the execution queue (Figure 2) begins to grow, response time grows at the same time, because requests can no longer be processed promptly.
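A handy way to reason about this plateau (my addition, not from the original article) is Little's Law for a closed system: with N concurrent users, throughput X, response time R, and think time Z,

N = X × (R + Z)

Once X is pinned at its saturation ceiling, adding users can only show up as growth in R, which is exactly the queueing behaviour described above.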

13. For a given test, you should take the averages of throughput and response time. The only way to obtain accurate values for them is to load all users at once and then run continuously for a predetermined period of time.

14. Correspondingly, there is the "ramp-up" test.

In a ramp-up test, users are staggered upward (some new users are added every few seconds). A ramp-up test cannot produce accurate and reproducible averages, because each increment of users adds to the load and the system is constantly changing. Therefore the flat run is the ideal model for obtaining benchmark data.

15. This is not to belittle the value of ramp-up tests. In fact, a ramp-up test is very useful for finding the range over which to run flat tests afterwards. The advantage of a ramp-up test is that you can see how the measured values change as the system load changes. You can then choose the range for the later flat runs accordingly (a small sketch of the difference follows).
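As a rough illustration of ramp-up versus flat (my sketch, not from the article; the user count, pause length, and worker body are all made up):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RampUpDemo {
    // Stand-in for one virtual user's work, e.g. firing HTTP requests in a loop
    static Runnable user = () ->
            System.out.println(Thread.currentThread().getName() + " running");

    public static void main(String[] args) throws InterruptedException {
        int users = 10;
        ExecutorService pool = Executors.newFixedThreadPool(users);

        // Ramp-up: start one new user every 500 ms, so the load grows gradually
        for (int i = 0; i < users; i++) {
            pool.execute(user);
            Thread.sleep(500);
        }
        // A flat run would submit all 10 users with no sleep instead,
        // and measure only during the steady-state window.
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}
```

In JMeter itself this staggering corresponds to the Ramp-Up Period field of the Thread Group.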

16. This phenomenon occurs when almost all test users perform the same operation at the same time. It produces very unreliable and inaccurate results, so some measure must be taken to prevent it. There are two ways to obtain accurate measurements from results of this type. If the test can run for a long time (sometimes several hours, depending on the duration of the users' operations), the randomness of events will eventually cause the server throughput to "level off". Alternatively, you can measure only between two points where the waveform has settled. The disadvantage of that method is that the window in which data can be captured is very short.

17. For example, a ramp-up test can first determine the range of users the system can support. Once that range is determined, run a series of flat load tests at different numbers of concurrent users within it to pin down the system's capacity more accurately.

18. A feasible method is to test the server under different levels of load, obtain the system's optimal load and maximum load from the test data in that environment, and, from the test data on resource consumption versus load, estimate and work out the relationships between them.

19. For scalability testing, the usual approach is to analyze system performance and hardware resource consumption as the amount of concurrency changes, and to derive a capacity model mathematically.

20. Scalability Testing: for a system in a given environment, the optimal number of concurrent users and the maximum number of concurrent users are objective facts, but the pressure the system faces may keep growing over time after it goes live. For example, on an online shopping site, registered users keep growing, and visits to query product information and purchase goods keep increasing too; what kind of scheme should we adopt to scale the system out without interrupting the service it provides to its users?

21. JMeter places no upper limit on the number of concurrent users (http://bbs.51testing.com/thread-264979-1-3.html).

 

Comment: the points above are in fact the main indicators of performance testing, plus the line of thinking (it's not enough to just get JMeter running; you need to know how to set the number of threads, how to run, and what to check).

 

Well, that's basically everything recorded. Of course there is much, much more to this topic than can be recorded in one blog post; I am recording it here as a general process, and if I can't find something later I can always open a part two, heh.

BTW: http://shijianwu1986-163-com.iteye.com/blog/507888 (this one explains it in great detail, very good; recording it here)

https://blog.csdn.net/u011002547/article/details/77838479


Origin: blog.csdn.net/m0_37477061/article/details/91489416