JMeter interface testing and performance testing

At the time of writing, the latest version is 5.0, which requires Java 8 or higher. After downloading and extracting the archive, open the \apache-jmeter-5.0\bin\ directory and double-click ApacheJMeter.jar to start JMeter.

1. Create a test task

To add a thread group, right-click the Test Plan and choose Add -> Threads (Users) -> Thread Group from the context menu. A thread group has three main parameters: Number of Threads, Ramp-Up Period, and Loop Count.


Number of Threads: sets the number of virtual users. Each virtual user is simulated by one thread, so the number of threads equals the number of virtual users.

Ramp-Up Period: the time, in seconds, over which the configured threads are started. If the number of threads is 100 and the ramp-up period is 20 seconds, all 100 threads start within 20 seconds, i.e. an average of 5 threads per second.

Loop Count: the number of requests each thread sends. If the number of threads is 100 and the loop count is 2, each thread sends 2 requests, for a total of 100 * 2 = 200 requests. If the "Forever" checkbox is selected, all threads keep sending requests in a loop until the Stop button on the toolbar is clicked or the configured run time elapses.

For this interface test, the default value of 1 is used.
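The thread-group arithmetic above can be sketched as follows (the values are the illustrative ones from the text, not JMeter defaults):

```python
# Illustrative thread-group arithmetic using the example values above.
threads = 100       # Number of Threads (virtual users)
ramp_up = 20        # Ramp-Up Period in seconds
loops = 2           # Loop Count

threads_per_second = threads / ramp_up   # how fast threads are started
total_requests = threads * loops         # total samples sent in the run

print(threads_per_second)  # 5.0 threads started per second
print(total_requests)      # 200 requests in total
```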

2. Add a GET HTTP request

Right-click the thread group and choose Add -> Sampler -> HTTP Request from the context menu.

Protocol: the protocol used to send the request to the target server, either HTTP or HTTPS; defaults to HTTP if left blank.

Server name or IP and port: the target server's address and port number.

Content encoding: defaults to ISO-8859-1.

Method: the HTTP request method to use.

Path: the target path of the request.

Parameters: the names and values of the query parameters.
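For reference, the fields above are just the pieces of a GET URL. A minimal sketch in Python, where the host, port, path, and parameter are hypothetical examples:

```python
# Sketch of the GET request JMeter assembles from these fields.
# The server address, path, and parameter below are made up.
from urllib.parse import urlencode, urlunsplit

protocol = "http"              # Protocol (defaults to HTTP)
server = "192.168.1.10:8080"   # Server IP and port
path = "/api/conference"       # Path
params = {"id": "1001"}        # Parameters

url = urlunsplit((protocol, server, path, urlencode(params), ""))
print(url)  # http://192.168.1.10:8080/api/conference?id=1001
```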

3. Add a POST HTTP request

The setup is the same as above; only the request method differs. This time POST is used, so the parameters are sent in the request body to the URL of the POST request.

Enter the parameters according to the interface document provided by the developers.
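Unlike the GET request, the POST parameters end up in the request body rather than the URL. A small sketch, with hypothetical parameter names:

```python
# Sketch of how form parameters are encoded into a POST body.
# The parameter names and values below are made up.
from urllib.parse import urlencode

form = {"title": "weekly sync", "room": "A-101"}
body = urlencode(form).encode("iso-8859-1")  # JMeter's default content encoding

print(body)  # b'title=weekly+sync&room=A-101'
```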

4. Add assertion

Right-click each request (here, the conference-query and add-conference requests) and choose Add -> Assertions -> Response Assertion.

Select "Text Response" as the field to test, then enter the pattern the response data must match.
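In substring mode, the assertion behaves roughly like a containment check on the response text. A minimal analogue, using a made-up response body:

```python
# Rough analogue of JMeter's Response Assertion in substring mode:
# the sample passes if the expected pattern appears in the response text.
# The response body below is a made-up example.
def response_assertion(response_text: str, pattern: str) -> bool:
    return pattern in response_text

body = '{"code": 0, "msg": "success"}'
print(response_assertion(body, '"msg": "success"'))  # True  -> green in the results tree
print(response_assertion(body, '"msg": "error"'))    # False -> red in the results tree
```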

5. Add a View Results Tree listener

Right-click the release system project and choose Add -> Listener -> View Results Tree. After running:

Successful requests are marked green and failures red, and you can inspect the data returned for each test case.

6. Add a View Results in Table listener

It shows elapsed time, response size in bytes, result status, and sample information in more detail.

JMeter performance test

Because JMeter's recording support is limited, a common approach is to record with Badboy, export the recording as a JMeter script, open it in JMeter, and add listeners to view the results.

Double-click the Badboy icon to open it; you will see the following interface:

1. Start recording

In the address bar (framed in red in the figure), enter the URL of the web application you want to record, then click the red dot button to start recording. Once recording has started, operate the application under test directly in Badboy's embedded browser (the right side of the main window); every action is recorded in the editing pane on the left side of the main window, as shown in the figure below.

Export the script in JMeter format.

The number of threads is the number of users sending requests, and the Ramp-Up Period (in seconds) is the total time over which those threads are started. If the number of requests is 5 and the ramp-up period is 10, a new thread starts every 10 / 5 = 2 seconds. A ramp-up period of 0 means all threads start at once, i.e. fully concurrent requests.
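The ramp-up arithmetic from this paragraph:

```python
# Ramp-up arithmetic using the example values above.
threads = 5   # number of users sending requests
ramp_up = 10  # Ramp-Up Period in seconds

interval = ramp_up / threads  # time between successive thread starts
print(interval)  # 2.0 seconds

# A ramp-up period of 0 would mean all threads start simultaneously.
```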

Finally, clear the "Forever" checkbox for the loop count and enter 1. This value tells JMeter how many times to repeat the test; with a value of 1, JMeter runs the test only once. To run the test plan continuously, select the "Forever" checkbox.

As shown in the figure below, simulate 1000 concurrent users running a login test.

2. Add monitoring

Listeners are mainly used to view test results and can present them in different forms. As an example, add a View Results in Table listener, an Aggregate Report, and a graph report: Thread Group -> Add -> Listener -> Aggregate Report (and likewise for the graph report and the table view), as shown in the following figure:


Once the run completes, you can view the corresponding test results. The following report was obtained with the 1000 threads of the thread group started at once:

The meanings of the parameters in the above chart are as follows:

1. Samples: the total number of requests sent to the server.
2. Latest Sample: a time value; the server's response time for the most recent request.
3. Throughput: the number of requests the server processes per minute.
4. Average: the total elapsed time divided by the number of requests sent to the server.
5. Median: a time value; half of the server's response times are below it and half above.
6. Deviation: the variation in server response time, a measure of dispersion, i.e. how spread out the data is.

With 1000 concurrent users, the average response time is 1630 ms, the throughput is 3,940.887 requests/minute, and the median response time is 230 ms.

The meaning of the chart is explained as follows:

Label: the request type, such as HTTP or FTP requests.

#Samples: the total number of samples sent to the server, matching the sample count in the graph report.

Average: the average response time, i.e. the total running time divided by the number of requests sent to the server.

Median: the 50th-percentile response time; half of the server's response times are below this value and half above.

90% Line: the 90th-percentile response time; 90% of requests were answered within this time.

Min: a time value; the server's minimum response time.

Max: a time value; the server's maximum response time.

Error%: the percentage of failed requests, i.e. failed requests / total requests in this test.

Throughput: the number of requests the server processes per unit time; check whether the unit is seconds or minutes. By default it is requests per second.

KB/sec: the amount of data received from the server per second.

For the data to be valid, Error% in the aggregate report must be 0.00%; otherwise not all requests succeeded. Here the average response time is 1630 ms and the median response time is 230 ms.
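To make the columns concrete, here is the same arithmetic applied to a small, made-up set of response times. (Percentile conventions vary slightly between tools; this sketch uses one simple definition, not necessarily JMeter's exact formula.)

```python
# Computing aggregate-report statistics by hand from a small,
# made-up set of response times in milliseconds.
import statistics

response_ms = [120, 150, 200, 230, 260, 300, 400, 800, 1200, 1630]
errors = 1          # hypothetical: one failed request
duration_s = 2.0    # hypothetical wall-clock duration of the test

average = sum(response_ms) / len(response_ms)            # Average
median = statistics.median(response_ms)                  # Median
ninety_line = sorted(response_ms)[int(len(response_ms) * 0.9) - 1]  # 90% Line
error_pct = errors / len(response_ms) * 100              # Error%
throughput = len(response_ms) / duration_s               # Throughput (req/s)

print(average)      # 529.0
print(median)       # 280.0
print(ninety_line)  # 1200  (90% of samples are at or below this)
print(error_pct)    # 10.0
print(throughput)   # 5.0
```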

Errors occurred in the run shown above, so the data is not accurate. According to the log and the table, some connection requests timed out partway through the run, and those samples show a response time of 0. As shown below:

During testing, average response time, throughput, and the number of concurrent connections are important measures for our performance test. In the aggregate report in particular, the 90% Line, i.e. the 90th-percentile response time, is also very valuable for analysis: it means that 90% of the sent requests were answered within that time, so 90% of users experience a response time at or below that value. This makes it a useful reference point for system performance analysis.

If the number of threads is reduced to 500 concurrent requests, no errors occur and the resulting performance data is accurate.

In short, you must repeatedly tune, compare the data, and approach the optimal state to determine the supportable concurrent load.


Origin blog.csdn.net/OKCRoss/article/details/131143034