Interface performance test report

1 Overview

1.1 Performance test concept

Performance testing exercises a system's performance indicators by using automated tools to simulate normal, peak, and abnormal load conditions. Load testing and stress testing are both forms of performance testing, and the two can be combined. Load testing determines how the system performs under various workloads; its goal is to observe how the system's performance indicators change as the load gradually increases. Stress testing determines the bottleneck or unacceptable performance point of a system in order to find the maximum service level the system can provide.

1.2 Purpose of performance test

The purpose of performance testing is to verify whether the software system meets the performance indicators proposed by the user, and to find the performance bottlenecks in the system so that the software can be optimized.

1.3 Performance test objectives

From the perspective of security, reliability, and stability, identify performance defects, determine the number of concurrent users the system can best withstand, and observe the system under a long-running load at that concurrency (for example, 100 concurrent users) in order to decide how to optimize the system.

The performance test mainly includes the following aspects:

(1) Evaluate the capacity of the system: the load and response-time data obtained in the test can be used to validate the planned capacity model and support decision making.

(2) Identify weaknesses in the system: a controlled load can be increased to an extreme level until the system breaks, exposing bottlenecks and weak points so they can be repaired.

(3) System tuning: run the test repeatedly to verify whether tuning adjustments have achieved the expected results, thereby improving performance.

(4) Detect problems in the software: long-running test execution can cause a program to fail, for example because of memory leaks, revealing hidden defects or conflicts in the program.

(5) Verify stability and reliability: running the test for an extended period under a production-level load is the only way to assess whether the system's stability and reliability meet requirements.

1.4 Common classifications of performance testing

(1) Load test: a load test discovers design errors or verifies the load capacity of the system by measuring its performance under heavy resource usage. In this kind of test, the test subject is made to carry different workloads so that its performance behavior, and its ability to continue operating normally under each condition, can be evaluated. The purpose of a load test is to determine and ensure that the system still operates normally even when the maximum expected workload is exceeded. A load test also evaluates performance characteristics such as response time, transaction processing rate, and other time-related indicators.

(2) Stress test: in software engineering, a stress test continuously puts pressure on the system; it determines the bottleneck or unacceptable performance point of a system in order to find the maximum service level the system can provide. For example, a web site may be tested under increasing load to find the point at which responses degrade or fail.

(3) Capacity test: a capacity test determines the maximum number of simultaneous online users the system can handle.

2 System introduction

2.1 Reference materials

Performance-related test data

2.2 Tests to be done

Stress test

Benchmark test

2.3 Test environment

(1) Hardware environment

(2) Software environment

3 Test indicators

Test time: October 10th to October 11th, 2018

Test scope: performance testing of the server's performance indicators while clients request information

3.1 JMeter indicators

(Because Apache's performance testing tool JMeter collects relatively few performance indicators, the data below uses a set of representative ones.)

(1) Average/ms: the average response time for the server to process a transaction (the time from the client sending a request until the server has processed it and returned the result to the client)

 

Response time = presentation time + network transmission time + server-side response time + application delay time

There is a widely used standard for user response time on the Internet: the 2/5/10-second principle. A response within 2 seconds is considered a "very attractive" user experience, a response within 5 seconds is considered "good", and a response within 10 seconds is considered "bad". If there is no response after more than 10 seconds, most users will consider the request to have failed.
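To make the principle concrete, a small illustrative sketch that buckets a measured response time according to the 2/5/10-second rule:

```python
def classify_response_time(seconds: float) -> str:
    """Bucket a response time per the 2/5/10-second principle above."""
    if seconds <= 2:
        return "very attractive"
    if seconds <= 5:
        return "good"
    if seconds <= 10:
        return "bad"
    return "failed (most users give up)"

# The store-visit average from section 5.1.1 (10556 ms) lands in the last bucket:
print(classify_response_time(10.556))  # -> failed (most users give up)
```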

(2) Throughput/s: the number of requests the server processes per second (requests/sec)

(3) Error%: the percentage of samples that failed

(4) KB/s: the amount of request data the server receives from clients per second, in kilobytes
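For reference, the four indicators above can be recomputed from raw sample data. A minimal sketch, assuming `samples` holds hypothetical (elapsed_ms, success, bytes_received) tuples such as might be parsed from a JMeter .jtl results file:

```python
# Hypothetical raw samples: (elapsed_ms, success, bytes_received)
samples = [(850, True, 2048), (1200, True, 4096), (3100, False, 512)]
duration_s = 60.0  # assumed length of the measurement window, in seconds

average_ms = sum(s[0] for s in samples) / len(samples)                # Average/ms
throughput = len(samples) / duration_s                                # Throughput/s
error_pct = 100 * sum(1 for s in samples if not s[1]) / len(samples)  # Error%
kb_per_s = sum(s[2] for s in samples) / 1024 / duration_s             # KB/s

print(f"Average/ms={average_ms:.0f}  Throughput/s={throughput:.2f}  "
      f"Error%={error_pct:.2f}  KB/s={kb_per_s:.2f}")
```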

3.2 Hardware indicators

(1) % Processor Time: CPU usage (the average should stay below 75%; below 50% is better)

(2) System: Processor Queue Length: the number of threads waiting in the processor queue (the average should be below 2 per processor)

(3) Memory: Pages/sec: the number of pages read from or written to disk per second to resolve hard page faults (the average should be below 20; below 15 is better)

(4) PhysicalDisk: % Disk Time: disk usage (the average should be below 50%)
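As an illustration of how these thresholds can be applied, a small sketch that checks monitored averages against the limits listed above; the observed values are made-up placeholders, not measurements from this test:

```python
# Limits from section 3.2; observed averages are placeholder values.
limits = {
    "% Processor Time": 75,        # below 50 is better
    "Processor Queue Length": 2,   # per processor
    "Memory Pages/sec": 20,        # below 15 is better
    "% Disk Time": 50,
}
observed = {
    "% Processor Time": 62.0,
    "Processor Queue Length": 1.3,
    "Memory Pages/sec": 11.0,
    "% Disk Time": 38.0,
}

for counter, limit in limits.items():
    verdict = "OK" if observed[counter] < limit else "OVER LIMIT"
    print(f"{counter}: {observed[counter]} (limit {limit}) -> {verdict}")
```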

4 Testing tools and testing strategies

Test tool: Apache JMeter

Test strategy: set the number of concurrent users according to the company's actual situation and the distribution of its business

Test data: registered users, store data, product data

Data description: the selected data are representative

Test scenarios:

(1) Visit the stores in the airport's east and west halls and browse store merchandise

(2) Full business-link test: register, log in, add a credit card, submit an order, pay

(3) Order query
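For reference, once these scenarios are saved as a JMeter test plan they can be replayed in non-GUI mode. A minimal sketch; the .jmx and .jtl file names are hypothetical, while -n, -t, and -l are standard JMeter command-line options:

```python
import subprocess

# Run JMeter in non-GUI mode against a hypothetical test plan, collecting the
# raw sample log from which tables like those in section 5 are summarized.
subprocess.run([
    "jmeter",
    "-n",                   # non-GUI (command-line) mode
    "-t", "full_link.jmx",  # hypothetical plan covering the scenarios above
    "-l", "results.jtl",    # raw results log
], check=True)
```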

5 Test result data and screenshots

Prerequisite: assume 50 users accessing the backend concurrently.
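In the actual tests this load came from JMeter thread groups; purely as an illustration of what concurrent virtual users mean, a rough sketch in which 50 worker threads each issue one request to a placeholder URL and report the average elapsed time:

```python
import concurrent.futures
import time
import urllib.request

URL = "http://example.com/api/stores"  # placeholder endpoint, not the real backend

def one_request(_):
    """Issue a single request and return its elapsed time in milliseconds."""
    start = time.monotonic()
    urllib.request.urlopen(URL, timeout=30).read()
    return (time.monotonic() - start) * 1000

# 50 worker threads stand in for 50 concurrent virtual users.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    elapsed = list(pool.map(one_request, range(50)))
print(f"average {sum(elapsed) / len(elapsed):.0f} ms over {len(elapsed)} requests")
```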

5.1 JMeter performance indicators

5.1.1 Store visit test results and analysis

Overall test situation:

| Indicator | Test value |
| --- | --- |
| Stress test duration | 20 minutes 10 seconds |
| Concurrency | 100 threads (equivalent to 100 virtual users) |
| Average response time | 10556 ms |
| Maximum response time | 41595 ms |
| Throughput | 7.3/sec |
| Error rate | 0.01% |

Details of each test:

1. Average/ms

Data analysis:

This graph shows the average response time for the server to process requests. Ideally, the average transaction response time would stay relatively flat as the number of concurrent users increases. The graph clearly shows that as the number of concurrent users increases, the transaction response time increases as well.

 

2. Throughput/s

Data analysis:

This graph shows the number of requests the server processes per second. Ideally, that number would grow as the number of users increases. The graph shows that the number of requests processed by the server did not increase as the number of users grew.

 

3. Graph of total requests and number of users

Data analysis:

The average response time exceeds expectations; once there are more than 60 concurrent users, the user experience is bad.

 

5.1.2 Test results and analysis of order submission and payment

Overall test situation:

| Indicator | Test value |
| --- | --- |
| Stress test duration | 19 minutes 17 seconds |
| Concurrency | 30 threads (equivalent to 30 virtual users; actual order-and-payment concurrency was only 20) |
| Average response time | 15673 ms (invoice + pay) |
| Maximum response time | 27040 ms (invoice + pay) |
| Throughput | 2.2/sec (invoice + pay) |
| Error rate | 1.62% (invoice + pay) |

Details of each test:

1. Average/ms

Data analysis:

This graph shows the average response time for the server to process requests. Ideally, the average transaction response time would stay relatively flat as the number of concurrent users increases. The graph clearly shows that as the number of concurrent users increases, the transaction response time increases as well.

2. Throughput/s

Data analysis:

This graph shows the number of requests the server processes per second. Ideally, that number would grow as the number of users increases. Here the graph shows that the number of requests processed by the server did increase as the number of users grew.

3. Graph of total requests and number of users

Error returned by the pay interface: {"code":"102","message":"Reject by processor: Error 1062: Duplicate entry '1000-153924520194' for key 'acq'"}
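Error 1062 is MySQL's duplicate-key error: under concurrent payment requests, two inserts produced the same value for the table's unique key 'acq'. A minimal sketch reproducing the same failure class with Python's built-in sqlite3 (the table and column names are stand-ins for the real schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payment (acq TEXT UNIQUE)")  # stand-in for the 'acq' unique key
db.execute("INSERT INTO payment VALUES ('1000-153924520194')")
try:
    # A second concurrent payment generating the same key value collides:
    db.execute("INSERT INTO payment VALUES ('1000-153924520194')")
except sqlite3.IntegrityError as exc:
    print("duplicate key, the same failure class as the pay error above:", exc)
```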

5.1.3 View order test results and analysis

Overall test situation:

| Indicator | Test value |
| --- | --- |
| Stress test duration | 15 minutes 19 seconds |
| Concurrency | 50 threads |
| Average response time | 9467 ms |
| Maximum response time | 30073 ms |
| Throughput | 3.9/sec |
| Error rate | 0.03% |

Details of each test:

1. Average/ms

Data analysis:

This graph shows the average response time for the server to process requests. Ideally, the average transaction response time would stay relatively flat as the number of concurrent users increases. The graph clearly shows that as the number of concurrent users increases, the transaction response time increases as well.

2. Throughput/s

Data analysis:

This graph shows the number of requests the server processes per second. Ideally, that number would grow as the number of users increases. The graph shows that the number of requests processed by the server did not increase as the number of users grew.

3. Graph of total requests and number of users

5.2 Hardware indicators

Observing the two main backend servers, CPU usage stayed between 50% and 65% most of the time, reached 70% to 85% during peaks, and hit a maximum of 92.2%.

6 Test conclusion

6.1 JMeter performance indicator analysis

The most direct JMeter performance indicators show that performance as experienced over the network is lacking, which objectively reflects that there is room for optimization in the server's processing capability.

1. Store visit test: mainly simulates the scenario of users visiting an airport store without logging in. With 100 simulated users continuously querying store information at the same time, the response time under this load test was too long.

2. Order submission and payment test: mainly simulates 20 users placing orders concurrently. The combined throughput of 2.2/sec works out to about 1.1/sec for each of the backend invoice and pay interfaces, meaning each interface handles only about one request per second. Whether there is room for optimization needs to be evaluated.

3. Order query interface test: mainly simulates the client polling the order interface every 2 seconds. The response time is also too long: because responses take longer than the 2-second polling interval, when an order's status changes the displayed status may keep flipping between the old and new values. Whether the polling frequency can be adjusted should be considered (see the sketch below).
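A sketch of the client-side polling loop described above, with the interval exposed as a parameter so a lower frequency can be tried (the URL is a placeholder):

```python
import time
import urllib.request

def poll_order(url: str, interval_s: float = 2.0, max_polls: int = 5):
    """Poll an order-status endpoint at a fixed interval."""
    for _ in range(max_polls):
        payload = urllib.request.urlopen(url, timeout=10).read()
        print("order status payload:", payload[:80])
        time.sleep(interval_s)

# Given the measured response times, an interval longer than 2 s may be warranted:
# poll_order("http://example.com/api/order/123", interval_s=5.0)
```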

Optimization proposal: to be determined

6.2 Analysis of server hardware information monitoring data

The analysis combines the JMeter performance indicators with the various hardware monitoring graphs.

Optimization suggestion: none
