Project Practice 3: Performance Test Case Execution

Insert image description here


1. Preparation of performance test environment

1. Performance test environment: server configuration

1. Hardware model: keep it as consistent with production as possible
2. Number of servers:
The production environment may have more than 20 machines, and large companies may even have thousands. Does the performance test environment need to include the full number of servers? Not required.

Benchmark test:
The results can be extrapolated by analogy.
For example: if 1 machine can carry 1,000/s of concurrency, then in theory 100 machines can carry 100,000/s of concurrency,
so the number of servers in the performance test environment is not strictly required to match production.

2. Data preparation

a. Pure database SQL
b. Use code to generate the data and import it as SQL statements: the more common approach (a sketch follows the notes below)
Tips:

Derive it from existing data, or
generate it randomly

c. Use code/tools to call the interface/UI to generate data automatically

Notes:
a. Pay attention to the status distribution of the data
b. For example, order data has multiple statuses; try to match the distribution found in the production environment
Data distribution determines performance at the database level
c. What about extreme data situations?
Performance testing is based on normal scenarios with a large number of (scattered) users. If the performance scenario is divorced from the business scenario, such a performance test is unnecessary.
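
A minimal sketch of approach (b) above: generating order data as SQL INSERT statements whose status distribution roughly matches production. The table name, columns, and the 70/20/10 split are hypothetical examples, not values from the project.

```java
import java.util.Random;

// Generate INSERT statements for a hypothetical "orders" table with a
// status distribution that (by assumption) mirrors production: 70% paid,
// 20% shipped, 10% cancelled.
public class OrderDataGenerator {
    public static void main(String[] args) {
        String[] statuses = {"PAID", "SHIPPED", "CANCELLED"};
        double[] weights  = {0.7,     0.2,       0.1};   // assumed production distribution
        Random random = new Random(42);

        for (int i = 1; i <= 10; i++) {                  // raise the count for real data volumes
            double r = random.nextDouble();
            String status = r < weights[0] ? statuses[0]
                          : r < weights[0] + weights[1] ? statuses[1] : statuses[2];
            System.out.printf(
                "INSERT INTO orders (id, user_id, status) VALUES (%d, %d, '%s');%n",
                i, 1000 + random.nextInt(9000), status);
        }
        // Redirect the output to a .sql file and import it into the test database.
    }
}
```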

2. Performance test execution tool (JMeter)

Version: 5.4.1
Depends on the Java environment: JDK 1.8
Third-party plug-ins:

Put them in the following directory:
apache-jmeter-5.4.1\lib\ext

Thread model: multi-threaded parallel execution
Number of threads: how many virtual users work at the same time; what they work on is the content of the thread group
Multiple threads start at the same time
Within the same thread, the tasks in the thread group are executed in order

1. User-defined variables:

Insert image description here

2. Transaction controller:

Usage scenario: when the scenario under test contains multiple interfaces and overall statistics are needed,
multiple operations can be counted as a single transaction

Insert image description here
If any operation in the transaction fails, the whole transaction is marked as failed: even if most interfaces in the transaction have no problems, a single interface error makes the entire transaction fail.

3. View test results

a. Summary report

Disadvantage: only the final results can be viewed; the process cannot be seen, so it is impossible to analyze at which stage the system reached its performance inflection point. The report fields are explained below, followed by a sketch of how they are computed from the raw samples.
Insert image description here
Insert image description here

Insert image description here

Label: the interface name.

Samples: the number of requests sent to the interface.

Average (ms): taking the first interface as an example, 1,000 requests were sent in total; summing the response time of each request and dividing by 1,000 gives the average response time.

Min (ms): the fastest response; 10 ms for the first interface, 1 ms for the second, and so on.

Max (ms): the slowest response; 115 ms for the first interface, 15 ms for the second, and so on.

Std. Dev.: indicates how much the response time fluctuates, i.e. the difference between the response time of the first request, the second request, and so on.

Error %: the number of failed requests / the total number of requests.

Throughput: the number of requests completed per second; the whole path from JMeter sending a request to the server processing it and returning the response to JMeter counts as 1 → business throughput.
The homepage "click the hot list" interface: the system can handle at most 28.6 requests per second.
For the entire transaction: the system can handle at most 19.2 requests per second.
Throughput here is not broken down into TPS and QPS
(TPS: data-change scenarios; QPS: data-query scenarios).

Received KB/sec: network receive speed → network throughput.

Sent KB/sec: network send speed → network throughput.

Avg. Bytes: average response size → network throughput.
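
A minimal sketch of how the summary-report fields above are derived from raw sample results. The sample times, error count, and test duration are illustrative numbers, not the report data quoted above.

```java
import java.util.Arrays;

// Compute average/min/max/std-dev from per-request elapsed times, error %
// from the failure count, and throughput from requests completed per unit
// of test time - the same quantities the summary report shows.
public class SummaryReportFields {
    public static void main(String[] args) {
        long[] elapsedMs = {10, 35, 28, 115, 22, 40};  // per-request response times (ms)
        int errors = 1;                                // failed requests
        double testDurationSec = 0.21;                 // wall-clock span of these samples

        double avg = Arrays.stream(elapsedMs).average().orElse(0);
        long min = Arrays.stream(elapsedMs).min().orElse(0);
        long max = Arrays.stream(elapsedMs).max().orElse(0);
        double variance = Arrays.stream(elapsedMs)
                .mapToDouble(t -> (t - avg) * (t - avg)).average().orElse(0);
        double stdDev = Math.sqrt(variance);                    // how much response time fluctuates
        double errorPct = 100.0 * errors / elapsedMs.length;    // errors / total requests
        double throughput = elapsedMs.length / testDurationSec; // completed requests per second

        System.out.printf("avg=%.1fms min=%dms max=%dms stdDev=%.1fms error=%.1f%% throughput=%.1f/s%n",
                avg, min, max, stdDev, errorPct, throughput);
    }
}
```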

b. Gradient thread group

Insert image description here

c. Graphics plug-in

3. Test case execution process

1. Benchmark test

Scenario 1: online server resource planning
Performance baseline: use very low concurrency to measure the resources and performance indicators required for each user operation
Benchmark test result analysis:
Insert image description here
1024 KB → 1 MB
1024 MB → 1 GB
Received KB/sec = 3341.10 KB/sec: this means roughly 3 MB of network bandwidth is used per second while the concurrency reaches 40 requests per second
1. Based on the network (receive) throughput: if the server's bandwidth cannot support transmitting about 3 MB of data per second, the server cannot reach 40/s of throughput
2. One thread simulates 40/s of concurrency; to simulate 4,000/s of concurrency, about 100 threads are theoretically needed to simulate the virtual users (a quick check follows below)
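
A minimal back-of-envelope sketch of the bandwidth reasoning above, using the benchmark numbers quoted (3341.10 KB/sec at 40 requests/sec) and assuming network usage scales linearly with throughput; the class name is hypothetical.

```java
// Estimate how much network throughput the target load would need,
// given what the benchmark consumed per request.
public class BandwidthCheck {
    public static void main(String[] args) {
        double receivedKBPerSec = 3341.10;  // from the benchmark summary report
        double benchmarkRps = 40.0;         // throughput during the benchmark
        double targetRps = 4000.0;          // target concurrency from requirement analysis

        double kbPerRequest = receivedKBPerSec / benchmarkRps;        // ~83.5 KB per request
        double requiredMBPerSec = kbPerRequest * targetRps / 1024;    // 1024 KB -> 1 MB
        System.out.printf("per request: %.1f KB, needed at %.0f/s: %.0f MB/sec%n",
                kbPerRequest, targetRps, requiredMBPerSec);
        // If the network bandwidth cannot sustain this rate, the target
        // throughput cannot be reached no matter how many threads are added.
    }
}
```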

2. Load test

Insert image description here
Continuously increase the concurrency pressure on the system until it can no longer meet our performance requirements

Response time
Throughput
Resource usage

Scenario 1: the online concurrency is expected to reach 4,000/s. Can the system withstand it?
4,000/s: obtained during performance requirement analysis.
Gradient (step-up) pressure test → the method used

Note: when the load test results differ from the estimate, adjust the number of threads.
For example, throughput: theoretically it should follow the system throughput curve.

a. Throughput model:

Review the bank example and compare the bank to a software system.

The bank has 10 teller windows (the windows will not increase, because system resources are limited), and each transaction takes 1 second to process.
In the 1st second, 1 person arrives; the throughput is 1
In the 2nd second, 5 people arrive; the throughput is 5
In the 3rd second, 8 people arrive; the throughput is 8
In the 4th second, 12 people arrive; the throughput is 10
In the 5th second, 20 people arrive; the throughput is 10, and the remaining people wait in line in the lobby

In the 5th second, of the 20 arrivals the first 10 are processed quickly, while the next 10 have to wait 1 second before being processed → the response time increases while the throughput stays the same. This does not mean there is a problem with the system (the sketch below models this).
The bank promises to complete every request within 3 seconds; if it takes longer than 3 seconds, the bank's system needs to be upgraded.
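
A minimal sketch of the bank analogy above: 10 windows at 1 second per transaction give a capacity of 10 requests per second, and arrivals beyond that capacity queue up. The arrival numbers are taken from the example; the class name is hypothetical.

```java
// Model the bank example: throughput caps at the window capacity while
// the waiting queue (and therefore response time) grows.
public class ThroughputModel {
    public static void main(String[] args) {
        int capacityPerSecond = 10;          // 10 windows, 1s each -> 10 req/s
        int[] arrivals = {1, 5, 8, 12, 20};  // people arriving in seconds 1..5
        int backlog = 0;                     // people still waiting in the lobby

        for (int second = 1; second <= arrivals.length; second++) {
            int demand = backlog + arrivals[second - 1];
            int processed = Math.min(demand, capacityPerSecond); // throughput this second
            backlog = demand - processed;                        // queue grows once demand > capacity
            System.out.printf("second %d: arrived=%d processed=%d waiting=%d%n",
                    second, arrivals[second - 1], processed, backlog);
        }
        // Throughput stays at 10/s from the 4th second onward while the
        // waiting queue keeps growing, matching the description above.
    }
}
```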

Insert image description here
Insert image description here

Based on the above: when the throughput stays the same while load keeps increasing, the response time will inevitably increase.

First possibility:

As more and more people arrive, the bank runs out of standing room; it becomes crowded and there is no air to breathe.
Eventually the tellers cannot breathe either: one teller faints, then a second... then all 10, and the system collapses. → The air and the lobby here stand for system resources.

Insert image description here

Second possibility:

Once the number of people reaches a certain level, the bank's security guard simply closes the door and no one else is allowed in.

Expected phenomenon:
Stage 1: concurrency increases → throughput also increases
Stage 2: concurrency increases → throughput levels off and no longer grows, but the response time gets longer

When the throughput stops growing, the system is considered unable to keep up, and new requests wait / pile up (the backlog occupies resources; resources are limited, so the backlog cannot grow forever and the system will eventually crash)

Stage 3 (crash): concurrency increases →
1. Throughput decreases:

the system keeps backlogging requests and runs out of resources

2. The request error rate rises:

requests cannot get through, or they time out
the system rejects new requests

Insert image description here
Benchmark summary report
1 thread → throughput of 40/sec
Insert image description here

Idea: based on the benchmark result, during the load test the response time should stay consistent with the benchmark, and the throughput should scale up in proportion to the number of threads.
Load test summary report
60 threads → the throughput is 134/sec; theoretically it should be 40/sec × 60 = 2,400/sec, which is far below the ideal value (see the sketch after the conclusion)

Insert image description here
Conclusion:
The interface is slow
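
A minimal sketch of the comparison above: the measured load-test throughput against the value the benchmark would predict if throughput scaled linearly with thread count. The numbers are the ones quoted from the two reports; the class name is hypothetical.

```java
// Compare expected vs. measured throughput to quantify how far the system
// falls short of linear scaling.
public class ScalingCheck {
    public static void main(String[] args) {
        double benchmarkPerThread = 40.0;  // 1 thread -> 40 req/s (benchmark)
        int threads = 60;                  // load-test thread count
        double measured = 134.0;           // load-test throughput (req/s)

        double expected = benchmarkPerThread * threads;   // 2,400 req/s if scaling were linear
        double efficiency = measured / expected * 100;    // roughly 5.6 %
        System.out.printf("expected=%.0f/s measured=%.0f/s efficiency=%.1f%%%n",
                expected, measured, efficiency);
        // A gap this large points to the interface (or a shared resource behind
        // it) as the bottleneck, which matches the conclusion above.
    }
}
```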

4. Performance test concepts: benchmark vs. load test

Benchmark test: collect the system's performance baseline under an extremely low concurrency scenario
Why is the number of benchmark threads 1?

Requirement on the number of threads: it must be a load the system can bear 100% (it could be 2, it could be 10); 1 is simply the smallest simulation unit

Load test: the purpose is to discover the system's actual processing capacity
Number of load test threads:

Based on the benchmark test, determine how much concurrency each thread can simulate per second; for example, in the benchmark test one thread can simulate 40/s of concurrency
Goal: test whether the system can handle a load of 4,000/s
Preliminary number of threads = target concurrency / concurrency simulated by a single thread (see the sketch below)
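
A minimal sketch of the formula above, using the numbers from this section (4,000/s target, 40/s per thread from the benchmark); the class name is hypothetical.

```java
// Estimate the initial load-test thread count from the benchmark result.
public class ThreadEstimate {
    public static void main(String[] args) {
        double targetConcurrency = 4000.0;   // target load from requirement analysis (req/s)
        double perThreadConcurrency = 40.0;  // one thread simulated 40 req/s in the benchmark

        // Preliminary number of threads = target concurrency / single-thread concurrency
        int threads = (int) Math.ceil(targetConcurrency / perThreadConcurrency);
        System.out.println("Preliminary thread count: " + threads); // 100
        // This is only a starting point; adjust the thread count when the
        // actual load-test results differ from the estimate.
    }
}
```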

5. JMeter-related settings

This configuration means starting 100 threads within 1 second, with each thread running the thread group once
Insert image description here
The specific work content is:
Insert image description here
3 threads each do the work 3 times: 9 executions in total
These three threads work at the same time
Loop count: indicates the number of times each thread repeats the work
Insert image description here

Insert image description here

JMeter execution order
1. Within the same thread, HTTP requests are sequential: task 1 is executed first, then task 2

Insert image description here
2. If three threads run these two tasks at the same time, there is no order across threads:
thread 1 may already be executing task 2 while thread 2 has just started task 1 (the sketch after the figure illustrates this)

Insert image description here
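
A minimal sketch, in plain Java rather than JMeter's internals, of the execution model described above: each virtual user (thread) runs the tasks of the thread group in order and repeats them for the loop count, while the threads themselves run in parallel with no ordering between them. The class and method names are hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Each thread executes task 1 then task 2 (in order), repeated "loops" times;
// across threads the output interleaves unpredictably.
public class ThreadGroupModel {
    static void sampler(String name, int user) {
        // Stand-in for an HTTP sampler; in JMeter this would be a real request.
        System.out.printf("user %d -> %s%n", user, name);
    }

    public static void main(String[] args) throws InterruptedException {
        int threads = 3;   // number of virtual users
        int loops = 3;     // loop count: how many times each user repeats the work
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        for (int u = 1; u <= threads; u++) {
            final int user = u;
            pool.submit(() -> {
                for (int i = 0; i < loops; i++) {   // within one thread: ordered, repeated
                    sampler("task 1", user);
                    sampler("task 2", user);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        // Within one user the output is always task 1 then task 2; across users
        // the lines interleave, matching the behaviour described above.
    }
}
```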


Origin blog.csdn.net/YZL40514131/article/details/134823234