How to use JMeter for performance testing

Table of contents

The concept of performance testing

Performance test types

Performance testing application scenarios (fields)

Commonly used indicators for performance testing

Performance testing process

Demand analysis

Set up a test environment

Test scenario design

Test case design and script development

Test data preparation

Performance test execution and management

Performance test result analysis and tuning

Test reporting and tracking


The concept of performance testing

Definition: Software performance is a non-functional characteristic of software. It is concerned not with whether the software can complete a specific function, but with how timely the function is completed.

As the definition shows, performance concerns the non-functional characteristics of the software, so performance testing generally starts only after functional testing is complete. A performance test is meaningful only once the system's basic functions have been verified and the system has become stable. In addition, the timeliness in the definition shows that performance is measurable: it can be expressed in time or other indicators. Performance testing, then, means using tools or other means to check whether certain indicators of the software meet the requirements.

Definition of performance testing: using automated testing tools to simulate various normal, peak, and abnormal load conditions and test the system's performance indicators.


Performance test types

  • Benchmark test: apply low load to the system, check its operating status, and record the relevant figures as a baseline for reference.
  • Load test: keep increasing the load on the system, or extend the duration under a given load, until one or more performance indicators reach a safety-critical value; the system is pushed to its bottleneck, providing reference data for tuning.
  • Stress test:
    (1) Stability stress test: under various given conditions (such as memory usage or the number of requests in a given period), examine the system's processing and response capability, including its fault tolerance and recovery ability.
    (2) Destructive stress test: keep increasing the load until the system crashes, to find the system's maximum bearing capacity.
  • Stability test: apply a certain business load to the system and run it for a period of time to check whether the system remains stable.
  • Concurrency test: check for deadlocks or other performance problems when multiple users access the same application, the same module, or the same data records at the same time.
  • Failure recovery test: for systems designed with redundant backup and load balancing, check whether the system can continue to serve users when a partial failure occurs.
  • Configuration test: adjust the software and hardware environment of the system under test to understand how much each environment affects system performance, in order to find the optimal allocation of system resources.


Performance testing application scenarios (fields)

Performance test application scenarios (fields) mainly include capability verification, capability planning, performance tuning, defect discovery, and performance benchmark comparison.

The following table briefly introduces and compares the respective uses and characteristics of these scenarios:

[Table: uses and characteristics of each application scenario]

The following table shows the relationship between performance test application areas and test methods:

[Table: performance test application areas vs. test methods]

Commonly used indicators for performance testing

1. Response Time

Definition: the response time is the time from when the user sends a request to when the user receives the response data returned by the server.

Calculation method: response time = network time + application processing time

A common rule of thumb for reasonable response times is the 2/5/10 rule: a response within 2 seconds feels very fast to the user; within 5 seconds is acceptable; within 10 seconds is a poor user experience; and beyond 10 seconds the request is effectively considered failed.
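The 2/5/10 rule above can be sketched in a few lines (the bucket labels are my own illustrative wording, not standard terms):

```python
def classify_response_time(seconds: float) -> str:
    """Bucket a response time according to the 2/5/10 rule."""
    if seconds <= 2:
        return "excellent"   # feels very fast to the user
    if seconds <= 5:
        return "acceptable"
    if seconds <= 10:
        return "poor"        # poor user experience
    return "failed"          # beyond 10 s the request is treated as failed
```

For example, `classify_response_time(1.2)` returns `"excellent"`.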

Response time-load correspondence:

[Figure: response time vs. load curve]

Inflection point in the figure:

1. The response time increases suddenly

2. This means one or more system resources have reached their limit

3. Inflection points can usually be used in performance test analysis to locate bottlenecks

2. Throughput

Definition: The number of client requests processed by the system per unit time

Calculation method: Throughput = (number of requests) / (total time)
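The throughput formula translates directly into code (a minimal illustration, not tied to any particular tool; the numbers are example values):

```python
def throughput(num_requests: int, total_seconds: float) -> float:
    """Throughput = number of requests / total time, in requests per second."""
    if total_seconds <= 0:
        raise ValueError("total time must be positive")
    return num_requests / total_seconds

# 3,000 requests handled in 60 seconds -> 50 requests/second
print(throughput(3000, 60))
```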

Throughput-load correspondence:

① Rising stage: throughput increases as the load increases, roughly in proportion to the load;

② Stable stage: throughput stays stable as the load increases, with little change or fluctuation;

③ Falling stage: throughput decreases as the load increases, inversely related to the load;

[Figure: throughput vs. load curve, with regions a1 (rising), a2 (stable), a3 (falling)]

The larger the area of a1, the stronger the system's processing capability; the larger the area of a2, the better the system's stability; and the larger the area of a3, the better the system's fault tolerance.

Throughput rate

Throughput rate = throughput / transmission time, that is, the amount of data transmitted over the network per unit time; it can also refer to the number of client requests processed per unit time, and is an important indicator of network performance.

The throughput rate is usually measured in "bytes/second", but "requests/second" and "pages/second" are also used;

3. Concurrency

① Concurrency in the narrow sense: all users perform the same operation at the same time, generally in the same type of business scenario, such as 1,000 users logging in to the system simultaneously;

② Concurrency in the broad sense: multiple users interact with the system in business scenarios that may be the same or different, with many interleaved requests being processed;

4. Resource utilization

Resource indicators are directly related to hardware resource consumption, while system indicators are directly related to user scenarios and requirements:

[Figure: resource indicators and system indicators]

Resource indicators:
CPU usage: the percentage of CPU time consumed by user processes and system processes. Over a sustained period, the generally accepted upper limit is 85%;

Memory utilization: memory utilization = (1 - free memory / total memory) * 100%. Generally at least 10% of memory should remain available, so the accepted upper limit of memory utilization is 85%;
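The memory-utilization formula above translates directly (a sketch; the byte counts are made-up example values):

```python
def memory_utilization(free_bytes: int, total_bytes: int) -> float:
    """Memory utilization = (1 - free memory / total memory) * 100%."""
    return (1 - free_bytes / total_bytes) * 100

# 2 GiB free out of 8 GiB total -> 75% utilization
print(memory_utilization(2 * 1024**3, 8 * 1024**3))
```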

Disk I/O: disks are mainly used to store and retrieve data, so there are two corresponding I/O operations: writing when data is stored and reading when data is fetched. Disk read/write performance is generally measured with % Disk Time (the percentage of time the disk spends on read and write operations);

Network bandwidth: generally measured with the Bytes Total/sec counter, the rate at which bytes are sent and received, including frame characters. To determine whether the network connection speed is a bottleneck, compare the value of this counter with the current network bandwidth;

System indicators:

Number of concurrent users: the number of users interacting with the system per unit time;

Number of online users: the number of users accessing the system within a certain period; these users do not necessarily submit requests to the system at the same time;

Average response time: the average time the system takes to process a transaction; transaction response time is measured from when the client submits the request to when the client receives the server's response;

Transaction success rate: in performance testing, a defined transaction measures one or more business processes; for example, user login, order saving, and order submission can each be defined as a transaction. The number of defined transactions the system completes successfully per unit time reflects, to a certain extent, the system's processing capability, and is generally expressed as the transaction success rate;

Timeout error rate: the ratio of transactions that fail due to timeouts or other errors to the total number of transactions;
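The success rate and error rate can be computed as follows (an illustrative sketch; the counts are example values):

```python
def transaction_rates(succeeded: int, failed: int) -> tuple[float, float]:
    """Return (success rate, error rate) as percentages of all transactions."""
    total = succeeded + failed
    if total == 0:
        raise ValueError("no transactions recorded")
    return succeeded * 100 / total, failed * 100 / total

# 980 successful and 20 failed transactions -> 98% success rate, 2% error rate
print(transaction_rates(980, 20))
```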

Resource utilization-load correspondence:

[Figure: resource utilization vs. load curve]

Inflection point description in the figure:

1. The resource usage of a certain server is gradually reaching saturation

2. Inflection points can usually be used for performance test analysis and positioning

5. Other commonly used concepts:

TPS

Transactions Per Second: the number of transactions the server can process per unit time (second), generally expressed in transactions/second;

QPS counts queries while TPS counts transactions; a transaction may be a single query but can also cover other types of business operations, so QPS can be regarded as a subset of TPS.

QPS

Query Per Second: query rate per second, which refers to the rate of query requests processed by the server in unit time (seconds);

Both TPS and QPS are important indicators to measure the processing capability of the system, and are generally combined with concurrency to judge the processing capability of the system;

Think Time

Think time: in performance testing, think time simulates real user behavior. Users pause between transactions, and during these intervals there is no load on the server. The concept is introduced mainly for concurrency tests (with mixed business scenarios) so that the ratio of business scenarios better matches real usage;

PV

Page View: The number of page views is usually an important indicator to measure the traffic of a page or even a website;

Subdivisions include the number of unique visitors, repeat visitors, visits to individual pages, and user dwell time;

RT/ART

Response Time / Average Response Time: how long a transaction takes to complete;

Generally, the average response time is the most representative figure in performance testing. Subdivisions include the minimum and maximum response times and the 50% and 90% user response times;
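The 50%/90% user response times mentioned above are percentiles. A simple nearest-rank version (one common convention among several; the sample times are made-up values):

```python
import math

def percentile(times: list[float], pct: float) -> float:
    """Nearest-rank percentile of a list of response times (in seconds)."""
    ordered = sorted(times)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

samples = [0.2, 0.4, 0.5, 0.7, 0.9, 1.1, 1.4, 1.8, 2.5, 6.0]
print(percentile(samples, 50))  # half of the users see this time or less
print(percentile(samples, 90))  # 90% of the users see this time or less
```

Note how a single slow outlier (6.0 s) barely moves the 90th percentile but would distort the average, which is why percentiles are reported alongside it.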


Performance testing process

Demand analysis

System information that needs to be analyzed

[Table: system information to analyze]

Business information to be analyzed

[Table: business information to analyze]

Performance needs assessment

Before implementing performance testing, we need to evaluate the system under test, mainly to determine whether performance testing is required at all. If it is, we then need to establish the performance test points and indicators: what should be tested, what the performance indicators are, and the criteria for passing or failing. The indicators may also be set so that the system under test can handle the expected business load for some period into the future.

Business perspective:
Is the system internal or external to the company? How many people use the system?

System perspective:
a) System architecture; b) Database requirements; c) Special system requirements

Identify performance test points:

  • Key business:
    Determine whether the project under test involves key business, and what its main business logic points are, especially transaction-related function points such as transfer and deduction interfaces. If the item (or function point) is not business-critical, whether it needs performance testing depends on the other factors below.

  • Daily request amount:
    Determine the daily request volume of each function point of the project under test (request volumes can be counted at different time granularities, such as hour, day, week, or month). If the daily request volume is high, the system is under high load, and the business is key, the project needs performance testing, and the key business points can be identified as performance test points.

  • Logical complexity:
    Determine the logical complexity of each function point of the project under test. If a main business's daily request volume is not high but its logic is very complex, it also needs performance testing. The reason is that in distributed calls, one slowly responding link affects the others and can cause an avalanche effect.

  • Operation promotion activities:
    Determine the future load on the system under test from the operation promotion plan. Planning ahead, preventing problems before they occur, and reducing operational risk are among the main goals of performance testing. The system's performance must meet not only the current load but also the load expected for some time into the future, so understanding the promotion plan in advance plays a large role in setting performance test points. For example, the operation plan may specify how many PVs and UVs the system must support per day, or how many visits it must support after a quarter. When a new item (or function point) falls within the scope of a key promotion plan, that item also needs performance testing.

Establish performance indicators

a. Select core business processes (importance/frequency)
b. Number of concurrent users
c. Transaction throughput requirements
d. Response time requirements
e. System occupancy resource requirements
f. Scalability requirements

Create a system load model

  • Business level:
    (a) Core business process throughput
    (b) Peak business distribution time

  • System load:
    (a) peak/normal scenario throughput
    (b) CPU/IO/MEM/NETWORK

  • Data sources:
    (a) server-side monitoring
    (b) database logs
    (c) user requests

Develop the implementation time and plan of the test plan

Preset the start and end time of each sub-module of this performance test
Configuration of the test environment: LAN, virtual machines, operating system, database, middleware
Participants: who is responsible for which tasks; test strategy
Output: test plan, analysis results

Set up a test environment

Test machine environment

Execution machine environment: the machine used to generate load, which usually needs to run on a physical machine.
Load tools: JDK/Eclipse plus LoadRunner, JMeter, Gatling, etc.
Monitoring tools: prepare server-resource, JVM, and database monitoring tools for subsequent performance test analysis and tuning

Server environment

System operating environment: this is usually our test environment: Linux system / database / application services / various monitoring tools.
Most companies' test environments are lower-spec than production, so it is also necessary to consider whether different hardware configurations will be an important factor restricting system performance. Therefore, deploy several different test environments to check the application's performance on different hardware configurations, roughly: ① database server ② application server ③ load simulator ④ software operating environment and platform. Then analyze the system's test results under the different configurations and determine the optimal one (the configuration best suited to the current system).

Test scenario design

Through communication with the business department and analysis of past user behavior, determine typical user operation patterns, the number of users and operations in different scenarios, the test indicators, and the performance monitoring to be performed.

Test case design and script development

Choose LoadRunner or JMeter; I use JMeter.

I use JMeter's recording feature.
(PS: if you can write scripts directly, prefer writing over recording; recording can sometimes introduce interference)

Modify and enhance the script so that it better matches the business logic and is more usable:
(1) Parameterize user input
(2) Correlate (associate) data
(3) Add transactions
(4) Add checkpoints

Debug the script
(1) VuGen single playback
(2) VuGen repeated playback
(3) Controller: single script, multiple users
(4) Controller: multiple scripts, multiple users
(5) View the playback log

Verify the script
(1) Verify with checkpoints
(2) Verify by checking the back-end server logs
(3) Observe back-end changes through the system under test
(4) Use SQL queries/inserts/updates to check the effect

Test data preparation

There are two ways to get data:

(1) Pull production data, keeping it as consistent as possible and of sufficient magnitude
(2) Generate data automatically with scripts or test tools (for example, pre-load data via JDBC)
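As an example of option (2), a small script can generate a CSV file that a JMeter CSV Data Set Config element can read for parameterization (the file name and credential format here are made up for illustration):

```python
import csv

def write_user_csv(path: str, count: int) -> None:
    """Write username/password rows for JMeter's CSV Data Set Config."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for i in range(1, count + 1):
            writer.writerow([f"user{i:04d}", f"Passw0rd_{i:04d}"])

write_user_csv("users.csv", 1000)  # 1,000 test accounts
```

In the test plan, point a CSV Data Set Config at `users.csv` and reference the columns as variables (e.g. `${username}`) in the samplers.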

a) Load test data: how much data is needed for the concurrency test, e.g. for a login scenario?

b) DB data volume: to match the production scenario as closely as possible, a large amount of production-like data must be simulated, so a certain amount of data needs to be inserted into the database in advance.

Performance test execution and management

Execute the test script

In the deployed test environment, execute the designed test scripts in order, according to the business scenarios and their user numbers.
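In practice, JMeter load runs are executed in non-GUI mode. The sketch below assembles the standard command line (`-n` non-GUI, `-t` test plan, `-l` results file, `-e -o` HTML report); the file names are placeholders:

```python
def jmeter_command(plan: str, results: str, report_dir: str) -> list[str]:
    """Build a JMeter non-GUI invocation; pass the list to subprocess.run."""
    return [
        "jmeter",
        "-n",             # non-GUI mode (required for real load runs)
        "-t", plan,       # .jmx test plan to execute
        "-l", results,    # .jtl file to log sample results
        "-e",             # generate the HTML dashboard after the run
        "-o", report_dir, # output directory for the dashboard
    ]

print(" ".join(jmeter_command("plan.jmx", "results.jtl", "report")))
```

Running from the GUI is fine for debugging scripts, but the GUI itself consumes resources on the load generator, which is why non-GUI mode is used for the actual test.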

Test result record

Depending on the tools used, results are recorded in different forms: line charts, statistical charts, tables, and so on. Most performance test tools now provide fairly complete graphical test results; for the servers, counters or third-party monitoring tools can record resource usage and other conditions. After executing the test, organize and analyze the results.


Performance test result analysis and tuning

System Performance Analysis of Test Environment

Based on the test results recorded earlier, calculate and compare against the predetermined performance indicators to determine whether the required results have been achieved; if not, locate the specific bottleneck and analyze it further using the bottleneck's data.

Bottleneck location and analysis
Throughput: the 80/20 rule (i.e., 80% of the business is completed in 20% of the time / a normal distribution)
Response time: the 2/5/10 rule
Memory, disk, I/O, process, and network analysis
Hardware, operating system, middleware, and application bottlenecks
Analyze each specific situation on its own terms

Performance tuning
Weigh the time and human resources available,
as well as hardware resources, scalability, and the impact of each change

Analysis of the impact of hardware devices on system performance

Configure several different test environments so that you can analyze the hardware resource usage in each, determine whether the bottleneck lies in the database server, the application server, or elsewhere, and then optimize accordingly.

Analysis of other influencing factors

There are many factors that affect system performance. One starting point is what the user can perceive: where the system feels slow and where the speed is acceptable, which can be analyzed with the 2/5/10 rule. Other factors, such as network bandwidth, operation actions, storage pools, thread implementation, and server processing mechanisms, must be analyzed case by case.

Problems found in the test

During performance test execution, functional deficiencies or defects may be found, as well as places that need optimization, so the test needs to be executed multiple times.

Test reporting and tracking

The performance test report is a milestone of the performance test. It presents the final results, showing whether the system's performance meets the requirements and whether any performance risks remain.

The performance test report needs to clarify:
performance test objectives,
performance test environment,
performance test data construction rules,
performance test strategy,
performance test results,
performance test tuning instructions,
problems encountered in the performance test process and solutions, etc.

After completing the performance test, the performance test engineer needs to record the results and use them as the baseline for the next performance test, including result data, bottlenecks found, and tuning solutions. Problems encountered during testing, such as code bottlenecks, configuration issues, data problems, and communication problems, along with their solutions or workarounds, should also be accumulated as knowledge.

 

Origin blog.csdn.net/MXB_1220/article/details/131794796