Exploring stress testing (1) -- some basic definitions

Software stress testing is a fundamental quality assurance activity and part of every major software testing effort. The basic idea is simple: instead of running manual or automated tests under normal conditions, run them under conditions where machines are few or system resources are scarce. Resources commonly targeted by stress tests include internal memory, CPU availability, disk space, and network bandwidth. (From Baidu Encyclopedia)

In layman's terms, stress testing means continuously applying pressure to the software, forcing it to run under extreme conditions and observing how far it can go, in order to expose performance defects. Alternatively, it means sending an expected number of transaction requests to the system within a set period of time, to test how efficiently the system runs under different levels of pressure and how much pressure it can withstand. Targeted testing and analysis then follow: find the bottlenecks that limit system performance, evaluate how efficiently the system runs in its actual operating environment, assess overall system performance, judge whether the application needs optimization or structural adjustment, and tune system resources accordingly.
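As a very rough illustration of the second scenario (a fixed number of requests within a time window), here is a minimal Python sketch that sends a set number of requests to a target and records response times and failures. The URL, request count, and timeout are placeholder assumptions, not values from this article; a real test would normally be driven by a dedicated tool such as JMeter.

```python
# Minimal fixed-load sketch: send REQUEST_COUNT requests to a hypothetical
# endpoint and record per-request response times and failures.
import time
import urllib.request
import urllib.error

TARGET_URL = "http://localhost:8080/api/health"  # hypothetical endpoint
REQUEST_COUNT = 100                              # assumed load size

def run_fixed_load() -> None:
    latencies, failures = [], 0
    for _ in range(REQUEST_COUNT):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=5):
                pass
        except (urllib.error.URLError, OSError):
            failures += 1
            continue
        latencies.append(time.perf_counter() - start)
    if latencies:
        print(f"average response time: {sum(latencies) / len(latencies):.3f}s")
    print(f"failures: {failures}/{REQUEST_COUNT}")

if __name__ == "__main__":
    run_fixed_load()
```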

The load pressure on a software system refers to the traffic the system bears in a given software, hardware, and network environment, for example the number of concurrent users, the continuous running time, and the data volume. Among these, the number of concurrent users is an important load indicator.

A concurrency performance test determines the system's concurrent capacity by gradually increasing the concurrent user load until the system hits a bottleneck or reaches an unacceptable state, while comprehensively analyzing transaction execution metrics, resource monitoring metrics, and so on. Concurrency performance testing is an important part of load and stress testing.
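To make "gradually increasing the load of concurrent users" concrete, the sketch below steps up the number of simulated users against a hypothetical endpoint and prints the average response time and error count for each step, so a sharp rise (a likely bottleneck) becomes visible. The endpoint, step sizes, and requests-per-user figure are illustrative assumptions only.

```python
# Step-load sketch: raise the simulated concurrent-user count step by step
# and report average response time and errors per step.
import time
import urllib.request
import urllib.error
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/api/orders"  # hypothetical endpoint
REQUESTS_PER_USER = 20                           # assumed per-user workload

def one_request() -> float:
    """Time a single request; return NaN on failure."""
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(TARGET_URL, timeout=5):
            pass
    except (urllib.error.URLError, OSError):
        return float("nan")
    return time.perf_counter() - start

def run_step(users: int) -> None:
    with ThreadPoolExecutor(max_workers=users) as pool:
        samples = list(pool.map(lambda _: one_request(),
                                range(users * REQUESTS_PER_USER)))
    ok = [s for s in samples if s == s]  # drop NaN (failed) samples
    avg = sum(ok) / len(ok) if ok else float("nan")
    print(f"{users:4d} users  avg response time: {avg:.3f}s  "
          f"errors: {len(samples) - len(ok)}")

if __name__ == "__main__":
    for users in (10, 50, 100, 200):  # arbitrary load steps
        run_step(users)
```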

Commonly used tools include LoadRunner, Apache JMeter, NeoLoad, WebLOAD, Loadster, Load Impact, CloudTest, LoadStorm, Alibaba Cloud PTS, and others. This article introduces stress testing with Apache JMeter.

Performance test reference parameters:

(1) Average response time:

It refers to the time it takes from the moment the client sends a request to the moment it receives the result returned by the server, including both network transmission time and server processing time. From the user's point of view, response time runs from the moment the client machine processes the user's operation and sends the request to the moment the client program receives and displays the server's result.
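A tiny sketch of how the average is computed from this definition, using made-up send and receive timestamps (each response time covers network transmission plus server processing):

```python
# Hypothetical per-request timestamps, in seconds from the start of the run.
send_times    = [0.00, 1.00, 2.00, 3.00]  # when each request was sent
receive_times = [0.35, 1.28, 2.41, 3.30]  # when each result was received

# Response time per request = receive time - send time.
response_times = [r - s for s, r in zip(send_times, receive_times)]
average = sum(response_times) / len(response_times)
print(f"average response time: {average:.3f}s")
```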

(2) Number of concurrent users:

It refers to the number of users carrying out session operations with the server at the same time within a given period. The types of concurrent users include the number of system users, the number of concurrent online users, and the number of concurrent business users.

(3) Throughput:

It refers to the number of requests or pages the system processes per unit time, and it directly reflects the software's carrying capacity. Generally speaking, throughput is measured as the number of requests or pages per second; from a business point of view, it can also be measured as the number of visits per day or the number of transactions processed per hour.

TPS: the number of transactions (passed, failed, and stopped) the system processes per second, which can be used to gauge the transaction load on the system at any given moment.
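As a small worked example of these two views of throughput, the sketch below takes hypothetical transaction completion timestamps and computes both the overall rate for the run and a per-second TPS breakdown:

```python
# Hypothetical transaction completion times, in seconds from the run start.
from collections import Counter

completion_times = [0.2, 0.4, 0.9, 1.1, 1.3, 1.8, 2.2, 2.5]

# Overall throughput: transactions processed per second over the whole run.
duration = max(completion_times) - min(completion_times)
overall_tps = len(completion_times) / duration if duration > 0 else 0.0
print(f"overall throughput: {overall_tps:.2f} transactions/s")

# Per-second TPS: how many transactions finished in each one-second bucket,
# i.e. the transaction load on the system at a given moment.
per_second = Counter(int(t) for t in completion_times)
for second in sorted(per_second):
    print(f"second {second}: {per_second[second]} transactions")
```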

(4) Resource utilization:

It refers to the utilization rate of system resources (CPU, memory, and so on), usually measured as the ratio of the resources actually used to the total resources available, covering the network, the operating system, the database, etc.
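One simple way to watch resource utilization while a test runs is to sample it periodically. The sketch below reports CPU and memory usage as a percentage of the total available; it assumes the third-party psutil library is installed, which is not part of this article's toolchain, just a convenient choice.

```python
# Periodically sample CPU and memory utilization during a test run.
import time
import psutil  # assumed dependency: pip install psutil

def sample_utilization(duration_s: int = 10, interval_s: float = 1.0) -> None:
    end = time.time() + duration_s
    while time.time() < end:
        cpu = psutil.cpu_percent(interval=interval_s)  # % CPU over the interval
        mem = psutil.virtual_memory().percent          # % of total RAM in use
        print(f"CPU: {cpu:5.1f}%  memory: {mem:5.1f}%")

if __name__ == "__main__":
    sample_utilization()
```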

These four metrics fall broadly into two groups: system resource utilization and system behavior (response time, throughput, and so on). They are correlated with one another and together reflect different aspects of performance. For example, response time, maximum number of concurrent users, throughput, and resource utilization measure, respectively, the software's timeliness, its scalability and capacity, its processing power, and its running state. The shorter the response time, the more concurrent users the system can bear, the higher the throughput, and the fewer resources it occupies, the better the system performs, and vice versa.

Source: blog.csdn.net/weixin_44240224/article/details/129713520