[Performance Testing] A five-year testing veteran's summary: from performance testing basics and indicators to advanced topics


Foreword

Performance testing evaluates a system's or application's response speed, throughput, stability, and scalability under specific operating conditions.
It typically involves simulating a large number of simultaneous users accessing the application and stress-testing it to determine whether the system functions properly under high load.

Performance testing must be done with tools; it cannot be done manually. Performance testing simulates many people operating the system at the same time, and doing that by hand would require a huge number of people working together at enormous cost.

Commonly used performance testing tools include JMeter, LoadRunner, and Gatling. These tools can simulate large numbers of users and record and analyze system performance data. When performing performance testing, pay attention to building the test environment, preparing test data, designing test scenarios, and analyzing test results.

From a narrow perspective, large-data-volume testing is actually part of performance testing.

When an interface returns 10 records versus 1 million records, the time consumed is clearly different. We then look at performance-related issues such as the database query for the data, the server assembling the response, and network transmission, which is why large-data-volume testing is really a form of performance testing.

Performance Indicators

Concurrency: there are two senses.
Narrow sense: performing the same operation at the same point in time, for example at a rendezvous point.
Broad sense: initiating requests to the server at the same point in time, whether or not they are the same operation.

Number of concurrent users:
Broad sense: the number of users initiating requests at the same point in time (the requests may be the same or different; all count as concurrent).
Narrow sense: the number of users initiating the same request at the same point in time.
Rendezvous point: rendezvous points exist only for narrow-sense concurrency; they make multiple users fire the same request at the same instant. They are typically used in scenarios such as flash sales (seckill) that must generate a burst of requests in an instant.

The difference between the number of concurrent users and concurrency: one user can initiate multiple requests. For example, 10 users each initiating 5 requests yields 50 concurrent requests.
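The users-versus-requests distinction above can be sketched with a thread pool, where each worker plays one user. This is a toy simulation, not a real load generator; the names and the no-op "requests" are illustrative only.

```python
from concurrent.futures import ThreadPoolExecutor

USERS = 10              # concurrent users
REQUESTS_PER_USER = 5   # requests each user initiates

def user_session(user_id):
    # Stand-in for a real user session; each "request" is just a label here.
    return [f"user{user_id}-req{i}" for i in range(REQUESTS_PER_USER)]

# One thread per user, all sessions running at the same time
with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(user_session, range(USERS)))

total_requests = sum(len(session) for session in results)
print(total_requests)  # 50 — 10 concurrent users produce 50 concurrent requests
```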

Transaction: the process of a client sending a request to a server and the server responding. A module, a single request, or a business operation can each be treated as a transaction.

Response time (RT): the time from initiating a request to receiving its response;
RT = request network transmission time + server processing time + response network transmission time.
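The RT breakdown above is just a sum of its three components. A quick worked example, with made-up millisecond values:

```python
# Hypothetical timings for one request (all values in milliseconds)
request_network_ms = 20     # client -> server transmission
server_processing_ms = 120  # server-side processing
response_network_ms = 25    # server -> client transmission

response_time_ms = request_network_ms + server_processing_ms + response_network_ms
print(response_time_ms)  # 165
```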

TPS/QPS:
TPS (Transactions Per Second): the number of transactions the server processes per second; the main indicator of server processing capacity.
QPS (Queries Per Second): the query rate per second. "Query" here is not limited to database queries; other query-like operations count as well, such as memory and cache lookups.
The difference between the two: one transaction may trigger multiple queries.
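Both rates are simple counts divided by the test duration. A sketch with hypothetical numbers, showing how one transaction can fan out into several queries:

```python
# Hypothetical load-test summary over a fixed window
transactions = 12_000   # completed transactions
queries = 30_000        # queries triggered (DB, cache, memory lookups, ...)
duration_s = 60         # measurement window in seconds

tps = transactions / duration_s                    # transactions per second
qps = queries / duration_s                         # queries per second
queries_per_transaction = queries / transactions   # fan-out: queries per transaction

print(tps, qps, queries_per_transaction)  # 200.0 500.0 2.5
```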

Throughput: the number of requests processed per unit time (transactions/s); a measure of how many transactions can pass through per second. If the network is not a bottleneck, TPS and throughput will generally track each other.

Throughput rate: the average amount of data transferred per unit time (kb/s).
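The data-rate form of throughput is total bytes transferred divided by the window, converted to kilobytes. Numbers here are hypothetical:

```python
# Hypothetical totals for one test window
bytes_transferred = 9_216_000   # total payload bytes moved in the window
duration_s = 60                 # window length in seconds

throughput_kb_s = bytes_transferred / 1024 / duration_s  # kilobytes per second
print(throughput_kb_s)  # 150.0
```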

Hit rate (Hits per Second): the number of hits (HTTP requests) the server receives per second.

Resource utilization: the usage of server resources. The common ones are:
CPU utilization;
memory utilization;
disk I/O;
as a rule of thumb, utilization of these resources should not exceed 80%.
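The 80% rule of thumb is easy to automate as a threshold check. This sketch uses made-up sample readings; in practice the numbers would come from a monitoring tool or agent:

```python
# Sample resource readings in percent (illustrative values, not real measurements)
readings = {"cpu": 72.5, "memory": 85.0, "disk_io": 40.0}
THRESHOLD = 80.0  # the rule-of-thumb ceiling from the text

# Flag any resource whose utilization exceeds the threshold
overloaded = {name: pct for name, pct in readings.items() if pct > THRESHOLD}
print(overloaded)  # {'memory': 85.0}
```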

Difference Between Performance and Functional Testing

The purpose of performance testing is not to find bugs but to obtain performance indicators.
Performance testing can start before the UI is finished, because it mainly targets interfaces (APIs).
Performance testing also takes longer than functional and automated testing.

How to do performance testing

If the project has never been performance tested, the first step is benchmarking: establish the project's current performance indicators. For example, if the system starts misbehaving when concurrency reaches 150, then 150 is the maximum concurrency of the current version and is recorded as a baseline indicator.

After the system is optimized or a new version is released, run the same scripts again to obtain fresh indicators, then compare them with the old ones. If an indicator value drops, the new version has a performance regression, and the cause must be analyzed and optimized.

By analogy, every subsequent iteration or performance test compares the new indicators against the old ones to determine whether system performance has declined.
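The baseline-versus-new-version comparison can be a small helper that flags any indicator falling below its baseline. Everything here is illustrative: the indicator names, values, and the 5% tolerance are assumptions, not part of the original text.

```python
def regressions(baseline, current, tolerance=0.05):
    """Return indicators whose current value dropped more than `tolerance`
    (as a fraction) below the baseline. Names/values are illustrative."""
    out = {}
    for name, old_val in baseline.items():
        new_val = current.get(name)
        if new_val is not None and new_val < old_val * (1 - tolerance):
            out[name] = (old_val, new_val)
    return out

baseline = {"max_concurrency": 150, "tps": 200}   # previous version's indicators
current  = {"max_concurrency": 130, "tps": 198}   # new version's indicators

print(regressions(baseline, current))  # {'max_concurrency': (150, 130)}
```

Here `tps` dipped only 1%, within tolerance, while `max_concurrency` dropped about 13% and is flagged for analysis.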

Classification of Performance Testing

Load test: gradually increase the system load, observe how performance changes, and determine the maximum load the system can withstand. The key is to increase the load step by step, bracket the range where performance breaks down, and then narrow that range until it is as small as possible.

Don't blindly pile on concurrency, jumping straight to hundreds, thousands, or tens of thousands of concurrent users. Such a test yields no valid data and only wastes time.
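The "ramp up, bracket, then narrow" procedure is essentially a search for the breaking point. This sketch replaces a real load test with a toy `system_ok` check (it "fails" above 150 concurrency, an assumed number) so the narrowing logic itself is visible:

```python
def system_ok(concurrency):
    # Stand-in for running a real load test at this concurrency and checking
    # error rates / response times; this toy system breaks above 150.
    return concurrency <= 150

# Step 1: coarse ramp-up in fixed steps until the system misbehaves
step, level = 50, 0
while system_ok(level + step):
    level += step
low, high = level, level + step   # the breaking point lies in (low, high]

# Step 2: narrow the bracket (binary search) down to the smallest range
while high - low > 1:
    mid = (low + high) // 2
    if system_ok(mid):
        low = mid
    else:
        high = mid

print(low)  # 150 — largest concurrency the toy system still handles
```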

Stress test: the purpose of a stress test is stability. Run the system under relatively high load for a relatively long time and watch the system services and the utilization of each resource.

The usual recommendation is a 7×24 run, with durations set in multiples of 24 hours, because issues such as memory buffer problems may only surface after a long run. The concurrency for a stress test is typically set to 80-90% of the system's maximum concurrency.

In practice, however, few companies have the resources for such long runs, so stress tests are mostly carried out after working hours or on weekends.

Reliability test: under a given business load, run for a period of time and check whether the system remains stable. It is similar to a stress test, but it does not need to run as long, so reliability tests are commonly used to test scenarios such as flash sales (seckill).

Capacity test: under fixed software and hardware conditions, with the database populated at different data volumes, test the read/write-heavy business flows of the system to obtain performance indicators at each data level.

If the system has been running for a while, or you can foresee many more users or much more data in the future, capacity testing is required: the test-environment database may hold only a few thousand rows, while production may hold tens of thousands or even hundreds of thousands, and different data volumes affect performance differently.
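A capacity test can be sketched by timing the same read-heavy query at different data volumes. This toy version uses an in-memory SQLite database (the `orders` table, column names, and row counts are invented for illustration); a real capacity test would run against the actual schema and realistic data:

```python
import sqlite3
import time

def query_time_at_scale(rows):
    """Time one read-heavy query against `rows` records in an in-memory DB.
    A toy stand-in for capacity testing at different data levels."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
    db.executemany("INSERT INTO orders (amount) VALUES (?)",
                   [(i * 1.5,) for i in range(rows)])

    start = time.perf_counter()
    db.execute("SELECT COUNT(*), AVG(amount) FROM orders "
               "WHERE amount > 100").fetchone()
    elapsed = time.perf_counter() - start
    db.close()
    return elapsed

# Same query, two data levels — compare the indicators across scales
for scale in (1_000, 100_000):
    print(scale, query_time_at_scale(scale))
```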

The following is the software test engineer learning knowledge architecture I compiled for 2023 (originally a set of diagrams):

1. Python programming, from entry to mastery
2. Interface (API) automation project practice
3. Web automation project practice
4. App automation project practice
5. Resumes for first-tier companies
6. Test development and DevOps
7. Commonly used automated testing tools
8. JMeter performance testing

9. Summary (a little surprise at the end)

The journey of life is never smooth, but every setback and failure is an opportunity for transformation and growth. Only by persisting in our efforts can we create the future we want.

Struggle is both a belief and an attitude. Whatever trials and risks you encounter in life, as long as you go forward bravely and are not afraid of hardship, you will keep surpassing yourself and pursuing a better self.

Every successful person has experienced countless failures, and every step toward success demands more effort than others are willing to give. But as long as we stick to our own direction and stay proactive, we can realize our biggest dreams.


Origin blog.csdn.net/m0_70102063/article/details/130223716