Performance test process (6): test environment construction

1. The difference between performance test environment and functional test environment

The performance test environment differs from the functional test environment. To save resources, some companies let one functional test server host several systems at once and rely on technical means to keep them from interfering with each other (our company, for example, used to run multiple Tomcat instances on a single server).
Performance testing, however, exercises the software and hardware environment of the whole system. If several systems run in the same environment, it is hard to attribute resource usage to any one of them.
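
As a quick sanity check before a performance run, something like the following sketch (assuming the third-party psutil package is installed) can list the busiest processes on the test server and confirm that no other system is sharing its resources:

```python
# Minimal sketch: before a performance run, list the busiest processes on the
# test server to confirm no other system is sharing its resources.
# Assumes the third-party `psutil` package is installed (pip install psutil).
import psutil

def busiest_processes(top_n=10):
    procs = []
    for p in psutil.process_iter(["pid", "name", "cpu_percent", "memory_percent"]):
        procs.append(p.info)
    # Sort by CPU usage, highest first (values may be 0.0 on the first sample)
    procs.sort(key=lambda info: info["cpu_percent"] or 0.0, reverse=True)
    return procs[:top_n]

if __name__ == "__main__":
    for info in busiest_processes():
        print(f'{info["pid"]:>7}  {info["name"]:<25} '
              f'CPU {info["cpu_percent"] or 0.0:5.1f}%  '
              f'MEM {info["memory_percent"] or 0.0:5.1f}%')
```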

2. Ensure the consistency of the test environment and the production environment

1. Hardware environment, including server environment and network environment

For example: the server model, whether the server is shared with other applications, whether it sits in a cluster, whether load balancing is done through BIGIP, the hardware configuration the customer actually uses, the switch model, and the network transmission rate.
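
To make the comparison concrete, a small script can record a hardware "fingerprint" of each host so the test and production machines can be checked side by side. The sketch below is only illustrative and assumes psutil is available; details such as the switch model or load-balancer setup would still have to be recorded by hand:

```python
# Minimal sketch: record a hardware fingerprint of a host so the test and
# production environments can be compared side by side. Field names are
# illustrative; extend with network, cluster, and load-balancer details.
import json
import platform
import socket

import psutil  # third-party: pip install psutil

def hardware_fingerprint():
    return {
        "hostname": socket.gethostname(),
        "machine": platform.machine(),
        "processor": platform.processor(),
        "physical_cores": psutil.cpu_count(logical=False),
        "logical_cores": psutil.cpu_count(logical=True),
        "memory_gb": round(psutil.virtual_memory().total / 1024 ** 3, 1),
    }

if __name__ == "__main__":
    print(json.dumps(hardware_fingerprint(), indent=2))
```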

2. Software environment

Version consistency
This includes the versions of the operating system, database, middleware, and the system under test.

Configuration consistency
The parameter settings of the system (operating system / database / middleware / system under test) should also match. These parameters can have a huge impact on the system, so in addition to keeping the software versions consistent with the real environment, pay attention to whether their configuration is consistent as well.
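
One way to make this check routine is to collect the versions and key parameters of both environments and diff them. The sketch below uses hypothetical component names and values purely for illustration:

```python
# Minimal sketch: diff the software versions and key parameters of two
# environments. The example values are hypothetical; in practice they would be
# collected from the OS, database, middleware, and system under test.
def diff_environments(test_env: dict, prod_env: dict) -> dict:
    """Return every key whose value differs between the two environments."""
    keys = set(test_env) | set(prod_env)
    return {
        k: (test_env.get(k), prod_env.get(k))
        for k in keys
        if test_env.get(k) != prod_env.get(k)
    }

test_env = {"os": "CentOS 7.9", "jdk": "1.8.0_202", "tomcat": "8.5.57",
            "mysql": "5.7.30", "mysql.max_connections": 500}
prod_env = {"os": "CentOS 7.9", "jdk": "1.8.0_202", "tomcat": "8.5.57",
            "mysql": "5.7.30", "mysql.max_connections": 2000}

for key, (test_val, prod_val) in diff_environments(test_env, prod_env).items():
    print(f"MISMATCH {key}: test={test_val} prod={prod_val}")
```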

3. Consistency of usage scenarios

Consistency of basic data
This includes the predicted volume of business data and the distribution of data types. A very simple example: a database holding only 10 rows and one holding tens of millions of rows will produce very different performance indicators for the same test.
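
To avoid testing against an almost empty database, the test data can be seeded up to the predicted production volume. The following sketch uses SQLite and a made-up orders table only to keep the example self-contained:

```python
# Minimal sketch: pre-load a table with a production-scale volume of rows so
# the test database is not effectively empty. SQLite is used only to keep the
# example self-contained; the table name, columns, and row count are hypothetical.
import random
import sqlite3

ROWS = 1_000_000   # target volume estimated from production
BATCH = 10_000

conn = sqlite3.connect("perf_test.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, "
             "customer_id INTEGER, amount REAL, status TEXT)")

statuses = ["NEW", "PAID", "SHIPPED", "CLOSED"]   # mirror the production distribution
for start in range(0, ROWS, BATCH):
    batch = [(start + i, random.randint(1, 50_000),
              round(random.uniform(1, 999), 2), random.choice(statuses))
             for i in range(BATCH)]
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", batch)
    conn.commit()
conn.close()
```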

To keep each test run comparable, also bear in mind that disk usage and disk fragmentation will affect performance to some degree.

Consistency of usage mode
Simulate how users behave in real scenarios as closely as possible. This is in fact why we do requirement analysis in the early stage of performance testing: its main purpose is to reproduce real user behaviour as faithfully as possible.
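
In a load script this usually means driving a weighted transaction mix with realistic think times rather than hammering a single interface flat out. The sketch below uses hypothetical operations, weights, and think times:

```python
# Minimal sketch: drive a load script with a weighted transaction mix and
# think times taken from the requirement analysis. The operations, weights,
# and think-time range here are hypothetical.
import random
import time

# Transaction mix estimated from real user behaviour: (name, weight)
TRANSACTION_MIX = [("browse", 0.60), ("search", 0.25), ("place_order", 0.15)]
THINK_TIME_RANGE = (1.0, 3.0)   # seconds between user actions

def pick_transaction():
    r = random.random()
    cumulative = 0.0
    for name, weight in TRANSACTION_MIX:
        cumulative += weight
        if r <= cumulative:
            return name
    return TRANSACTION_MIX[-1][0]

def simulate_user(actions=5):
    for _ in range(actions):
        print("executing:", pick_transaction())   # replace with a real request
        time.sleep(random.uniform(*THINK_TIME_RANGE))

simulate_user()
```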

3. Implementation strategy

The sections above described how the test environment should match the production environment. In practice, cost considerations often make it hard to obtain enough identical resources, so a test environment that is completely consistent with production is rarely achievable.

In that case we generally use one of two strategies to build the performance test environment (note that both are estimation methods and carry some error):

1. Simulate high-end hardware from low-end hardware through modeling

Run configuration tests to work out the relationship between hardware resources and system processing capacity under different configurations, and derive from it the real configuration needed to meet the performance targets. This kind of simulation requires accurate modeling: the more sampling points the model has, the more accurate the result, so that the performance indicators measured on the low-end configuration can be converted through the model into the predicted performance indicators of the high-end configuration.

For example: when building a low-end environment, first run separate performance benchmarks against its CPU and memory under different configurations to obtain a list of baseline figures. During these tests you must confirm that the hardware really is the bottleneck. If only one CPU is used and its utilization stays very low while the measured performance is also low, that at least shows the CPU is not the bottleneck, and in that case you cannot obtain the baseline value you are after.

[Figure: users supported at CPU saturation — 100 users on one CPU vs. 190 users on two CPUs]
As shown in the figure above, with one CPU the system supports 100 users with CPU usage close to saturation (100%); with two CPUs it supports 190 users, again with CPU utilization close to saturation (100%). From these records we can calculate how many CPUs are needed to run 800 users, as sketched below. If the CPU model and frequency used in production are not exactly the same, a tool such as EVEREST can be used to score each CPU and compare their performance. Memory can be tested and extrapolated in the same way. This requires plenty of experimentation and a deep understanding of both the hardware and the structure of the whole project in order to keep the error as small as possible.
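
A minimal sketch of that calculation, assuming roughly linear scaling between CPU count and supported users (which is exactly where the estimation error comes from), might look like this:

```python
# Minimal sketch of the modelling idea: fit a simple linear model to the
# sampled (CPU count, users at ~100% CPU) points and extrapolate how many
# CPUs are needed for a target user count. Real scaling is rarely perfectly
# linear, so more sampling points give a more trustworthy estimate.
import math

samples = [(1, 100), (2, 190)]   # (number of CPUs, users supported at saturation)

def fit_linear(points):
    """Least-squares fit: users ~= slope * cpus + intercept."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

slope, intercept = fit_linear(samples)
target_users = 800
cpus_needed = math.ceil((target_users - intercept) / slope)
print(f"Estimated CPUs for {target_users} users: {cpus_needed}")   # -> 9
```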

2. Extrapolate by means of clusters

For larger systems the processing capacity of a single server is limited, so clusters with load balancing are usually used to handle massive numbers of requests. Even when the full production cluster cannot be reproduced in the test environment, you can run a performance test against a single node to measure its processing capacity, then measure the performance loss introduced by each additional node, and from that derive the expected performance indicators of the cluster under load balancing.

For example: first obtain concrete performance indicators on a single server, say it can sustain 500 concurrent users with an average TPS of 60 and a response time of 2 seconds. Then add the load-balancing layer and measure again to capture the loss it introduces. After that, add a second server behind the load balancer and measure the per-server indicators with two servers, and so on, producing a table like the following:

Servers behind load balancer | Concurrent users per server | Average TPS | Response time (s)
1 (single server, no load balancer) | 500 | 60 | 2
1 | 490 | 58 | 2
2 | 490 | 58 | 2
3 | 480 | 57 | 2

As more servers are added behind the load balancer, the average processing capacity of each server gradually stabilises, which tells us how many servers are needed for a given load; the sketch below turns this record into a rough capacity estimate.
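
A minimal sketch of that estimate, using the per-server figures from the table above and an illustrative target of 5,000 concurrent users, assuming the per-node capacity stays at its stabilised value:

```python
# Minimal sketch: once the per-node capacity stabilises behind the load
# balancer, estimate how many nodes are needed for a target load. The
# measurements come from the table above; `target_users` is an illustrative goal.
import math

# Measured per-node concurrent-user capacity as nodes were added
per_node_capacity = {1: 490, 2: 490, 3: 480}

# Plan with the most pessimistic (stabilised) per-node figure
stable_capacity = min(per_node_capacity.values())   # 480 users per node

target_users = 5000
nodes_needed = math.ceil(target_users / stable_capacity)
print(f"Roughly {nodes_needed} nodes for {target_users} concurrent users")  # -> 11
```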
