How to Effectively Conduct Performance Testing

1 Introduction
  With the development of the Internet and e-commerce, people can shop online, communicate in real time, and retrieve information without leaving home. Most of these systems use a B/S (browser/server) architecture. A system's performance directly determines how many online users it can accommodate and how satisfied those users are, and more users mean more revenue from advertising and other sources. Performance testing therefore plays a critical role for B/S systems, especially public-facing Internet systems.
2 What is performance testing?
Performance testing uses automated tools to simulate a variety of normal, peak, and abnormal load conditions and measure the system's performance indicators; it includes load testing, stress testing, batch testing, and other types. Performance testing uncovers many latent problems that only appear at a certain scale of traffic and therefore cannot be found by simple manual testing. With test tools, or scripts you write yourself, the target system can be exercised under realistic scenarios, exposing problems before go-live and reducing later maintenance costs.
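The idea of driving the system with simulated concurrent users can be sketched in a few lines. The sketch below is illustrative only: `send_request` is a hypothetical stand-in for a real HTTP call to the system under test, and real tools add ramp-up control, reporting, and distributed load generation.

```python
import threading
import time
import random
import statistics

def send_request():
    """Simulated request to the system under test. In a real test this would
    be an HTTP call to the target URL; here a short sleep stands in for
    network plus server processing time (hypothetical timings)."""
    start = time.time()
    time.sleep(random.uniform(0.01, 0.03))
    return time.time() - start

def run_load_test(num_users, requests_per_user):
    """Drive concurrent virtual users and collect per-request response times."""
    results = []
    lock = threading.Lock()

    def virtual_user():
        for _ in range(requests_per_user):
            elapsed = send_request()
            with lock:  # protect the shared result list
                results.append(elapsed)

    threads = [threading.Thread(target=virtual_user) for _ in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

times = run_load_test(num_users=20, requests_per_user=5)
print(f"requests: {len(times)}, avg response: {statistics.mean(times):.3f}s")
```

Even this toy driver shows the core loop of any load tool: spawn virtual users, record each transaction's elapsed time, and aggregate the results.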
3 Performance testing stages
The whole process of performance testing can be roughly divided into test planning, test execution, and result analysis. This article introduces a test model as a running example; its details are shown in Table 1 below:
Table 1 Test model
System name: Online shopping system
System architecture: B/S system based on an MVC three-tier architecture
System functions:
  Product browsing: any user can enter the website and browse products.
  Order submission: after a registered user logs in and places an order for goods, the system returns success or failure.
  Background processing: a database script runs automatically at 11:00 pm every night to clear that day's transaction data.

4 Performance test planning
Test planning is the most complex and valuable part of the entire performance test. It includes: confirming test objectives, sorting out business processes, formulating quantitative indicators, designing test cases and scenarios, preparing test resources, and arranging the test plan.
4.1 Confirm the test target
For each system under test, the test objective must be clarified first; for example, "verify the concurrent processing capability of each business function of the current system." Because system stakeholders have different responsibilities, they position the performance test differently, so objectives must be determined case by case. In the test model of this article, we assume two roles, product manager and technical manager; their objectives are summarized in Table 2, and the final test objectives combine the two.
Table 2 Test objectives
Role | Test objectives
Product manager | Verify the maximum number of user visits the system can support, the optimal number of user visits, the maximum transactions per second, and whether the expected business volume can be met in 7x24-hour operation.
Technical manager | Locate system performance bottlenecks, check for memory leaks, and verify that middleware and database resource utilization is reasonable.
Generally, performance testing serves as an acceptance step before go-live. By that stage the system's functions are largely complete, and the objective is a performance test of the system as a whole; if it then turns out that core components must be modified, the cost of adjustment is very high. Instead, performance testing can be introduced early in the project, testing each business module during development and refining the objectives of each stage, as shown in the following figure:
Figure 1. Performance testing entry point

As Figure 1 shows, the system offers many test entry points. When the user interface layer is not yet stable, you can start from the business logic layer to check system performance. If the system is viewed as a building, each floor from bottom to top is a component; when every component is strong, the whole house is strong.
4.2 Sort out the business processes
After the test objective is confirmed, the business processes must be sorted out against that objective. For systems with complex functions, business staff and developers should also participate. Pay attention to the following aspects:
1: Distinguish user operation flows from system processing flows. Both are business processes, but system processing flows are initiated in the background and are invisible to users. In the test model of this article, product browsing is a user operation flow, while the automatic database batch job is a system processing flow.
2: Simulate business operations from the user's point of view, covering all operation branches, including interruptions that are likely to occur.
Business process sorting directly shapes the subsequent test cases and scenario designs, which together determine whether the performance test data truly reflects the system's status. When the performance test team is unfamiliar with the business, the performance test project manager needs to arrange support.
4.3 Develop quantitative indicators
In the performance test report, the system's performance is expressed as a set of test indicators and their values. Different objectives call for different indicator sets. For the test model in this article, the following simple indicators can be formulated (for more detailed indicators, refer to the relevant documentation):
Functional layer: average transaction response time, number of transactions completed per second, number of successful transactions, number of failed transactions.
Middleware: JVM memory usage, middleware queue, thread pool utilization.
Database: queue length, SQL that occupies the most resources, waiting time, shared pool memory usage.
Operating system: CPU average utilization, CPU queue, memory utilization, disk IO.
Along with each indicator, a target value range must be set according to the test objective. For example, per the product manager's requirements, with one thousand concurrent users the average transaction response time must not exceed 5 seconds. Numerical ranges can also be specified for indicators such as CPU utilization and JVM memory utilization (Table 3). Note that different testing tools support different indicator sets, so multiple tools can be used together for collection.
Table 3 Value ranges of performance indicators
Indicator | Test scenario | Acceptable value
Average CPU utilization | 1,000 concurrent users | < 85%
Average JVM memory utilization | 1,000 concurrent users | < 80%
Quantified performance indicators give the system optimization targets: when we say performance meets expectations, we mean that every indicator's value lies within its ideal range. How, then, do we set the correct value ranges? This must be based on experience and on the system's historical data. The former means drawing analogies with indicators from systems of the same type; the latter requires mining operations data, including peak user access and the highest number of transactions per second.
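Checking collected indicator values against the agreed ranges is easy to automate. The sketch below is a minimal, hypothetical example: the threshold names and sample values are invented for illustration and do not come from a real test run.

```python
# Hypothetical indicator thresholds in the spirit of Table 3
# (1,000-concurrent-user scenario); names and values are illustrative.
THRESHOLDS = {
    "cpu_avg_percent": 85.0,
    "jvm_memory_percent": 80.0,
    "avg_response_seconds": 5.0,
}

def evaluate(samples, thresholds):
    """Average the collected samples for each indicator and compare the
    average against its upper limit. Returns indicator -> (average, passed)."""
    report = {}
    for name, limit in thresholds.items():
        values = samples.get(name, [])
        avg = sum(values) / len(values) if values else float("nan")
        report[name] = (avg, avg < limit)
    return report

# Example samples, as might be collected during a test run (invented data).
collected = {
    "cpu_avg_percent": [72.5, 80.1, 78.3],
    "jvm_memory_percent": [65.0, 71.2, 69.8],
    "avg_response_seconds": [3.2, 4.1, 3.8],
}
report = evaluate(collected, THRESHOLDS)
for name, (avg, ok) in report.items():
    print(f"{name}: avg={avg:.1f} -> {'PASS' if ok else 'FAIL'}")
```

Encoding the ranges this way also makes the benchmark comparisons discussed later mechanical: rerun, re-evaluate, diff the reports.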
4.4 Formulating test cases and scenarios
Performance test cases decompose the sorted business processes into testable function points, describe them together with performance indicators, and convert them into executable test code. In the test model of this article, the user login use case is briefly described as follows (preconditions of the use case, such as system configuration and deployment information, are omitted):
Table 4 Test case 1
Login test case
  1: The user opens the website home page; the page should display normally, and a wait longer than 60 seconds counts as a failure.
  2: The user enters an account and password, clicks the login button, and waits for the system to report success or failure; a wait longer than 60 seconds counts as a login failure.
In test case 1, the user interacts with the system twice (opening the URL and clicking the login button), and the waiting time of each interaction must be counted separately. Since a real user pauses between operations, we can add think time to the script (a fixed or random wait) to simulate this. Do not underestimate this setting: with a large number of users, the pressure on the system is completely different. The think time must then be excluded when computing response-time statistics.
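A minimal sketch of think time in a test script, under the assumption that the login interaction is stubbed out; the timings are invented, and a real script would issue the two HTTP requests described in test case 1:

```python
import random
import time

def login_transaction():
    """Simulated login interaction. A real script would perform the two
    interactions of test case 1; a short sleep stands in for server time."""
    time.sleep(0.02)

def run_iteration(min_think, max_think):
    """One user iteration: measure the transaction itself, then pause for a
    random think time that must be EXCLUDED from response-time statistics."""
    start = time.time()
    login_transaction()
    response_time = time.time() - start   # counted toward the metric
    think = random.uniform(min_think, max_think)
    time.sleep(think)                     # simulates the user's pause; not counted
    return response_time, think

resp, think = run_iteration(min_think=0.05, max_think=0.1)
print(f"response {resp:.3f}s (recorded), think {think:.3f}s (discarded)")
```

Keeping the two durations separate from the start is what makes it possible to "remove this part of the thinking time when making statistics."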
The scenario in which a performance test case executes is used to simulate the system's actual operating conditions. Exhaustive system testing is infeasible in theory, so scenario design focuses on users' typical application scenarios. Scenarios fall roughly into two categories: function-point test scenarios and complex business test scenarios. The former mainly tests the concurrency capability of a single function point, while the latter is closer to the system's actual operation. For the user login function of the test model, function-point test scenario 1 is designed as follows:
Table 5 Test scenario 1
Concurrent users: 300 in total; start with 100 and add 10 users every second.
Operation mode: each concurrent user executes the login test case in a loop for 15 minutes.
Since business processes can overlap (for example, in the test model the database batch job and user operations run at the same time), we design a complex test scenario as shown in Table 6:
Table 6 Test scenario 2
Concurrent users: 300 in total; start with 100 and add 10 users every second.
Operation mode: the database starts its batch clearing job while 200 concurrent users log in repeatedly and another 100 users browse products at random.
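The ramp-up rule shared by scenarios 1 and 2 (start at 100 users, add 10 per second, cap at 300) can be written as a small function, which is handy for verifying that the load generator is configured as intended. This is an illustrative sketch, not tied to any particular tool:

```python
def users_at(second, initial=100, step=10, interval=1, total=300):
    """Number of active virtual users `second` seconds into the ramp-up:
    start with `initial`, add `step` every `interval` seconds, cap at `total`."""
    return min(initial + (second // interval) * step, total)

# With these parameters the 300-user ramp completes after 20 seconds.
for t in (0, 5, 10, 20, 30):
    print(t, users_at(t))
```
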

4.5 Prepare test resources
Test resources cover four aspects:
1: Hardware resources. The performance test environment should use the same hardware as the production environment; strictly speaking, if the hardware differs, the performance test report is not convincing.
2: Software resources. The target system must be deployed with software consistent with production. After go-live, monitoring software is often added, but monitoring itself consumes resources; for B/S systems in particular, frequently capturing JVM data puts noticeable extra pressure on the system.
3: Data resources. Data volume strongly affects performance, and two cases should be considered. When upgrading an already-running system, production data can be backed up into the test environment. For a system launched for the first time, business data is empty and test data must be created; as for how much, estimate the business data volume expected after, say, two years of operation, because performance testing needs to be forward-looking.
4: Human resources. Performance testing uncovers many problems, and locating and solving them requires specialized personnel, including commercial software vendors. Throughout testing, close communication with the development team is key to the project going smoothly.
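For the first-launch case above, synthetic test data can be created with a short script. The sketch below is hypothetical: the order fields and pool sizes are invented for illustration and would need to match the real schema and the projected two-year data volume.

```python
import csv
import io
import random

def generate_orders(n, user_pool=1000, product_pool=500, seed=42):
    """Yield n synthetic order rows. Field names and pool sizes are
    illustrative, not taken from the article's test model schema."""
    rng = random.Random(seed)  # fixed seed so data loads are reproducible
    for order_id in range(1, n + 1):
        yield {
            "order_id": order_id,
            "user_id": rng.randint(1, user_pool),
            "product_id": rng.randint(1, product_pool),
            "quantity": rng.randint(1, 5),
        }

# Write a small sample to CSV (in memory here; a file in practice).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["order_id", "user_id", "product_id", "quantity"])
writer.writeheader()
rows = list(generate_orders(1000))
writer.writerows(rows)
print(f"generated {len(rows)} order rows")
```

A fixed random seed makes each data load reproducible, so benchmark runs start from an identical data state.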
4.6 Arranging the Test Plan
When the test resources and executable code are ready, a test plan needs to be formulated and implemented in stages. A simple example is shown in Table 7.
Table 7 Test plan
Test item | Description | Test type / test objective (brief)
Benchmark test | Collect the system's baseline performance indicators | Stress test; obtain benchmark data.
Development and debugging | Fix defects found in the performance tests | -
Function point test | Performance test of each business function point | Stress test; obtain data such as the system's maximum concurrency.
Complex business test | Performance test of complex business scenarios | Capacity test; obtain data such as the optimal number of users.
Development and debugging | Fix defects found in the performance tests | -
Long-term load test | Run the system under a certain load for a long time | Fatigue test; find memory leaks, etc.
The test plan in Table 7 is explained as follows:
1: The start and end times of each test item are omitted from Table 7, and development and debugging work is included. This is because during execution, when performance problems are encountered, development needs time to fix them, and performance testing may have to be suspended.
2: Function point tests run first, and complex business tests follow only after they pass. A single function point is relatively simple and its business logic is not complex, so problems such as resource contention and data locks are not easily exposed there.
3: The benchmark test provides the comparison baseline for future system upgrades. For example, after a hardware upgrade, does the same test scenario produce better results? Does introducing a new technology or a version upgrade affect performance positively or negatively? Both questions can be answered by comparison with the benchmark.
4: Each test stage has its own test objectives and test types, which need to be set according to the earlier test planning.

5 Performance test execution
  The following matters need attention while executing the performance tests:
  1: Save the data produced during test runs as evidence for the test results.
  2: Report any problem as soon as possible; a modification to the system may force tests to be redone.
  3: After the benchmark and function point tests, clean up the test environment before subsequent tests, because the system may be caching data.
  4: Test the business scenarios in priority order.
6 Test result analysis
After each test run you obtain a test result. Don't rush into the next test task; briefly analyze the result and check whether the data is logical. For example, if for the same test scenario you increase the number of concurrent users (common in stress testing) and find that response time gets shorter, that is not logical. When all test tasks are complete, analyze the data and submit the test report, paying attention to the following aspects:
  1: Issue different test reports for different roles; for technical staff, include more performance data and analysis.
  2: Make some forward-looking forecasts: combine this test's resource usage and indicator data to analyze where the bottlenecks to scaling the system lie.
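The logic check described earlier in this section (average response time should not shrink as concurrency grows) can be automated with a small helper. The result data here is invented for illustration:

```python
def looks_illogical(results):
    """Flag scenario pairs where more concurrent users produced a *shorter*
    average response time, which is usually a sign of a broken test
    (caching effects, failed requests counted as fast responses) rather
    than a faster system. `results` maps concurrency level -> average
    response time in seconds."""
    suspicious = []
    levels = sorted(results)
    for lower, higher in zip(levels, levels[1:]):
        if results[higher] < results[lower]:
            suspicious.append((lower, higher))
    return suspicious

# Hypothetical stress-test results: 500 users "faster" than 300 is suspect.
data = {100: 1.2, 300: 2.8, 500: 1.9}
print(looks_illogical(data))  # -> [(300, 500)]
```

Running such a check right after each scenario, rather than at the end, catches broken runs before they waste the remaining test schedule.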
7 Summary
  Performance testing is not a one-off effort. As the system is continuously upgraded, performance testing needs ongoing attention as a routine practice. Performance test leaders also need to stay focused on the business and adjust the test strategy in a timely manner.
