Is performance testing hard? A walkthrough of the core process and concepts of performance testing

In many testers' minds, performance testing means using a tool such as LoadRunner or JMeter to put the system under load and then collecting the results. But think about it more carefully: who is the test for? What is its purpose? Which indicators should be monitored? How should the results be analyzed (what kind of result counts as a pass)? And so on.

So driving load with a tool is only the most basic step of performance testing. Let's walk through the general performance testing process:

1. Business learning: understand the system's functionality by reading the requirements documents, PRD, and other related material, and by operating the system manually; 

2. Requirements analysis: analyze the system's non-functional requirements, delineate the scope of the performance test, and understand the system's performance targets; 

3. Work estimation: break the work down, estimate the effort, and plan the resource investment (how much hardware, how many people, and how much time the testing will require); 

4. Model design: this can be understood as designing the test scenarios, whether single-business scenarios or mixed scenarios; 

5. Plan writing: the test plan should clearly state the test scope, staffing, schedule, work items, risk assessment, risk-mitigation strategy, etc.; 

6. Test environment preparation: set up the servers (deploy the system under test) and the load generators (the machines that run the load tool and produce the load);

7. Test data preparation: prepare data according to the test scenarios (the design model).

There are two reasons for this:

a) Some data is a prerequisite for the system to run at all (for example, a login pressure test needs some pre-registered accounts);

b) The volume of data affects the results (querying databases of different sizes takes different amounts of time); how much data to prepare must be designed around the project's actual situation (see the seeding sketch below);
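As a concrete illustration of point a), here is a minimal sketch that seeds pre-registered accounts before a login pressure test. It assumes a hypothetical SQLite database `app.db` with a `users` table; a real project would seed whatever store the system under test actually uses:

```python
import sqlite3

# Seed 1,000 pre-registered accounts so a login pressure test has data
# to work with. Database file, table name, and schema are hypothetical.
conn = sqlite3.connect("app.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS users (username TEXT PRIMARY KEY, password TEXT)"
)
conn.executemany(
    "INSERT OR IGNORE INTO users VALUES (?, ?)",
    [(f"perf_user_{i:05d}", "Passw0rd!") for i in range(1000)],
)
conn.commit()
conn.close()
```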

8. Script development: develop test scripts (recorded or hand-written) according to the test scenarios and test cases (see the sketch below); 
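The article mentions LoadRunner and JMeter; as a tool-neutral illustration, here is a minimal hand-written load script in Python using Locust, an open-source load tool. The `/login` endpoint, host, and credentials are hypothetical placeholders:

```python
from locust import HttpUser, task, between

class LoginUser(HttpUser):
    # Each virtual user pauses 1-3 seconds between operations (think time).
    wait_time = between(1, 3)

    @task
    def login(self):
        # Hypothetical endpoint and credentials; replace with the real
        # login API of the system under test.
        self.client.post("/login", json={"username": "perf_user_00001",
                                         "password": "Passw0rd!"})
```

Run with something like `locust -f login_test.py --host http://localhost:8080` and ramp up the number of virtual users from the Locust UI.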

9. Test execution: run the test; 

10. Defect management: track the defects found in the testing process; 

11. Performance analysis: analyze the performance test results to see whether the expected targets are met; if not, find out why; 

12. Performance tuning: According to the analysis in the previous step, try to optimize the system; 

13. Test report: summarize the test work, report test results, problems found, etc. 

14. Review: review the performance report, confirm the problems, and assess the risk of going live. Sometimes the results are not ideal, yet the system is launched anyway for time and cost reasons and improved in later rapid iterations.

  

Performance test deliverables:

  • Test plan

  • Test scripts

  • Test scheme (design)

  • Test report

1. Success factors of performance testing

Performance testing is hard to get into. It is a discipline that combines skills from testing, development, operations, requirements research, architecture, and coordination and management. Mastering a testing tool is only the most basic step.

The main difficulties in performance testing are:

  • Requirements analysis

  • Scenario design

  • Performance diagnosis and tuning

  • Environment setup and simulation

 

2. Terminology commonly used in performance testing

(1) Load: simulating the pressure that user operations put on the server, for example simulating 100 users logging in at the same time; 

(2) Performance test: checking whether, under specified load conditions, the system's performance indicators (response time, throughput, etc.) meet the requirements; 

(3) Load test:

With the hardware environment fixed, keep increasing the load (the number of virtual users) to find the maximum number of concurrent users the system can support while still meeting its performance targets. Simply put, a load test quantifies the system's capacity, finds the inflection point of system performance, and informs production-environment capacity planning (a step-up sketch follows the indicator list below).

The performance indicators mentioned here include:

  • TPS (transactions per second)

  • RT (average transaction response time)

  • CPU utilization

  • Memory utilization, etc.
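A rough sketch of the step-up idea: increase the number of virtual users in stages and record TPS and average response time at each stage; the inflection point is where TPS stops growing while RT keeps climbing. The URL is a hypothetical placeholder, and a real load test would use a proper tool rather than this toy loop:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/api/health"  # hypothetical endpoint

def one_request():
    # Time a single request; failures count as non-transactions.
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=10) as resp:
            resp.read()
        ok = True
    except OSError:
        ok = False
    return ok, time.perf_counter() - start

def run_step(users, requests_per_user=20):
    # Run one load stage with `users` concurrent workers, return (TPS, ART).
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: one_request(),
                                range(users * requests_per_user)))
    elapsed = time.perf_counter() - t0
    times = [rt for ok, rt in results if ok]
    tps = len(times) / elapsed
    art = sum(times) / len(times) if times else float("nan")
    return tps, art

# Step the load up and watch where TPS flattens while RT keeps rising.
for users in (5, 10, 20, 40, 80):
    tps, art = run_step(users)
    print(f"{users:>3} users  TPS={tps:7.1f}  ART={art * 1000:7.1f} ms")
```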

 

(4) Stress/strength test:

In a given software and hardware environment, drive the server's resources (the emphasis is on server hardware resources) to their limit with a high load, and check whether the system runs stably over a long period in that limit state. The indicators used to judge stability include TPS, RT, CPU utilization, memory utilization, etc.;

(5) Stability test:

In a given software and hardware environment, run a certain load for a long time to determine whether the system stays stable while still meeting its performance targets. The difference from the stress/strength test above is that the load is not pushed to the limit state; typically 1.5-2 times the target load is used; 

(6) TPS: the number of transactions completed per second

A transaction is a set of operations; what a transaction contains differs from scenario to scenario. We will return to this concept with examples later.

(7) RT: response time

How long one transaction takes to complete. To make the value more representative, the average is usually taken, giving ART (average response time); in everyday usage, RT generally refers to the average response time (see the sketch below).
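To make the two definitions above concrete, here is a tiny sketch that computes TPS and ART from the start/end timestamps of completed transactions (the timestamps are made-up sample data):

```python
# (start, end) timestamps in seconds of completed transactions - sample data.
transactions = [(0.0, 0.8), (0.2, 1.1), (0.5, 1.2), (1.0, 2.4), (1.5, 2.0)]

# Wall-clock span covered by the run.
elapsed = (max(end for _, end in transactions)
           - min(start for start, _ in transactions))
tps = len(transactions) / elapsed                    # transactions per second
art = sum(end - start for start, end in transactions) / len(transactions)

print(f"TPS = {tps:.2f}, ART = {art * 1000:.0f} ms")  # TPS = 2.08, ART = 860 ms
```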

(8) PV (Page Views): the number of page visits per second

This figure helps us estimate, on average, how many users visit a page per second;

(9) Vuser (Virtual User): virtual user

Used to imitate the operations of real users;

(10) Concurrency:

  • Narrow-sense concurrency: virtual users perform the same operation at the same moment. This generally targets the same type of business, or all users perform exactly the same action; the purpose is to test how the database and the application handle concurrent operations.
  • Broad-sense concurrency: virtual users all operate on the system, but their operations may differ. Narrow-sense concurrency mostly fits single-business test scenarios, while broad-sense concurrency mostly fits mixed and stability test scenarios (a sketch of both follows this list);
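A minimal sketch of the difference, using threads as stand-in virtual users (the two "API calls" are simulated with sleeps):

```python
import random
import threading
import time

N_USERS = 10

def login():
    time.sleep(random.uniform(0.01, 0.05))   # stand-in for the login API call

def post_message():
    time.sleep(random.uniform(0.02, 0.08))   # stand-in for the posting API call

def run(user_fn):
    threads = [threading.Thread(target=user_fn) for _ in range(N_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# Narrow-sense concurrency: every virtual user blocks at the barrier, then
# fires the SAME operation at (nearly) the same instant.
barrier = threading.Barrier(N_USERS)

def narrow_user():
    barrier.wait()
    login()

# Broad-sense concurrency: users are active at the same time, but each one
# picks its own operation.
def generalized_user():
    random.choice([login, post_message])()

run(narrow_user)
run(generalized_user)
```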

(11) Scenario: the process of simulating a certain real-user operation is called a scenario. Take a forum system as an example:

  • Single scenario: user login; this login action alone is a scenario;
  • Mixed scenario: posting a message may include several actions: log in, open the posting page, enter the text, choose a board, and submit the post. Together these actions form a mixed posting scenario;

 

(12) Think time: a real user pauses between operations; in the script, the think time is the corresponding interval between two requests (a minimal sketch follows).
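In a hand-written script this is usually just a randomized pause between two requests (the Locust sketch earlier expresses the same thing with `wait_time = between(1, 3)`); a minimal version:

```python
import random
import time

def think():
    # Call between two scripted requests to mimic a real user's pause.
    time.sleep(random.uniform(1.0, 3.0))
```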

3. Performance test passing standard

Broadly speaking, a performance test passes when, under the designed load and scenarios, indicators such as TPS and response time meet the agreed targets, server resources (CPU, memory, etc.) stay within acceptable limits, and the system remains stable for the required duration.
