How to Test Well (5): Performance Testing (PT)

1. Introduction to performance testing:

Performance Testing (PT) is a software testing method used to evaluate a system's performance and responsiveness under different load conditions. It focuses mainly on throughput, response time, resource utilization, and stability, in order to determine whether the system meets its performance requirements and to identify bottlenecks and opportunities for improvement.

2. Usage scenarios:

Performance testing is suitable for the following situations:

  • For the web and mobile clients of an online shopping system, performance testing helps evaluate how the system behaves under concurrent user access, high load, and peak traffic.
  • It can identify system bottlenecks and their root causes, such as database performance, network latency, or exhausted server resources.
  • It helps determine the scalability and stability of the system, ensuring that good performance is maintained under real usage conditions.

3. Common technologies and tools:

When performance testing an online shopping system, the following common technologies and tools can be used:

  • Load testing tools: Apache JMeter, LoadRunner, etc., used to simulate concurrent user access and generate load (a minimal load-test sketch follows this list).
  • Performance monitoring tools: New Relic, AppDynamics, etc., used to monitor system performance indicators and resource utilization.
  • Stress testing tools: Gatling, Tsung, etc., used to drive the system under high-load and peak conditions.
  • Performance analysis tools: VisualVM, Grafana, etc., used to analyze performance bottlenecks and inform optimization.
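
As a concrete illustration, here is a minimal load-test script using Locust, a Python-based load testing tool comparable to JMeter and Gatling (not listed above; chosen here because its test scripts are plain Python). The host and the /products and /cart endpoints are illustrative placeholders, not paths from a real system:

```python
# Minimal Locust load test for a hypothetical online shopping system.
# Host and endpoints are illustrative placeholders.
from locust import HttpUser, task, between

class ShopUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3 s between tasks

    @task(3)  # weight 3: browsing happens three times as often as viewing the cart
    def browse_products(self):
        self.client.get("/products")

    @task(1)
    def view_cart(self):
        self.client.get("/cart")

# Example run (100 concurrent users, spawned at 10/s, for 5 minutes):
#   locust -f locustfile.py --host https://shop.example.com \
#          --users 100 --spawn-rate 10 --headless --run-time 5m
```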

4. Common performance indicator definitions:

  • Throughput: The number of transactions or requests the system processes per unit of time (a sketch computing several of these indicators follows this list).
  • Response Time: The time between a user initiating a request and the system returning a response.
  • Concurrent Users: The number of users accessing the system at the same time.
  • Error Rate: The percentage of erroneous requests or transactions over a given period of time.
  • Resource Utilization: How heavily the system uses computing resources such as CPU, memory, and disk.
  • Load Balancing: The ability to distribute load evenly across multiple servers.
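
To make these definitions concrete, the sketch below derives throughput, average and 95th-percentile response time, and error rate from a list of per-request records. The record format and the sample numbers are assumptions for illustration only:

```python
# Post-processing sketch: derive the indicators above from per-request
# records of (latency in seconds, success flag). All inputs are made up.
from statistics import mean

def summarize(records, duration_s):
    latencies = sorted(lat for lat, _ in records)
    errors = sum(1 for _, ok in records if not ok)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]  # nearest-rank style p95
    return {
        "throughput_tps": len(records) / duration_s,   # transactions per second
        "avg_response_ms": mean(latencies) * 1000,
        "p95_response_ms": p95 * 1000,
        "error_rate_pct": 100.0 * errors / len(records),
    }

# Three sample requests observed over a 1-second window:
print(summarize([(0.42, True), (0.55, True), (0.61, False)], duration_s=1.0))
```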

5. Specific implementation methods:

Here is how performance testing is generally implemented:

  • Requirements analysis: Determine the goals, performance requirements and business scenarios of performance testing.
  • Test plan: Develop a performance test plan, including test scope, test scenarios, test tools, etc.
  • Scenario design: Design representative performance test scenarios based on business scenarios and user behavior.
  • Test environment preparation: Configure the appropriate test environment, including servers, networks, databases, etc.
  • Performance test execution: Use the chosen tools to execute the designed scenarios, record performance indicators, and monitor system behavior (a minimal execution sketch follows this list).
  • Performance analysis: Analyze performance test results to identify performance bottlenecks and optimization opportunities.
  • Performance optimization: Carry out system tuning and performance optimization according to the discovered performance problems.
  • Performance report: Generate performance test reports, including test results, performance indicators, optimization suggestions, etc.
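
For the execution step, one common approach is to drive Apache JMeter in non-GUI mode from a script. The sketch below assumes JMeter is installed and on the PATH, and that a test plan file (the hypothetical shop_load_test.jmx) was produced during scenario design:

```python
# Run an Apache JMeter test plan in non-GUI mode and generate the
# HTML report. Assumes jmeter is on the PATH and ./report does not exist yet.
import subprocess

subprocess.run(
    [
        "jmeter",
        "-n",                         # non-GUI mode (recommended for load tests)
        "-t", "shop_load_test.jmx",   # hypothetical test plan from scenario design
        "-l", "results.jtl",          # raw per-sample results for later analysis
        "-e", "-o", "report",         # generate the HTML dashboard into ./report
    ],
    check=True,  # raise if JMeter exits with an error
)
```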

6. Test case designs for common performance indicators, with examples:

The following are sample performance test cases designed for common performance metrics:

6.1. Throughput test case:

Test case name: Throughput test
Test goal: Evaluate the number of transactions the system processes per unit of time.
Test prerequisites: The system is deployed and ready for performance testing.
Test steps:

  1. Use load testing tools to simulate a specified number of concurrent users accessing the system.
  2. Record the number of transactions processed by the system under different load conditions.
  3. Calculate the system's throughput, i.e., the number of transactions processed per second.

Expected results: The system's throughput should meet the stated performance and business requirements.
Actual results: The system's throughput is 50 transactions per second.
Conclusion: The system's throughput meets the performance requirements and satisfies business needs.
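
A minimal self-built version of this test might look like the sketch below: a fixed pool of workers hits one endpoint for a fixed window, then transactions per second is computed. The URL, worker count, and duration are illustrative assumptions; a real test would normally use JMeter or Gatling scenarios instead:

```python
# Throughput sketch: 50 concurrent workers request one endpoint for 60 s,
# then transactions per second is computed. URL is a placeholder.
import time
import requests
from concurrent.futures import ThreadPoolExecutor

URL = "https://shop.example.com/products"  # placeholder endpoint
WORKERS, DURATION_S = 50, 60

def worker(deadline):
    done = 0
    while time.time() < deadline:
        if requests.get(URL, timeout=5).ok:  # count only successful transactions
            done += 1
    return done

deadline = time.time() + DURATION_S
with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    totals = list(pool.map(worker, [deadline] * WORKERS))

print(f"throughput: {sum(totals) / DURATION_S:.1f} transactions/second")
```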

6.2. Response Time test case:

Test case name: Response time test
Test goal: Evaluate the time it takes for the system to return a response.
Test prerequisites: The system is deployed and ready for performance testing.
Test steps:

  1. Use load testing tools to simulate concurrent users making requests to the system.
  2. Record the initiation time of each request and the time the system returns a response.
  3. Calculate the average response time of the system.

Expected results: The system's response time should meet the performance requirements and remain within an acceptable range.
Actual results: The system's average response time is 500 milliseconds.
Conclusion: The system's response time meets the performance requirements and remains within an acceptable range.
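
Steps 2 and 3 can be sketched as follows; the requests library records the time-to-response on each Response object. The URL and request count are placeholders, and a sequential loop like this only approximates what a load tool measures under real concurrency:

```python
# Response-time sketch: issue 100 requests, record each latency, report
# the average. Placeholder URL; runs sequentially for simplicity.
import requests
from statistics import mean

URL = "https://shop.example.com/products"  # placeholder endpoint

latencies_ms = []
for _ in range(100):
    resp = requests.get(URL, timeout=5)
    latencies_ms.append(resp.elapsed.total_seconds() * 1000)  # time to response

print(f"average response time: {mean(latencies_ms):.0f} ms")
print(f"slowest request: {max(latencies_ms):.0f} ms")
```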

6.3. Concurrent Users test case:

Test case name: Number of concurrent users test
Test goal: Evaluate the maximum number of concurrent users the system can handle.
Test prerequisites: The system is deployed and ready for performance testing.
Test steps:

  1. Gradually increase the number of concurrent users and use load testing tools to simulate users accessing the system.
  2. Record the performance indicators of the system under different concurrent user conditions, such as response time, throughput, etc.
  3. When a performance bottleneck or the load limit is reached, record the maximum number of concurrent users the system can handle.

Expected results: The number of concurrent users the system can handle should meet the stated performance and business requirements.
Actual results: The maximum number of concurrent users the system can handle is 200.
Conclusion: The system's concurrent-user capacity meets the performance requirements and satisfies business needs.
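
The ramp-up in step 1 can be sketched as a step load: concurrency is raised in stages until average latency breaches an assumed service-level threshold. The URL, step sizes, and the 1000 ms threshold are all illustrative assumptions:

```python
# Concurrent-users sketch: raise concurrency in steps and stop when the
# average response time exceeds an assumed 1000 ms SLA. All values made up.
import time
import requests
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

URL = "https://shop.example.com/products"  # placeholder endpoint
SLA_MS = 1000                              # assumed acceptable average latency

def avg_latency_ms(n_users, reqs_per_user=10):
    def one_user(_):
        samples = []
        for _ in range(reqs_per_user):
            t0 = time.time()
            requests.get(URL, timeout=5)
            samples.append((time.time() - t0) * 1000)
        return samples
    with ThreadPoolExecutor(max_workers=n_users) as pool:
        return mean(t for user in pool.map(one_user, range(n_users)) for t in user)

for users in (25, 50, 100, 200, 400):      # step load
    avg = avg_latency_ms(users)
    print(f"{users} users -> avg {avg:.0f} ms")
    if avg > SLA_MS:
        print(f"SLA breached; capacity lies below {users} concurrent users")
        break
```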

6.4. Error Rate test case:

Test case name: Error rate test
Test goal: Evaluate the percentage of erroneous requests or transactions that occur in the system within a given period of time.
Test prerequisites: The system is deployed and ready for performance testing.
Test steps:

  1. Use load testing tools to simulate user access to the system, including both normal requests and requests expected to fail.
  2. Record the number of erroneous requests or transactions in the system within a certain period of time.
  3. Calculate the system's error rate, i.e., the percentage of erroneous requests or transactions.

Expected results: The system's error rate should meet the performance requirements and remain within an acceptable range.
Actual results: The system's error rate is 10%.
Conclusion: The system's error rate meets the performance requirements and remains within an acceptable range.
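
Counting errors as in step 3 can be sketched like this, treating HTTP status codes of 400 and above, plus timeouts and connection failures, as erroneous requests. The URL and request count are placeholders:

```python
# Error-rate sketch: issue 200 requests and count failures (HTTP >= 400,
# timeouts, connection errors). Placeholder URL.
import requests

URL = "https://shop.example.com/checkout"  # placeholder endpoint
TOTAL = 200
errors = 0

for _ in range(TOTAL):
    try:
        if requests.get(URL, timeout=5).status_code >= 400:
            errors += 1
    except requests.RequestException:  # timeout, connection reset, DNS failure...
        errors += 1

print(f"error rate: {100.0 * errors / TOTAL:.1f}%")
```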

6.5. Resource Utilization test case:

Test case name: Resource utilization test
Test goal: Evaluate the system's resource utilization under load conditions.
Test prerequisites: The system is deployed and ready for performance testing.
Test steps:

  1. Monitor system resource usage, including CPU utilization, memory utilization, disk utilization, etc.
  2. Use load testing tools to simulate user access to the system and record the system's resource utilization under different load conditions.
  3. Analyze whether the system's resource utilization is within an acceptable range.

Expected results: The system's resource utilization should meet the performance requirements and remain within an acceptable range.
Actual results: The system's CPU utilization is 80%, memory utilization is 60%, and disk utilization is 40%.
Conclusion: The system's resource utilization meets the performance requirements and remains within an acceptable range.
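
The monitoring in step 1 can be sketched with the psutil library, sampling CPU, memory, and disk utilization while the load test runs elsewhere. The sampling interval, sample count, and disk path are assumptions, and note that disk_usage reports space used rather than I/O load:

```python
# Resource-utilization sketch: sample CPU, memory, and disk usage every
# 5 seconds while a load test runs. Requires the psutil package.
import psutil

for _ in range(10):                        # roughly 50 s of samples
    cpu = psutil.cpu_percent(interval=5)   # CPU % averaged over the 5 s window
    mem = psutil.virtual_memory().percent  # RAM in use
    disk = psutil.disk_usage("/").percent  # space used on the root filesystem
    print(f"cpu={cpu:.0f}%  mem={mem:.0f}%  disk={disk:.0f}%")
```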

6.6. Load Balancing test case:

Test case name: Load balancing test
Test goal: Evaluate whether the load balancer can distribute requests to backend servers evenly under different load conditions.
Test prerequisites: The load balancer is deployed and ready for performance testing.
Test steps:

  1. Use load testing tools to simulate concurrent users making requests to the system.
  2. Monitor the load balancer's distribution of requests to backend servers, recording the number of requests handled by each server.
  3. Analyze the load of each server and evaluate the performance of the load balancer.

Expected results: The load balancer should distribute requests evenly across the backend servers, keeping the load balanced among them.
Actual results: The load balancer distributes requests evenly across the backend servers, and the load on each server is roughly balanced.
Conclusion: The load balancer performs as expected and achieves even load distribution.
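
Steps 2 and 3 can be sketched by tallying which backend served each request. This assumes the backends identify themselves via a response header (the hypothetical X-Backend-Server below); in real setups the distribution is usually read from balancer or server logs instead:

```python
# Load-balancing sketch: send 300 requests through the balancer and tally
# which backend answered, using a hypothetical identifying header.
from collections import Counter
import requests

URL = "https://shop.example.com/"          # placeholder, fronted by the balancer
counts = Counter()

for _ in range(300):
    resp = requests.get(URL, timeout=5)
    counts[resp.headers.get("X-Backend-Server", "unknown")] += 1

for server, n in counts.most_common():
    print(f"{server}: {n} requests ({100 * n / 300:.1f}%)")
```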
