Overall understanding of performance testing

Performance test classifications

Load testing: gradually increase load until the system reaches a predefined performance threshold. Thresholds are usually stated as upper bounds, e.g. CPU usage less than or equal to 80%.

Stress testing: gradually increase load until some system resource saturates or even fails. Put bluntly: find out what it takes to break the system.

Concurrency testing: multiple virtual users access the same module and the same function at the same moment. The usual technique is to set a rendezvous point so all virtual users fire their requests simultaneously.

Capacity testing: usually aimed at the database layer. The goal is to find the database's optimal capacity, also called capacity estimation. The method is to observe the database's processing ability, i.e. its performance indicators, under a fixed number of concurrent users while varying the amount of base data.

Reliability testing: also called stability testing or fatigue (soak) testing. It checks whether the system remains stable after running under high load for a long time. For example, with CPU usage above 80%, is the system still stable after running for 24 hours? (This is the easiest way to find memory leaks.)

Exception testing: also called failure testing. It targets the system architecture: for example, in a load-balanced architecture, test how the system responds when a node goes down or otherwise fails.

Performance testing workflow

  1. Requirements analysis: Be familiar with what the project mainly does, how users operate it, and what are the main processes.

  2. Performance indicator definition: determine what indicator values satisfy current user needs.

  3. Script development: writing code or using tools.

  4. Scenario design: debug the scripts and design scenarios based on the requirements analysis, i.e. the main flows users actually exercise.

  5. Monitoring deployment: Deploy monitoring tools to see the performance status of the entire server and database.

  6. Test execution: first run a benchmark test (it can expose logic problems in a multi-user system before running under full concurrency), then execute the formal test.

  7. Performance analysis: Perform performance analysis on monitoring results.

  8. Performance tuning: Discover performance problems and perform tuning.

  9. Test report: write up the results.
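The benchmark-then-formal execution step can be sketched as a minimal load-test harness. This is a toy sketch, assuming Python with only the standard library; `fake_request` is a hypothetical stand-in for a real HTTP call:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Hypothetical stand-in for a real HTTP call; replace with an actual request."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate ~10 ms of server work
    return time.perf_counter() - start

def run_load(concurrency, total_requests):
    """Fire total_requests through a fixed worker pool and collect response times."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(lambda _: fake_request(), range(total_requests)))
    return {
        "avg": statistics.mean(times),
        "p95": sorted(times)[int(len(times) * 0.95) - 1],
        "max": max(times),
    }

baseline = run_load(concurrency=1, total_requests=20)   # benchmark test
loaded = run_load(concurrency=10, total_requests=100)   # formal run
print(baseline, loaded)
```

Comparing `baseline` against `loaded` shows whether response times degrade once concurrency is applied, which is exactly the point of running the benchmark first.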

Common system application layered architecture

Display layer: web, android, iOS, H5…

Logical control layer: API…

Data storage layer: mysql, mongodb, redis…

Performance testing requires a layered mindset: each layer can be tested in isolation. For example, to test the data storage layer by itself, take the developed code, strip out the SQL statements, turn them into a test script, and monitor the database while running it. If a problem appears there, it lies in the database itself.
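Testing the storage layer in isolation can look like the following sketch, where an in-memory SQLite database stands in for the real MySQL instance and the table and query are illustrative:

```python
import sqlite3
import time

def bench_sql(conn, sql, params=(), repeat=100):
    """Run one stripped-out SQL statement repeatedly and time it in isolation."""
    start = time.perf_counter()
    for _ in range(repeat):
        conn.execute(sql, params).fetchall()
    elapsed = time.perf_counter() - start
    return elapsed / repeat  # average seconds per execution

# In-memory SQLite stands in for the real database under test.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i * 1.5,) for i in range(1000)])

avg = bench_sql(conn, "SELECT COUNT(*) FROM orders WHERE amount > ?", (500,))
print(f"avg per query: {avg * 1000:.3f} ms")
```

Because no display or logic layer is involved, any slowness measured here is attributable to the database (schema, indexes, query plan) alone.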

Performance test indicator definition

Transaction: one or more requests initiated by the client (together forming a complete operation), ending when the client receives the response returned by the server.

Example: bank transfer

Bank of China transfers the money to Agricultural Bank of China. Bank of China receives the deduction request and deducts the money. Agricultural Bank of China receives the collection request and returns to Bank of China saying that the money has been received. Bank of China updates the status.

This example contains multiple requests, which together form a transfer transaction. If it is interrupted in the middle, it is not a complete transaction.
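The transfer above can be modeled as a single transaction composed of several requests, complete only if every request succeeds. A minimal sketch; the step names and `request` callables are purely illustrative:

```python
def transfer(steps):
    """A transaction is complete only if every constituent request succeeds."""
    completed = []
    for name, request in steps:
        if not request():
            # Interrupted mid-way: this is not a complete transaction.
            return {"complete": False, "failed_at": name, "done": completed}
        completed.append(name)
    return {"complete": True, "failed_at": None, "done": completed}

ok = lambda: True  # stand-in for a request that succeeds
steps = [
    ("debit at Bank of China", ok),
    ("credit at Agricultural Bank of China", ok),
    ("acknowledge receipt", ok),
    ("update status at Bank of China", ok),
]
print(transfer(steps))  # all four requests succeed: one complete transaction
broken = steps[:1] + [("credit at Agricultural Bank of China", lambda: False)]
print(transfer(broken))  # interrupted: not a complete transaction
```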

TPS: the number of transactions the system can process per second.

Request response time: the time for the entire process from the client initiating a request to the client receiving the server's response (a single request).

Transaction response time: a transaction may consist of one or more requests. Transaction response time is measured from the user's perspective, e.g. how long a transfer takes end to end.
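Both TPS and transaction response time can be derived from per-transaction start/end timestamps. A sketch over fabricated sample data:

```python
def tps_and_response_time(transactions):
    """transactions: list of (start_s, end_s) pairs for completed transactions."""
    response_times = [end - start for start, end in transactions]
    window = (max(end for _, end in transactions)
              - min(start for start, _ in transactions))
    return {
        "tps": len(transactions) / window,           # transactions per second
        "avg_response": sum(response_times) / len(response_times),
    }

# 4 transactions completed over a 2-second window -> 2 TPS
sample = [(0.0, 0.5), (0.4, 1.0), (1.0, 1.6), (1.5, 2.0)]
print(tps_and_response_time(sample))
```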

Concurrency: strictly speaking there is no true simultaneity; whether the gap is 1 millisecond or 1 microsecond, there is always some time difference. Concurrency therefore refers to a time window, e.g. within 1 second.

Concurrency is mainly divided into the following two scenarios (for example):

  • Scenario 1: Multiple users perform the same operation on the system. For example, during Double Eleven, everyone conducts flash sales on the same product.
  • Scenario 2: Multiple users perform different operations on the system. For example, during Double Eleven, people conduct flash sales on different products, or perform other different operations, such as product browsing.

Number of concurrent users: the number of users initiating requests to the system within the same unit of time (default 1 s).

Throughput: the total amount of data transmitted over the network during a performance test. It can be estimated by hand: for an HTTP request, if you know the headers and body, you can roughly estimate the message size, say 1 MB. If the network bandwidth is 10 MB, then without considering other factors at most 10 such requests fit at once; beyond that, requests queue no matter how high the concurrency is.

Throughput rate: the amount of data transmitted over the network per unit of time.

Throughput rate = throughput / transmission time. For example, if the transmission takes 10 minutes and the total throughput is 10 MB, then throughput rate = 10×1024/(10×60) ≈ 17 KB/s.
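Checking the arithmetic directly: 10 MB transmitted in 10 minutes works out to roughly 17 KB/s.

```python
def throughput_rate_kb_s(total_mb, seconds):
    """Throughput rate = throughput / transmission time, in KB/s."""
    return total_mb * 1024 / seconds

rate = throughput_rate_kb_s(total_mb=10, seconds=10 * 60)
print(f"{rate:.2f} KB/s")  # 10 * 1024 / 600 ≈ 17.07 KB/s
```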

Hit rate (hits per second): the number of requests a user submits to the server per second, an indicator specific to web applications. You can picture it as how many clicks a user makes on the page per second, but note that a single mouse click may cause the client to issue multiple requests to the server.

Resource utilization: usage of the different system resources, such as CPU, memory, and I/O.

Requirements analysis for performance testing

Purpose of analysis

  1. Clarify test indicators: What indicators need to focus on during this test?
  2. Clarify test scenarios: which scenarios need to be covered in this test (the scenarios and main flows users care about).

How to conduct needs analysis for new systems?

  • Comparison with the same industry (for example: what performance indicators have been achieved by the same type of systems from competing products)
  • Business expectations (for example: what business volume is the goal to achieve in the next few months or in the first phase)

How to do needs analysis for old systems?

  • Compare past user behavior and user volume

Comparison of performance testing tools

Comparison of commonly used tools

  • Loadrunner
  • Jmeter
Dimension          LoadRunner     JMeter
Weight             Heavy          Light
Ease of use        Easy           Easy
Open source        No             Yes
Language support   C / Java 1.5   Java
Paid               Yes            No

Origin blog.csdn.net/u011090984/article/details/123204026