A guide to performance testing: test types, performance testing steps, best practices, and more

Recently, to cut costs, our company carried out a wave of data center migrations and consolidated part of its South American deployment architecture, with some major changes such as moving some workloads to Google Cloud and others to Alibaba Cloud. In a migration project of this scale, you have to think about how to approach performance testing. The SRE (operations) team will run performance tests on individual components, but once the whole migration is done, will overall performance still reach the level it was at before? That was not clear. Given this, as QA it is necessary to make an overall assessment of performance quality around the migration. Here are some guidelines on performance testing.

1. Overview

Performance testing is a form of software testing that focuses on how a running system performs under a particular load. It is not about finding software bugs or defects; instead, different performance test types measure the system against benchmarks and standards. Performance testing gives developers the diagnostic information they need to eliminate bottlenecks.

In this article, you'll learn about:

Performance test types

Steps for running a performance test

Performance test metrics

Software testing best practices

2. Types of software performance testing

First, it is important to understand how the software behaves on the user's system. Different types of performance testing can be applied during software testing. This is a non-functional test designed to determine the readiness of the system. (Functional testing focuses on individual functions of the software.)

Test types

Load testing

Load testing measures system performance as the workload increases. The workload may mean concurrent users or transactions. As the workload grows, the system is monitored to measure response time and its ability to keep functioning. The workload stays within the parameters of normal operating conditions.
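
To make this concrete, here is a minimal load-test sketch in Python using only the standard library and a hypothetical local endpoint (TARGET_URL is an assumption). A real load test would normally use a dedicated tool, but the idea of stepping up concurrent users while recording response times is the same.

```python
# Minimal load-test sketch (illustrative only): ramp up concurrent users
# against a hypothetical endpoint and record response times per load level.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

TARGET_URL = "http://localhost:8080/health"  # hypothetical endpoint

def one_request() -> float:
    """Send a single request and return its response time in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

def run_load_level(concurrent_users: int, requests_per_user: int = 20) -> list[float]:
    """Simulate `concurrent_users` users, each sending `requests_per_user` requests."""
    def user_session(_: int) -> list[float]:
        return [one_request() for _ in range(requests_per_user)]
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        sessions = list(pool.map(user_session, range(concurrent_users)))
    return [t for session in sessions for t in session]

if __name__ == "__main__":
    # Increase the workload step by step, staying within normal operating parameters.
    for users in (5, 10, 20, 50):
        timings = run_load_level(users)
        print(f"{users:>3} users: avg={mean(timings) * 1000:.1f} ms, "
              f"max={max(timings) * 1000:.1f} ms over {len(timings)} requests")
```

The recorded timings at each step show whether response time degrades as the load grows.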

Stress testing

Unlike load testing, stress testing (also known as fatigue testing) measures system performance outside the parameters of normal operating conditions. The software is given more users or transactions than it can handle. The goal of stress testing is to measure software stability: at what point does the software fail, and how does it recover from that failure?

Spike testing

A spike test is a stress test that evaluates software performance when the workload increases rapidly and repeatedly. For a short period of time, the workload exceeds normal expectations.

Endurance testing

Endurance testing (also known as soak testing) evaluates how the software performs under a normal workload over an extended period of time. The goal of endurance testing is to uncover system problems such as memory leaks. (A memory leak occurs when a system fails to release memory that is no longer needed. Memory leaks can degrade system performance or cause the system to fail.)
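
As an illustration, the sketch below assumes an in-process Python workload and samples traced memory at intervals over a long run; memory that climbs steadily from sample to sample is a typical leak signal. For a separate service you would instead watch its resident memory over the soak period.

```python
# Minimal soak-test sketch (illustrative only): run a steady, in-process
# workload for a long interval and sample traced memory to spot steady
# growth, which is one signal of a memory leak.
import time
import tracemalloc

def workload_iteration() -> None:
    """Placeholder for the real in-process workload under test."""
    data = [str(i) for i in range(10_000)]
    del data

def soak(duration_s: int, sample_every_s: int) -> None:
    tracemalloc.start()
    end = time.monotonic() + duration_s
    next_sample = time.monotonic() + sample_every_s
    while time.monotonic() < end:
        workload_iteration()
        if time.monotonic() >= next_sample:
            current, peak = tracemalloc.get_traced_memory()
            # A 'current' value that rises steadily across samples suggests a leak.
            print(f"traced memory: current={current / 1024:.0f} KiB, peak={peak / 1024:.0f} KiB")
            next_sample += sample_every_s
    tracemalloc.stop()

if __name__ == "__main__":
    soak(duration_s=300, sample_every_s=30)  # a real soak run would be much longer
```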

Scalability testing

Scalability testing determines whether the software handles increasing workloads effectively. This can be determined by gradually adding user load or data volume while monitoring system performance. Alternatively, the workload can be held at the same level while resources such as CPU and memory are varied.

Volume testing

Volume testing determines how efficiently the software performs with a large projected volume of data. It is also known as flood testing because the test floods the system with data.

Most Common Issues Observed in Performance Testing

During software performance testing, developers are looking for performance symptoms and problems.

Speed issues - such as slow responses and long load times - are often observed and addressed.

Other performance issues can be observed:

Bottleneck - A bottleneck occurs when the flow of data is interrupted or stopped because there is not enough capacity to handle the workload.

Poor scalability - If the software cannot handle the desired number of concurrent tasks, results may be delayed, errors may increase, or other unexpected behavior may occur that affects:

Disk usage

CPU usage

Memory leaks

Operating system limitations

Poor network configuration

Software configuration issues - settings are often not configured at a level sufficient to handle the workload.

Insufficient hardware resources - performance tests may show physical memory limitations or poor CPU performance.

3. Seven performance testing steps

A test environment, also known as a test bed, is the environment in which software, hardware, and networks are set up to run performance tests. To use a test environment for performance testing, developers can follow these seven steps:

1. Determine the test environment.

By identifying available hardware, software, network configurations, and tools, test teams can design tests early and identify performance testing challenges. Performance testing environment options include:

A subset of the production system with fewer, lower-spec servers

A subset of the production system with fewer servers of the same specification

A replica of the production system

The actual production system

2. Identify performance indicators.

In addition to identifying metrics such as response time, throughput, and constraints, determine success criteria for performance testing.
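
One lightweight way to do this is to write the success criteria down as machine-checkable thresholds. The sketch below is only an illustration; the metric names and target values are made-up examples, not recommendations.

```python
# Illustrative only: success criteria for a performance test expressed as
# thresholds, so a test run can be judged pass/fail automatically.
SUCCESS_CRITERIA = {
    "avg_response_time_ms": 200,    # example target, not a recommendation
    "p95_response_time_ms": 500,
    "error_rate_pct": 1.0,
    "min_requests_per_second": 100,
}

def evaluate(results: dict) -> bool:
    """Return True if the measured results meet every success criterion."""
    checks = [
        results["avg_response_time_ms"] <= SUCCESS_CRITERIA["avg_response_time_ms"],
        results["p95_response_time_ms"] <= SUCCESS_CRITERIA["p95_response_time_ms"],
        results["error_rate_pct"] <= SUCCESS_CRITERIA["error_rate_pct"],
        results["requests_per_second"] >= SUCCESS_CRITERIA["min_requests_per_second"],
    ]
    return all(checks)
```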

3. Plan and design performance tests.

Identify performance testing scenarios that account for user variability, test data, and target metrics. This will create one or two models.

4. Configure the test environment.

Prepare the elements of the test environment and the instrumentation needed to monitor resources.
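
For resource monitoring, a small sampler is often enough. The sketch below assumes the third-party psutil package is installed and simply records CPU and memory readings at a fixed interval while the test runs.

```python
# Illustrative only: sample host CPU and memory while a performance test runs.
# Assumes the third-party psutil package is installed (pip install psutil).
import time
import psutil

def monitor_resources(duration_s: int, interval_s: int) -> list[dict]:
    """Collect periodic CPU and memory readings for later analysis."""
    readings = []
    end = time.monotonic() + duration_s
    while time.monotonic() < end:
        readings.append({
            "cpu_pct": psutil.cpu_percent(interval=interval_s),  # blocks for interval_s
            "mem_pct": psutil.virtual_memory().percent,
        })
    return readings

if __name__ == "__main__":
    for sample in monitor_resources(duration_s=30, interval_s=5):
        print(sample)
```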

5. Implement the test design.

Develop tests.

6. Execute the test.

In addition to running performance tests, monitor and capture the generated data.

7. Analyze, report, retest.

Analyze data and share results. Run the performance test again with the same parameters and different parameters.

Which performance test metrics to measure

Metrics are needed to understand the quality and effectiveness of performance testing; improvements cannot be made to anything that is not measured. Two terms need explanation:

Measurements

- The data being collected, such as the number of seconds it takes to respond to a request.

Metrics

- Calculations that use measurements to define the quality of results, such as average response time (total response time / number of requests).

There are many ways to measure speed, scalability, and stability, but you cannot expect every round of performance testing to use all of them. The following metrics are the ones most commonly used in performance testing (a short sketch after the list shows how several of them can be computed from raw measurements):

Response time

The total time to send a request and get a response.

Wait time

Also known as average latency, it tells how long it takes to receive the first byte after sending a request.

Average load time

From a user perspective, the average time it takes to deliver each request is the main indicator of quality.

Peak response time

This is a measure of the maximum time required to complete the request. Peak response times that are significantly longer than average may indicate problematic anomalies.

Error rate

This calculation is the percentage of requests that resulted in errors compared to all requests. These errors usually occur when the load exceeds capacity.

Concurrent users

This is the most common measure of load: how many active users there are at any point in time. Also known as load size.

Requests per second

How many requests are processed per second.

Transactions passed/failed

A measure of the total number of successful or unsuccessful requests.

Throughput

Measured in kilobytes per second, throughput shows the amount of bandwidth used during the test.

CPU utilization

The time it takes for the CPU to process the request.

memory utilization

How much memory is required to process the request.
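
As referenced above, here is a small sketch showing how several of these metrics can be derived from raw measurements. It assumes each measurement is a (response time, succeeded) pair collected during a test run of known duration; the sample data at the bottom is hypothetical.

```python
# Illustrative only: compute several common performance metrics from raw
# measurements, where each sample is (response_time_seconds, succeeded).
from statistics import mean

def compute_metrics(samples: list[tuple[float, bool]], test_duration_s: float) -> dict:
    response_times = [t for t, _ in samples]
    failures = sum(1 for _, ok in samples if not ok)
    return {
        "average_response_time_s": mean(response_times),
        "peak_response_time_s": max(response_times),
        "error_rate_pct": 100.0 * failures / len(samples),
        "requests_per_second": len(samples) / test_duration_s,
    }

if __name__ == "__main__":
    # Hypothetical measurements from a 2-second test run.
    samples = [(0.12, True), (0.08, True), (0.35, False), (0.10, True)]
    print(compute_metrics(samples, test_duration_s=2.0))
```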

4. Performance Testing Best Practices

Perhaps the most important tip for performance testing is to test early and test often. A single test will not tell developers everything they need to know. Successful performance testing is a series of smaller, repeated tests:

Test early in the development process. Don't wait and rush performance testing at the end of the project.

Performance testing isn't just for finished projects. There is value in testing individual units or modules.

Run multiple performance tests to ensure consistent results and determine metric averages.

Applications often involve multiple systems, such as databases, servers, and services. Test units individually and together.

In addition to repeated testing, performance testing will be more successful by following a set of performance testing best practices:

Involve developers, IT staff, and testers in creating a performance testing environment.

Remember, real people will be using software that is being performance tested. Determine how the results will affect users, not just test environment servers.

Go beyond performance testing parameters. Develop models by planning a test environment that takes user activity into account as much as possible.

Baseline measurements provide a starting point for determining success or failure.

Performance testing is best done in a test environment that is as close to the production system as possible.

Isolate the performance testing environment from the environment used for quality assurance testing.

No single performance testing tool can do everything you need, and limited resources may further restrict your options. Research performance testing tools that fit your requirements.

Keep your test environment as consistent as possible.

Calculating the average will provide actionable metrics. There is also value in tracking outliers. These extreme measurements may reveal possible failures.

When preparing a report that shares performance test results, consider the audience. Also, include any system and software changes in the report.

Five Common Performance Testing Mistakes

When performance testing, certain errors can lead to unreliable results:

Not enough time for testing.

No developers involved.

Not using a QA system similar to the production system.

Insufficiently tuned software.

There is no troubleshooting plan.

Performance Testing Fallacies

Performance testing fallacies can arise from mistakes or from failing to follow performance testing best practices. According to Sofia Palamarchuk, these beliefs can cost significant money and resources during software development:

Performance testing is the final step in development.

As mentioned in the Performance Testing Best Practices section, anticipating and resolving performance issues should be an early part of software development. Implementing solutions early will cost less than major fixes at the end of software development.

More hardware can solve performance problems.

Adding processors, servers, or memory simply adds cost without solving the underlying problem. More efficient software runs better and avoids potential problems that can arise even when hardware is added or upgraded.

The test environment is close enough.

Performance testing in a test environment similar to production is a performance testing best practice for a reason: variations between components can significantly affect system performance. It may not be possible to test in an exact copy of the production environment, but try to match:

Hardware components

Operating system and settings

Other applications used on the system

Databases

What works now works across the board.

Be careful when extrapolating results. Do not take a small set of performance test results and assume they will stay the same when elements change. This also works in reverse: do not infer minimal performance and requirements from a load test. All assumptions should be verified through performance testing.

One performance test scenario is enough.

Not every performance problem can be detected in a single performance testing scenario, but resources limit how much testing can happen. The middle ground is a series of performance tests that target the riskiest scenarios with the greatest impact on performance. Furthermore, problems can arise outside even well-planned and well-designed performance tests, so monitoring the production environment can also detect performance issues.

Testing each part is equivalent to testing the whole system.

While it is important to isolate functionality for performance testing, individual component test results do not add up to a system-wide assessment. It may not be feasible to test every function of the system, so design performance tests to be as complete as possible given the available resources, but be aware of what has not been tested.

What works for them works for us.

Just because a given platform or configuration works without performance issues for one set of users, do not treat that as a performance test for all users. Use performance testing to ensure that your platforms and configurations work as expected.

Software developers are experienced and don't need performance testing.

Inexperience is not the only cause of performance problems. Even developers who have produced error-free software in the past can make mistakes. Many more variables come into play, especially when there are multiple concurrent users in the system.

The following is some supporting material. For anyone working in software testing, it should be a very comprehensive and complete preparation resource; it also got me through my hardest stretch, and I hope it can help you too!

Software testing interview mini program

A software testing question bank used by millions of people! The most comprehensive quiz mini program on the web; you can work through the questions on your phone, on the subway or on the bus.

The following interview question topics are covered:

1. Basic theory of software testing; 2. Web, app, and interface (API) functional testing; 3. Networking; 4. Databases; 5. Linux;

6. Web, app, and interface (API) automation; 7. Performance testing; 8. Programming basics; 9. HR interview questions; 10. Open-ended test questions; 11. Security testing; 12. Computer basics



Origin blog.csdn.net/IT_LanTian/article/details/131743849