Thoughts on automated testing: think about testing first, then about automation; don't put the cart before the horse

Foreword

This article is a translation of "Test Automation Snake Oil" by the tester James Bach, a thought-provoking piece I stumbled upon while studying exploratory testing. It answers many of our doubts about automated testing very well.

For example, it addresses whether automated testing can replace everything else, and it offers highly practical suggestions.

As the author says: think about testing first, then about automation, and don't put the cart before the horse.

Case Analysis

Let's look at a few cases first.

Case 1

A product is passed from one developer to the next.

Each new developer discovers that the product's design documents are outdated and the build process is broken. After a month of analysis, each declares the design poor and insists on rewriting most of the code. A few months later, that developer either resigns or is reassigned, and the cycle repeats.

Case 2

A product is rushed through development without the developers fully understanding the problem it is supposed to solve.

Months after delivery, a review reveals that the system costs more to run and maintain than the manual process it was meant to replace.

Case 3

Software was written to automate a set of business tasks, but the tasks changed so much that the project fell far behind schedule and the system's output was unreliable. Developers were periodically pulled off the project to help perform the tasks manually, which put the software even further behind.

Case 4

A program consisting of hundreds of nearly independent functions was put into use after only minimal testing. Just before delivery, a large proportion of those functions were disabled in the course of debugging, and it took almost a year before anyone noticed that the features were missing.

These are all my own experiences, but I bet they sound familiar. People often complain that most software projects fail, and that shouldn't be surprising: software may seem simple from the outside, but the devil is in the details, isn't it?

Seasoned software engineers know this and approach each new project with a wary eye and a skeptical mind.

Test automation is difficult, too. Look again at the four examples above: they are not from product development projects. On the contrary, each of them was a test automation project.

In my nine years of managing test teams and working with test automation (note, at some of the hippest and wealthiest companies in the software industry), the most important insight I've gained is that a test automation project is as prone to failure as any other software project.

In fact, in my experience they fail more often, mostly because most organizations don't take the same care with their testware as they do with the delivered product.

Strangely enough, almost all testing experts, practicing testers, test managers, and of course companies that sell testing tools, recommend test automation with overwhelming enthusiasm.

Well, maybe the word "strange" is not appropriate. After all, CASE tools were once all the rage, and testing tools are just another kind of CASE tool.

From object-oriented to "no-programmer" programming, unrealistic hype is nothing new to our industry.

So perhaps the low quality of publicly available information and analysis on test automation is not surprising, but simply a sign of the immaturity of the field.

Maybe we're still at the stage of appreciating the cool idea of test automation without recognizing its pitfalls.

I like automation more than any other testing task. Most full-time testers, and probably all developers, dream of pressing a giant green button and letting a lab full of loyal robots do the hard work of testing, freeing themselves for other things, like playing games. However, if we want to realize that dream, we must proceed carefully.

This article provides a critical analysis of "script-and-playback" automation for GUI application regression testing.

Anatomy of an Automated Test

Debunking the Case for Classic Automation

"Automated testing performs a series of actions without human intervention. This approach helps eliminate human error and provides faster results. Since most products require tests to be run many times, automated testing generally leads to significant labor cost savings. Typically, a company will pass the labor cost break-even point after running its automated tests two or three times."

This quote comes from a white paper on test automation published by a leading testing tool vendor, and similar statements can be found in the advertisements and documentation of most commercial regression testing tools.

At times, the documents are interspersed with impressive diagrams that boil down to this: Computers are faster, cheaper, and more reliable than humans, so choose automation.

This reasoning rests on many reckless assumptions. Let's look at eight of them:

Reckless Assumption 1: Testing Is a "Sequence of Actions"

A more useful way to think of testing is as a sequence of interactions interspersed with evaluations, only some of which are predictable, and only some of those specifiable in purely objective terms.

Many other interactions, however, are complex, ambiguous, and unstable. While it is often useful to conceptualize the general sequence of actions that make up a given test, if we try to reduce testing to a rote sequence of actions, the result will be a narrow and shallow set of tests.

Manual testing is a process that is easy to adapt to changes and can deal with complex situations. Humans are able to detect hundreds of problematic patterns and, at a glance, immediately distinguish them from harmless anomalies.

Humans may not even be aware that they are making an evaluation, whereas in a "sequence of actions" every evaluation must be explicitly planned. Testing may look like just a set of actions, but good testing is an interactive cognitive process. That's why automation is best applied to only a small portion of the tests, rather than to the bulk of the testing process.

If you set out to automate all necessary test execution, you are likely to spend a great deal of money and time creating relatively weak tests that miss many interesting bugs and find many "problems" that turn out to be nothing more than unexpected correct behavior.

Reckless Assumption 2: Testing Means Repeating the Same Actions Over and Over

Once a particular test case has been executed once without finding a bug, the chance that it will ever find a bug is slim, unless a new bug is introduced into the system.

If there is variation in test execution, however, as there usually is when tests are run manually, there is a greater chance of exposing both new and old problems. This variability is one of the big advantages manual testing has over script-and-playback testing.

When I was at Borland, the spreadsheet group tracked whether bugs were found through automated or manual testing: consistently, over 80% of bugs were found manually, despite years of investment in automation.

Their theory was that manual tests vary more each time they are run, so they are better at probing the areas of the product changed for new features.

Highly repeatable testing can actually minimize the chance of finding all the important problems, for the same reason that stepping in someone else's footprints minimizes the chance of stepping on a land mine.
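To make the contrast concrete, here is a minimal sketch in Python of a frozen script versus a varied one. `parse_amount` is a hypothetical stand-in for the product under test, and the logged seed makes any failure the variation turns up reproducible.

```python
import random

def parse_amount(text):
    """Hypothetical stand-in for the product code under test."""
    return float(text.replace(",", ""))

def test_fixed_script():
    # Script-and-playback style: the same input on every run.
    # Once it passes, re-running it rarely teaches us anything new.
    assert parse_amount("1,000.50") == 1000.50

def test_with_variation(runs=100, seed=None):
    # An approximation of manual variability: vary the input on every
    # run, and log the seed so any failure can be reproduced exactly.
    seed = seed if seed is not None else random.randrange(2**32)
    rng = random.Random(seed)
    for _ in range(runs):
        value = round(rng.uniform(0, 1_000_000), 2)
        text = f"{value:,.2f}"  # e.g. "123,456.78"
        result = parse_amount(text)
        assert abs(result - value) < 1e-9, f"seed={seed}: {text!r} -> {result}"

if __name__ == "__main__":
    test_fixed_script()
    test_with_variation()
    print("ok")
```

The fixed script walks the same path forever; the varied one walks a slightly different path each run, which is the property the Borland anecdote credits for finding most of the bugs.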

Reckless Assumption 3: We Can Do Automated Testing

Some tasks that are easy for humans are hard for computers, and perhaps the hardest part of testing to automate is interpreting the results. For GUI software, it is very difficult to automatically notice all classes of significant problems while ignoring the insignificant ones.
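A sketch of why interpretation is the hard part: even a trivial output-comparison oracle must first normalize away harmless run-to-run differences before it can flag real ones, and every normalization rule encodes a judgment a human makes at a glance. The noise patterns below are invented for illustration.

```python
import re

# Invented noise patterns: things that legitimately differ between runs
# and must not be reported as failures.
NOISE = [
    (re.compile(r"\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"), "<TIMESTAMP>"),
    (re.compile(r"pos=\(\d+,\s*\d+\)"), "pos=<XY>"),
]

def normalize(text):
    # Replace run-to-run noise with placeholders before comparing.
    for pattern, placeholder in NOISE:
        text = pattern.sub(placeholder, text)
    return text

def outputs_match(expected, actual):
    return normalize(expected) == normalize(actual)

# Harmless differences are ignored...
assert outputs_match(
    "saved 2020-01-01 10:00:00 pos=(10,20)",
    "saved 2024-06-30 17:45:12 pos=(14, 22)",
)
# ...but a real difference is still caught.
assert not outputs_match("saved 1 file", "saved 0 files")
```

Every entry in that noise list is a decision about what "insignificant" means, and the list is never finished as long as the product keeps changing.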

In a typical innovative software project, high levels of uncertainty and change exacerbate the problems of automation. In market-driven software projects, incremental development methods are often used, which almost guarantees that the product will change fundamentally.

Coupled with the fact that there is often no complete and accurate product specification, developing automation is a bit like driving through a forest with no roads or signs: it can be done, but it must be done slowly, and there is a constant risk of backtracking or getting stuck.

Even if we have a specific sequence of operations that can in principle be automated, we can only do so if we have the right tools.

However, information about tools is hard to come by, and the most critical qualities of a regression testing tool are impossible to evaluate unless we use the tool to create or review an industrial-scale test suite.

Here are some of the factors to consider when choosing a testing tool. Notice how many of them can never be assessed by reading a user manual or watching a trade-show demo:

Learnability: Can the tool be mastered in a short amount of time, and are there training courses or books to aid in this process?

Performance: Is the tool fast enough to save significant test development and execution time compared to manual testing?

Non-intrusiveness: How well does the tool simulate real users? Does the software under test behave the same with and without the automation?

Reckless Assumption 4: Automated Testing Is Faster Because It Needs No Human Intervention

All automated test suites require human intervention, if only to diagnose the results and fix broken tests, and getting a complex test suite to run smoothly can be surprisingly difficult.

Common culprits are changes in the software under test, memory issues, file system issues, network failures, and bugs in the testing tool itself.
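One small way to reduce that diagnosis burden is to have the suite classify each outcome, separating infrastructure trouble from candidate product bugs. A minimal sketch, with invented categories:

```python
def run_one(test):
    """Run a single test callable and classify the outcome, so a human
    can triage quickly: an ENV-FAIL means the suite or its
    infrastructure needs attention, not the product."""
    try:
        test()
        return "PASS"
    except AssertionError:
        return "PRODUCT-FAIL"  # candidate product bug, worth a human look
    except OSError:
        return "ENV-FAIL"      # file system / network trouble: suite maintenance
    except Exception:
        return "HARNESS-FAIL"  # a bug in the test code or the tool itself

# Example: a test that trips over a missing file is flagged as an
# environment problem instead of being reported as a product failure.
def flaky_test():
    open("/no/such/config.ini")

print(run_one(flaky_test))  # -> ENV-FAIL
```

This doesn't remove the human from the loop; it only tells the human which kind of work each red result represents.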

Reckless Assumption 5: Automation Reduces Human Error

It does reduce some errors, such as the ones humans make when asked to execute a long list of tests by hand.

But other errors are magnified. Any bug that goes unnoticed when the master comparison files are generated will be systematically missed every time the suite is executed, and a single oversight during debugging can accidentally invalidate hundreds of tests.

The dBase team at Borland once found that about 3,000 tests in their suite were hardcoded to report success regardless of what actually happened in the product. To avoid such problems, the automation itself should be tested or reviewed regularly.
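One way to act on that advice is to point the suite at a build with a deliberately planted bug and require that it notice. A toy sketch; the "builds" and checks below are invented stand-ins:

```python
def check_doubling(build):
    assert build(2) == 4

def check_zero(build):
    assert build(0) == 0

SUITE = [check_doubling, check_zero]

def failures_reported(build):
    # Run every check against the given build and count reported failures.
    count = 0
    for check in SUITE:
        try:
            check(build)
        except AssertionError:
            count += 1
    return count

def bad_build(x):
    return x * 2 + 1  # a deliberately planted off-by-one bug

if __name__ == "__main__":
    # The meta-test: a suite that reports zero failures against a
    # known-bad build is itself broken. That is exactly the dBase
    # situation, where ~3,000 tests were hardcoded to report success.
    assert failures_reported(bad_build) > 0, "suite cannot see a planted bug"
    print("automation sanity check passed")
```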

By contrast, comparable failures in a manual testing strategy are much easier to spot using basic test management documents, reports, and practices.

Reckless Assumption 6: We Can Quantify the Costs and Benefits of Manual and Automated Testing

The truth is that manual testing and automated testing are really two different processes, not two ways of executing the same process. Their dynamics are different, and so are the bugs they tend to reveal.

So it doesn't make sense to compare them directly in terms of cost or number of bugs found.

Furthermore, the only meaningful way to evaluate them is in the context of a series of real software projects. This is why I recommend test automation as one part of the multifaceted pursuit of a good testing strategy, rather than as an activity that dominates the process or stands apart from it.

Reckless Assumption 7: Automation Brings "Significant Labor Cost Savings"

"Typically, a company will pass the labor cost break-even point after running its automated tests two or three times." This estimate may come from field data, or it may come from the imagination of a marketing expert. Either way, it is nonsense.

The cost of automated testing has several components: the cost of developing the automation, the cost of operating the automated tests, the cost of maintaining the automation as the product changes, and the cost of the other new tasks the automation makes necessary.

This has to be weighed against the cost of whatever manual testing remains, which can be considerable. In fact, I have never seen automation reduce the need for manual testing to the point that the manual testers ended up with less work to do.

How these costs work out depends on many factors, including the technology being tested, the testing tools used, the skills of the test developers, and the quality of the test suite.

Writing a single test script needn't take much effort, but building a proper test harness can take weeks or months. The same goes for deciding which tools to buy, choosing which tests to automate, working out how automation fits into the rest of the testing process, and, of course, learning how to use the tools and then actually writing the test programs.

Working out a comprehensive approach to this process (that is, one that produces something useful) typically takes several months of full-time effort, and longer if the automation developer is inexperienced with the problems of test automation or the details of the tools and technology.
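To see why "break even after two or three runs" does not survive contact with these components, consider a toy model of the hours involved. Every number below is invented purely for illustration; substitute your own:

```python
# All figures are invented for illustration (hours of effort).
MANUAL_PER_RUN   = 8.0    # executing the tests by hand, once
DEVELOP          = 400.0  # one-time: harness, tool learning, scripting
OPERATE_PER_RUN  = 0.5    # babysitting an automated run
TRIAGE_PER_RUN   = 2.0    # separating false alarms from real bugs
MAINTAIN_PER_REL = 40.0   # updating tests as the product changes
RUNS_PER_REL     = 10     # suite executions per release

def automation_cost(releases):
    runs = releases * RUNS_PER_REL
    return (DEVELOP
            + MAINTAIN_PER_REL * releases
            + (OPERATE_PER_RUN + TRIAGE_PER_RUN) * runs)

def manual_cost(releases):
    return MANUAL_PER_RUN * releases * RUNS_PER_REL

for releases in (1, 3, 10, 30):
    a, m = automation_cost(releases), manual_cost(releases)
    print(f"{releases:2} releases ({releases * RUNS_PER_REL:3} runs): "
          f"automation {a:6.0f}h vs manual {m:6.0f}h")

# With these numbers the break-even point is around 27 releases
# (roughly 270 runs), nowhere near "two or three runs".
```

The point is not these particular numbers; it is that operation, triage, and maintenance recur on every run and every release, while the vendor's estimate counts only development cost against execution cost.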

What about ongoing maintenance costs? Most cost analyses of automated testing completely ignore the special new tasks that must be done precisely because of the automation, for example:

Test cases must be carefully documented;

Automation itself must be tested;

Every time the suite is executed, someone has to pore over the results to distinguish false alarms from real bugs;

Fundamental changes in the product under test must be reviewed to assess their impact on the test suite, and new test code may have to be written to account for them;

If the product under test is later ported to a new platform, or even a new version of the same platform, the test suite must be ported as well.

These new tasks have a significant impact on the day-to-day life of testers. Most GUI software testing teams I've worked with tried to have every tester do part-time automation, but each of those teams eventually abandoned the idea in favor of a dedicated automation engineer or team.

Writing test code and performing interactive hands-on testing are such different activities that a person assigned both responsibilities will tend to focus on one and neglect the other.

Moreover, since automation development is software development, it requires a certain amount of development talent, and some testers simply can't do it. Either way, companies that are serious about automation usually end up with full-time staff doing it, and that has to be factored into the cost of the overall strategy.

Reckless Assumption 8: Automation Doesn't Hurt Test Projects

That brings us to the thorniest problem of all in the pursuit of an automation strategy: automating what we don't understand is dangerous.

If we don't work out the testing strategy before introducing automation, the result will be a mass of test code that nobody fully understands.

As the original developers of the suite drift away to other tasks and others take over its maintenance, the suite takes on a life of its own within the testing team. Maintainers are afraid to throw away any old tests, even ones that seem pointless, because they might later prove to be important.

So the suite keeps accumulating new tests, becoming an ever more mysterious oracle, like some ancient Himalayan guru or the talking oak tree in a Disney movie. No one knows what the suite actually tests or what it means for a product to "pass the test suite," and the bigger it grows, the less likely anyone is to dig in and find out.

This has happened to me personally (more than once, before I learned my lesson), and I've seen and heard it happen to many other test managers.

Most people don't even realize it's a problem until one day a development manager asks what the test suite covers and what it doesn't, and no one can give an answer.

Or one day, just when it's needed most, the entire test system breaks down, and there is no manual process to fall back on. The irony of the situation is that an honest attempt to test more professionally can end up ensuring that testing is done blindly and ignorantly.

Manual testing strategies can suffer from confusion too, but when tests are created dynamically from a relatively small set of principles or documents, it is much easier to review and adjust the strategy. Yes, manual testing is slower, but it is more flexible, and it can cope with the chaos of incomplete and changing products and specifications.

A Smart Approach to Automation

Despite the concerns this article raises, I do believe in test automation; I'm a test automation consultant, after all.

Just as you can have high-quality software, you can have high-quality test automation. But to create good test automation, we have to be careful: the road is full of pitfalls. Here are some key principles to keep in mind:

Carefully distinguish between the automation and the process it automates. The testing process should be kept in a form that can easily be reviewed and mapped onto the automation, and the suite should be used alongside human testing, not as a replacement for it.

Choose testing tools carefully. Gather the experiences of other testers and organizations, and try evaluation versions of candidate tools before buying.

Think carefully about buying or building a test management harness; a good test management system can do a great deal to make the suite reviewable and maintainable.

Make sure every execution of the test suite produces a status report recording which tests passed, which failed, and the actual bugs found, along with any work done to maintain or enhance the suite. I've found these reports to be indispensable source material for analyzing the cost-benefit of automation. (A minimal sketch of such a report follows this list.)

Make sure the product is mature enough that the cost of maintaining constantly changing tests doesn't outweigh whatever benefits the automation provides.
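Here is the promised sketch of a per-run status report, in Python; every field and name is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SuiteRun:
    """One execution of the suite, with the triage results a useful
    status report needs (all names here are invented for illustration)."""
    passed: list = field(default_factory=list)
    failed: list = field(default_factory=list)             # raw failures, before triage
    confirmed_bugs: list = field(default_factory=list)     # failures traced to real product bugs
    maintenance_notes: list = field(default_factory=list)  # suite work done this run

    def report(self):
        false_alarms = len(self.failed) - len(self.confirmed_bugs)
        return "\n".join([
            f"passed: {len(self.passed)}",
            f"failed: {len(self.failed)} ({false_alarms} false alarms after triage)",
            f"confirmed product bugs: {len(self.confirmed_bugs)}",
            "maintenance and enhancement work:",
            *(f"  - {note}" for note in self.maintenance_notes),
        ])

run = SuiteRun(
    passed=["login", "save", "print_preview"],
    failed=["export_csv", "resize_window"],
    confirmed_bugs=["export_csv"],
    maintenance_notes=["rewrote resize_window locator after toolbar change"],
)
print(run.report())
```

A report like this, kept run after run, is also the raw data for the cost-benefit analysis discussed under Reckless Assumption 7.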

One night a few years ago, a violent storm knocked out the power partway through a run of the test suite our team had created. When we arrived at work the next morning, we found that our suite had automatically restarted itself, reset the network, picked up where it left off, and completed the run.

A lot of work had gone into making the suite that bulletproof, and we were glad it paid off. The thing is, when we later reviewed the test scripts in the suite, we found that of about 450 tests, only about 18 were actually useful.

It's a long story, but the upshot was that we had an extremely reliable test suite that was almost incapable of finding any significant bugs in the software we were testing.

I've told this story to test managers who were unimpressed, because they didn't think such a thing could happen to them. But it can happen to anyone, if the machinery of testing ever distracts you from the craft of testing.

Automation is a great idea, but the secret to making it a good investment is to think about testing first and automation second. If testing is a means to understand software quality, then automation is only a means within a means. You won't learn this from the ads, but it is just one of many strategies that support effective software testing.
