A quick self-check before you leave work: "Am I really capable of automating this?"

What happens when a test fails?

If someone runs a test manually and it fails, they pause and investigate. When an automated test fails, however, the rest of the suite keeps running. You won't see the test report until the suite is complete, and the automation won't take any extra action on a failure to figure out what went wrong. Once all remaining cases have run, the tester may simply click the run button again to see whether the failed cases pass on the second attempt.

So, are automated test retries good or bad? This is actually quite a controversial topic.

What is the retry mechanism?

To avoid any confusion, let’s clarify what we mean by “automated test retries”.

Let's say I have 100 automated test cases. When I run these test cases, the framework executes each test individually and produces a pass or fail result for the test. At the end of the suite, the framework brings all the results together. In the best case, all tests pass: 100/100.

However, suppose one of the tests fails. On a failure, the test framework will catch any exceptions, perform any cleanup routines, log the failure, and safely move on to the next test case. At the end of the suite, the report will show 99/100: ninety-nine tests passed and one failed.

By default, most testing frameworks run each test once. However, some frameworks can automatically rerun failed test cases, and even let testers specify the number of retries. So let's say we configure 2 retries for our 100-test suite. When a test fails, the framework will rerun it up to 2 more times before moving on to the next test case.
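As an illustration, here is a minimal sketch of how such a retry policy might be enabled in a Python suite, assuming pytest with the pytest-rerunfailures plugin (other frameworks such as TestNG or Playwright expose similar settings under different names):

```python
# Minimal sketch: enabling retries with pytest + pytest-rerunfailures
# (install with: pip install pytest-rerunfailures).
import pytest

@pytest.mark.flaky(reruns=2)  # rerun this test up to 2 more times if it fails
def test_login_page_loads():
    ...  # the actual test body would go here

# Alternatively, apply the same policy to the whole suite from the command line:
#   pytest --reruns 2
```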


Retries can be abused

Let's say you are on an automation team that runs 300 automated tests against a web application every night. Web testing is notoriously flaky: roughly a dozen different tests fail each night, and a lot of time goes into debugging them every morning. But when you rerun these failed tests, they almost always pass. So you configure the suite to automatically rerun failures every night.

Is this a good strategy? Hard to say! It actually hides the problem rather than exposing it. Testers should pay attention to the red failures, not just the green passes. With the rerun mechanism, the ten cases that originally turned red now all pass without exception, and nobody spends any effort analyzing why they failed during the night in the first place.

What if those test cases really did turn red under some abnormal condition, and the retry only passed because the condition had returned to normal?

What if that abnormal condition keeps occurring intermittently? In that case, rerunning the test cases hides a potential bug.

In fact, retrying failures is common in manual testing too; it is not unique to automation. Testers like consistent, repeatable failures because they are easy to explain. Once an issue occurs only intermittently, it becomes hard to explain to developers and the rest of the team. Such problems may be caused by environmental factors, or the triggering conditions may be so narrow that they are difficult to reproduce.

Normally, if a problem cannot easily be reproduced, we keep quiet and don't push back on development. The automated rerun mechanism works the same way: when a test case fails, we try to reproduce it, and if the failure doesn't appear the second time, we assume there is no problem here, because the problem "could not be reproduced"!


So, what is the right way to use failed-test reruns?

First, always record the full failure logs and the reason for each failure. Regardless of whether the rerun succeeds, the fact that a test case turned red even once should put testers on alert: there is a real possibility that something is wrong here.
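One way to do this, shown below as a framework-agnostic sketch (the helper name run_with_retries is illustrative, not taken from any particular tool), is to log every failed attempt with its traceback and to mark a test as "flaky" rather than simply "passed" whenever a retry was needed:

```python
# Minimal sketch: retries that never silently discard a failure.
import logging
import traceback

log = logging.getLogger("retry")

def run_with_retries(test_fn, retries=2):
    """Run test_fn, retrying on failure; return ('passed'|'flaky'|'failed', attempts)."""
    failures = []
    for attempt in range(1, retries + 2):  # first run + `retries` reruns
        try:
            test_fn()
        except Exception:
            # Record every failed attempt, even if a later retry passes.
            failures.append(traceback.format_exc())
            log.warning("attempt %d of %s failed:\n%s",
                        attempt, test_fn.__name__, failures[-1])
            continue
        # Passed on this attempt: flag it as flaky if any earlier attempt failed.
        return ("flaky" if failures else "passed"), attempt
    return "failed", retries + 1
```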

Second, during a project iteration we may not have time to chase every intermittent problem. These issues are hard to reproduce, let alone analyze and discuss with developers.

In the end it comes down to priority. Consistent failures are the most important issues to report, and they stay red no matter how many times you rerun them. Once those consistent failures are resolved, turn to the yellow issues (tests that only pass after a rerun, i.e. warnings).
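Continuing the earlier sketch (again purely illustrative), this triage falls out naturally: report the consistently red tests first, then investigate the yellow ones that only passed after a rerun:

```python
# Illustrative triage of suite results, assuming each entry is the
# (status, attempts) pair returned by run_with_retries in the earlier sketch.
def triage(results):
    """results: dict mapping test name -> (status, attempts)."""
    red    = [n for n, (s, _) in results.items() if s == "failed"]  # consistent failures: fix first
    yellow = [n for n, (s, _) in results.items() if s == "flaky"]   # passed only after a rerun: investigate next
    green  = [n for n, (s, _) in results.items() if s == "passed"]
    return red, yellow, green
```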

Either way, the rerun mechanism is a useful tool: it tells us which problems are consistent and which are intermittent. The key is that, as testers, we should stay alert to every red and yellow result rather than looking for ways to turn red into green.

