What is the idea of automated testing?

Introduction to automated testing

Automated testing refers to turning human-driven testing behavior into machine execution. In practice, it means using testing tools or frameworks to write automated test cases that simulate the manual testing process. For example, during project iteration, continuous regression testing is a boring, repetitive task, and testers cannot grow under such daily repetition. Introducing automated testing at this point frees testers from repetitive, tedious manual work, improves testing efficiency, and shortens regression testing time. Generally speaking, automated testing is used in conjunction with a continuous integration system such as Jenkins.

However, in automation practice there is often a large gap between the ideal and reality. The main disadvantages of automated testing are the following:

  1. Compared with manual testing, automated testing places relatively high demands on testers;
  2. Test cases must be updated as versions iterate, which carries a certain maintenance cost;
  3. Automated testing should not be expected to find many new bugs; it finds far fewer defects than manual testing;
  4. The value of automated testing usually lies in long-term regression testing, and its short-term benefit may not be obvious.


Problems to be solved with the help of automation

  1. When testing time is tight, manual testing may not cover everything and can easily miss borderline cases;
  2. When modules are strongly coupled, testing only from the page makes it hard to uncover deeper problems;
  3. Regression testing consumes a great deal of manpower and working hours;
  4. Some testing tasks cannot be accomplished by manual testing at all;
  5. Writing test cases deepens the understanding of the business and data, which helps uncover hidden problems in the next iteration.

Prerequisites for introducing automated testing

Long project cycle and infrequently changing requirements.
The stability of test cases determines the maintenance cost of automated testing. If requirements change too frequently, testers must update the test cases and related scripts to match, and script maintenance is itself a code development process that requires modification and debugging. Automation has failed if it costs at least as much as the testing effort it saves.

Some modules in a project are relatively stable, while the requirements of other modules change frequently. We can then automate the testing of the relatively stable modules, while the frequently changing ones still require manual testing.

Automated test scripts can be reused
If a near-perfect set of automated test scripts has been developed at great effort, but the scripts are reused so rarely that their cost exceeds the economic value they create, the automation is meaningless.

Testing tasks are difficult to implement manually
Examples include stress testing and tests involving big data or large volumes of repetitive data, which must be supported by automated tools.

Ability to do automated testing

  • Possess coding ability
    At minimum, be familiar with the language of the automation tool/framework; solid coding skills and clear code logic are preferable, since without them neither the logic, business coverage, and robustness of the use cases nor their efficiency can be guaranteed;
  • Be familiar with the system under test
    Familiarity with the system under test is the minimum requirement for any tester;
  • Master an automated testing framework/tool
    Based on the language you already know, learn an automated testing framework such as Selenium, Appium, Robot Framework, NUnit, or TestNG;
  • Keep learning, be good at learning, and understand not only what works but why
    "If you fall behind, you will be beaten."

At what stage are automation use cases usually completed?

Automated use cases generally lag behind the manual testing of new functionality; they can be added after the manual use cases have been executed or the features have gone live.
Automation is not meant to chase new requirements; it measures the impact of what changes on what stays the same. Never do automation merely for automation's sake.

Layered automated testing

Before understanding layered automation, let's take a look at the classic test pyramid.

  • UI layer: UI automation testing. Its value is the lowest. Although it is closest to the user's real scenarios and problems are easy to spot there, its implementation cost is the highest, it depends heavily on the external environment, and that dependency easily hurts the scripts' pass rate. A moderate amount of UI automation is necessary, but there is no need to invest too much at this layer;

  • Service layer: interface (API) automation testing. Its value sits in the middle, and covering most of the major interfaces is appropriate. This layer requires testers to understand the system's structure and the calls between systems, as well as the logical relationships between interfaces; otherwise the interface tests will easily miss abnormal scenarios;

  • Unit layer: unit testing. The most valuable tests, but they demand the most of testers and are generally written by developers; otherwise pair programming is the only option.

Generally speaking, manual testing is the most basic and can approach 100% coverage, while automated testing is more like a "bulletproof vest" that protects the vital parts of the body. Some believe that raising the automation rate saves manpower, but this view is one-sided: a higher automation rate means more manpower invested in maintenance, because system requirements keep changing and every change forces the automated test cases to be updated and adjusted.

Therefore, what counts as good automated testing should be judged against the test pyramid above. At the UI layer, covering a small number of essential main flows is enough; do not let the "bulletproof vest" of automation balloon into a bloated "spacesuit" at this layer. At the Service layer, interface automation can reasonably cover a large part of the flows. At the Unit layer, 100% unit test coverage is ideal, and even when requirements change, existing unit tests are rarely affected. Generally speaking, unit testing can find 80% of defects.
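As a minimal illustration of the Unit layer, a unit test for a small pure function might look like the following pytest-style sketch (the `apply_discount` function and its rules are invented for the example):

```python
def apply_discount(price: float, rate: float) -> float:
    """Hypothetical business function: apply a discount rate to a price."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

def test_normal_discount():
    assert apply_discount(100.0, 0.2) == 80.0

def test_zero_discount():
    assert apply_discount(50.0, 0.0) == 50.0

def test_invalid_rate_rejected():
    # Abnormal input is cheapest to cover at the unit layer.
    try:
        apply_discount(100.0, 1.5)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Tests like these run in milliseconds and rarely break when requirements change elsewhere, which is why the pyramid puts the bulk of the coverage here.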

Principles for Designing Automation Use Cases

Basic principles

  • The scope of automated test cases must be the relatively core business flows: core test points that cover the main functions and the modules with a high repetition rate;
  • It is very important that test case results be stable: when both the test script and the code under test are unchanged, the results should not change;
  • Unless necessary, a use case should avoid persistent operations, so that the environment always stays clean;
  • Once Written, Run Anytime as Desired;
  • Not all manual test cases can be automated. Automated testing cannot replace manual testing; combining the two effectively is the key to ensuring project quality;
  • In regression testing, case selection is generally positive-path first, with negative cases as a supplement.
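For instance, the "keep the environment clean" principle can be enforced with explicit cleanup so that a case leaves no trace even when it fails (a pytest-style sketch; the file names are invented):

```python
import tempfile
import shutil
from pathlib import Path

def test_report_is_written():
    # Give the case its own isolated working directory.
    workdir = Path(tempfile.mkdtemp())
    try:
        report = workdir / "report.txt"
        report.write_text("result: pass")          # the operation under test
        assert report.read_text() == "result: pass"
    finally:
        shutil.rmtree(workdir)                     # clean up even on failure
```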

Use Case Design Principles

Keep Case Independence

Generally speaking, a Test Suite contains a group of similar or related Test Cases, and each Test Case should test only one scenario. Depending on complexity, that scenario can take different forms: it may be a unit test or an end-to-end (E2E) test. There are also special styles of writing, such as workflow-based and data-driven tests.

What should we pay attention to regarding Case independence?

First, the Cases in a Test Suite should not affect each other during execution: when we run a single Case at random, or run the Cases out of order, the results should still be accurate. Independence also matters at the Suite level and the Directory level. When the system is complex, hundreds or even thousands of Cases may run together. Robot Framework itself does not guarantee the order in which Cases execute, so to some extent Cases at the same level run in a random order. A typical symptom is a case that fails, sometimes sporadically, only when all Cases run together on the server; this is often caused by traces left behind by other Cases, and finding the root cause is usually time-consuming.
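A common way to achieve this independence is to build a fresh fixture inside every case instead of sharing mutable state across cases, so the cases pass in any order (a pytest-style sketch; `ShoppingCart` is invented for illustration):

```python
class ShoppingCart:
    """Invented stand-in for the system under test."""
    def __init__(self):
        self.items = []

    def add(self, name):
        self.items.append(name)

def make_cart():
    # Every case gets its own cart, so no case can leave traces for another.
    return ShoppingCart()

def test_cart_starts_empty():
    # Passes whether it runs before or after test_add_single_item.
    assert make_cart().items == []

def test_add_single_item():
    cart = make_cart()
    cart.add("book")
    assert cart.items == ["book"]
```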

Keep Case Portability

Case portability mainly involves three things: the Case's dependence on the execution environment, its dependence on external devices, and its dependence on the test object.

Case's dependence on the execution environment
Minimize dependence on the execution environment. For example, after you write and debug use cases with the RF framework on your local PC and push them to Git, your lead may pull them and run them locally, and they may then be deployed to the continuous integration server. So when writing use cases, avoid libraries or shell commands that behave differently across platforms.
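For example, rather than shelling out to a platform-specific command such as `rm -rf`, a helper keyword can rely on Python's standard library, which behaves the same on Windows, Linux, and macOS (a sketch; the directory layout is invented):

```python
import shutil
import tempfile
from pathlib import Path

def clean_directory(path: Path) -> None:
    # shutil/pathlib work identically on all platforms, unlike
    # `rm -rf` (POSIX only) or `del /s` (Windows only).
    if path.exists():
        shutil.rmtree(path)

# usage: create and then remove a scratch directory
workdir = Path(tempfile.mkdtemp()) / "case_output"
workdir.mkdir()
(workdir / "log.txt").write_text("temp data")
clean_directory(workdir)
```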

As another example, if you modify the source code of a test library for business reasons, the cases will certainly fail to run for everyone else in the group as well as on the CI server. How can this be handled? There are two solutions:

  1. Package the modified library as a test library and upload it to Git or PyPI, so that others can install and update it via pip;
  2. Use robotremoteserver to expose the library as a shared library on a remote host.

Case's dependence on external devices
Sometimes business testing requires external devices to assist. These devices may be upgraded or replaced over time, so when writing use cases we need to consider how a single set of Cases can stay compatible with them. For example, the operations on the external device can be extracted from the test cases and encapsulated in a test library or keyword.
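That encapsulation can be sketched as a thin keyword layer: use cases call a stable API, while device-specific details live in one replaceable driver class (all names below are invented for illustration):

```python
class PowerMeter:
    """Invented driver for one model of external device."""
    def read_watts(self) -> float:
        return 42.0

class DeviceKeywords:
    """Keyword layer: use cases depend on this, never on a device model."""
    def __init__(self, device):
        # When the hardware is upgraded or replaced, only the driver
        # passed in here changes; the use cases stay untouched.
        self.device = device

    def power_is_below(self, limit: float) -> bool:
        return self.device.read_watts() < limit

# A use case only talks to the keyword layer:
kw = DeviceKeywords(PowerMeter())
```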

Case's dependence on the test object
If the test object is a software platform, the platform usually has to be adapted to a variety of devices whose hardware configurations differ: CPU performance and count, memory, and components may all vary. Dependence on the test object means considering not only whether the cases can execute on different devices, but also test coverage: as device components multiply, your use cases may fail to cover them or to capture a particular performance bottleneck, and the reliability of the test results drops sharply.

Improve Case Execution Efficiency

The execution times of different cases vary enormously, from a few seconds to several days; a simple functional case measured in seconds cannot be compared with a stability case measured in days. But when we look at a particular case or group of cases, execution efficiency matters. Both agile processes and continuous integration emphasize fast feedback: developers quickly get test results after submitting code, and testers achieve broader test coverage in the shortest time, which improves the team's efficiency and strengthens its confidence.

Taking RF as an example, the following practices improve the execution efficiency of use cases:

1. If execution preconditions are checked and the check fails, exit the case as early as possible;
2. Extract data preparation and environment cleanup into keywords and place them at a higher level; some combination may be needed during extraction, but repeated create-and-delete operations must be avoided;
3. Use as few sleeps as possible in use cases; prefer "Wait Until ..." keywords instead;
4. Run use cases concurrently to improve efficiency.
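Point 3 can be illustrated in plain Python: instead of a fixed sleep, poll the condition and return as soon as it holds, mirroring what RF's "Wait Until ..." keywords do (a minimal sketch; the readiness flag is invented):

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True      # return immediately once ready: no wasted waiting
        time.sleep(interval)
    return False

# usage: wait on an invented readiness flag instead of sleeping a fixed 5 s
state = {"ready": True}
assert wait_until(lambda: state["ready"], timeout=1.0)
```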

Automated use case writing specification

Naming conventions

Keyword naming

The first word starts with a lowercase letter and each subsequent word starts with an uppercase letter (lowerCamelCase), e.g. getProjectId, connectDB.

Constant naming

Constant names should be written in capital letters and state the constant's full meaning; if a name consists of several words, separate them with underscores, e.g. MAX_CHAR_LENGTH.

Parameter naming

Parameters follow the same convention as methods; make parameter names as clear as possible while keeping each to a single word, e.g. ${account}, ${investorName}.

Use Tags

RF provides a way to manage use cases by setting tags in the Settings section. Tags are applied widely and flexibly: they can be used for use-case filtering, version management, statistics, and so on.

How can tags be used more conveniently?

  • Under each folder, tag the use cases with the folder name, so that the folder's cases can be run independently by tag and the test report is easier to read;
  • Tag important use cases so that the key cases can be run on their own;
  • If you don't want to execute certain use cases, tag them and configure them to be skipped.
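On the command line these tags become run-time filters; Robot Framework's `--include` and `--exclude` options select cases by tag (the tag names `smoke` and `wip` below are examples):

```shell
# Run only the cases tagged as key cases
robot --include smoke tests/

# Run everything except the cases tagged as not-to-execute
robot --exclude wip tests/
```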

Let the case be documented

When considering Coding Style, we can set some fixed rules; as long as everyone follows them, the style converges after a few rounds of practice. Making a Case read like a document, however, demands more conscious effort.

Agile development has become more and more mature in China, and agile testing has developed along with it. Agile thinking puts people at the core and, throughout the development process, writes only the necessary documents, or as few as possible; this is what distinguishes it from the traditional waterfall model.

To avoid misunderstanding, it is worth inserting a few characteristics of agile testing here:

  • Agile testing should be part of agile development;
  • Agile testing has the distinct characteristics of agile development, such as test-driven development (TDD) and acceptance-test-driven development (ATDD). In other words, unit testing is the foundation of agile testing: without sufficient unit tests, a team can neither cope with the rapid iteration of future requirements nor achieve fast, stable continuous delivery;
  • Excellent agile testing is based on automated testing;
  • Agile testing is everywhere and everywhere.

Requirements and designs are updated constantly, but documents often are not updated in time. In that situation, how can testers quickly grasp the requirements and current state of a given function or product?

"Tests as Documentation."

Clear and understandable use case names

In real projects we may create a directory to hold test cases with similar test points. Each Case corresponds to one test point, and the use case name should summarize the core content of that test point, so that when browsing a set of use cases we get a general picture of what is tested from the names alone, and it is also easy to locate the Case for a particular test point.


Origin blog.csdn.net/dad22211/article/details/131883329