The positioning of automated testing, and some thoughts

When people think of automation, they usually think of Web UI automation first. This is why, whenever I bring up automation at my company, many people object: the cost is too high. In fact, automation divides into three levels (UI-layer automation, interface automation, and unit testing), and not every level is out of reach. Ranked by difficulty, the three levels form what is commonly called the test automation pyramid.

It is fairly safe to say that unit testing is the cheapest, the easiest to promote, and the most effective. Yet many companies will not invest in it. The reason is that most companies are short of talent, especially experienced R&D staff: if there are barely enough people to develop the project itself, where will the manpower for unit testing come from? Judging from the composition of test teams at most companies today, there is basically no one who can write unit tests (and those who can are often pulled into development). This is why everyone agrees that unit testing belongs to development: apart from a few exceptions, testers themselves lack the ability to do it.

If unit testing cannot be done, can interface (API or service) testing be automated? Yes, as long as there is some technical foundation: at least some testers can do interface testing (put another way, performance testing is itself a kind of interface automation). If you can develop in-house, or directly adopt, a decent interface automation tool or framework (there are plenty on the market), this work can proceed. I believe this layer is where most companies can do automated testing well, so we recommend that, conditions permitting, automation start at this level first, assuming of course there are enough interfaces or services worth testing. Depending on complexity and individual needs, an interface automation framework should provide the following capabilities:
1. Verification
This one is easy to understand: if the interface is simply executed with no check on the result, no testing has happened. Supporting verification of return values is therefore a necessary feature.
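As a minimal sketch of what return-value verification might look like (the function name and the response shape are illustrative, not from any particular framework):

```python
def verify_response(actual: dict, expected: dict) -> list:
    """Compare expected key/value pairs against an actual response body.
    Returns a list of mismatch descriptions; an empty list means the check passed."""
    failures = []
    for key, want in expected.items():
        got = actual.get(key)
        if got != want:
            failures.append(f"{key}: expected {want!r}, got {got!r}")
    return failures

# Simulated interface return value (no real HTTP call is made here).
response = {"code": 0, "message": "ok", "data": {"orderId": "A1001"}}
assert verify_response(response, {"code": 0, "message": "ok"}) == []
assert verify_response(response, {"code": 1}) == ["code: expected 1, got 0"]
```

Real frameworks usually go further (nested-field checks, schema validation), but the principle is the same: execution without verification is not testing.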
2. Data isolation
Data isolation means that the request interfaces, parameters, expected values, and other data are kept separate from the code, which makes maintenance easier: when interface cases need to be adjusted or new ones added, the right place is quick to find. The other benefit of isolation is reusability: the framework can be promoted to other teams, and users can run the same code, only filling in their own cases as required.
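A sketch of the idea, with an inline JSON string standing in for the external case file (the case schema here is an assumption for illustration):

```python
import json

# In practice these cases would live in an external JSON/YAML/Excel file;
# the string below stands in for that file so the sketch is self-contained.
CASE_FILE = """
[
  {"name": "create order", "method": "POST", "path": "/orders",
   "params": {"sku": "X-1", "qty": 2}, "expect": {"code": 0}},
  {"name": "query order", "method": "GET", "path": "/orders/{orderId}",
   "params": {}, "expect": {"code": 0}}
]
"""

cases = json.loads(CASE_FILE)

# The runner only knows the case schema, never the concrete data, so another
# team can reuse the same code with its own case file.
for case in cases:
    print(f"{case['method']} {case['path']} <- {case['params']}")
```

Because the code never hard-codes a URL, parameter, or expected value, adjusting a case means editing data, not code.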
3. Data transfer
Once data isolation and maintainability are in place, data transfer is the next important requirement. Data transfer means that parameters can be passed between interface cases. For example, we create an order through the create-order interface, which returns an order number; next we call the query-order interface and verify its response against the data in the create-order case. At this point, the request data of the second interface must be extracted from the response of the first. Such examples abound, so supporting data transfer is another essential feature.
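The create-order/query-order example above can be sketched with a dotted-path extractor and a `${name}` placeholder convention (both the path syntax and the placeholder syntax are illustrative assumptions):

```python
def extract(data, path):
    """Pull a value out of a response using a dotted path, e.g. 'data.orderId'."""
    for key in path.split("."):
        data = data[key]
    return data

def fill(params, context):
    """Replace ${name} placeholders in request params with extracted values."""
    return {k: context.get(v[2:-1], v)
            if isinstance(v, str) and v.startswith("${") else v
            for k, v in params.items()}

# Step 1: the create-order interface returns an order number (simulated here).
create_resp = {"code": 0, "data": {"orderId": "A1001"}}
context = {"orderId": extract(create_resp, "data.orderId")}

# Step 2: the query-order case references the extracted value via ${orderId}.
query_params = fill({"id": "${orderId}"}, context)
assert query_params == {"id": "A1001"}
```

Real frameworks often use JSONPath or similar for extraction; the mechanism (extract, store in a context, substitute into the next request) is the same.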
4. Dynamic functions
Real test scenarios often need things like a randomly generated phone number or an encrypted string. With the data isolated from the code, the code must recognize keywords in the data and execute the corresponding function to fill in values. For example, when you write nowTime() in the data, it is replaced with the current time at execution; when you write random(5), it is replaced with a five-digit random number, and so on.
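A sketch of such keyword substitution, using the nowTime() and random(n) names from the text (everything else here is an illustrative assumption):

```python
import random
import re
import time

def render(value: str) -> str:
    """Replace dynamic-function keywords in case data with computed values."""
    # nowTime() -> current timestamp at execution time
    value = value.replace("nowTime()", time.strftime("%Y-%m-%d %H:%M:%S"))
    # random(n) -> an n-digit random number
    value = re.sub(r"random\((\d+)\)",
                   lambda m: "".join(random.choices("0123456789",
                                                    k=int(m.group(1)))),
                   value)
    return value

print(render("created at nowTime()"))
code = render("random(5)")
assert len(code) == 5 and code.isdigit()
```

A fuller framework would keep a registry of such functions so new keywords can be added without touching the renderer.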
5. Configurability
Sometimes the requirement is that a case can run not just in one environment but in several, such as QA, pre-release, and production, using the same interface cases. The framework therefore needs to be configurable and easy to switch: different configuration files are loaded for execution in different environments.
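One simple way to sketch this is one config section per environment (the section names and URLs below are made up; in practice each would be a separate file):

```python
import configparser

# One section per environment; an inline string keeps the sketch
# self-contained, but these would normally be separate .ini files.
CONFIG = """
[qa]
base_url = http://qa.example.com

[pre]
base_url = http://pre.example.com

[online]
base_url = http://www.example.com
"""

def load_env(env: str) -> dict:
    parser = configparser.ConfigParser()
    parser.read_string(CONFIG)
    return dict(parser[env])

# Switching environments changes only the configuration, never the cases.
assert load_env("qa")["base_url"] == "http://qa.example.com"
assert load_env("online")["base_url"] == "http://www.example.com"
```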
6. Logging
The log should record key information such as the interface executed, the request method, request parameters, return value, verification result, request time, and elapsed time. Good logs make it quick to locate where a newly written case went wrong, and when a bug is found they make it easy to hand data to developers, who can pinpoint the problem from the trigger time, parameters, and other details.
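A minimal sketch of logging the fields the text lists, built on the standard `logging` module (the interface call itself is simulated):

```python
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("api-test")

def timed_call(name, method, params, func):
    """Run one interface call and log interface, method, params,
    return value, and elapsed time."""
    start = time.perf_counter()
    result = func(params)
    elapsed_ms = (time.perf_counter() - start) * 1000
    log.info("interface=%s method=%s params=%s result=%s elapsed=%.1fms",
             name, method, params, result, elapsed_ms)
    return result

# Simulated interface call (a real framework would issue an HTTP request).
resp = timed_call("createOrder", "POST", {"sku": "X-1"},
                  lambda p: {"code": 0, "orderId": "A1001"})
assert resp["code"] == 0
```

With this in place, every failed case leaves behind exactly the reproduction data a developer needs.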
7. Visual reports
After the cases run, the results must be shown to the team. A visual report makes it easy for team members to see the number of passes, the number of failures, and other data for each execution of the automated interface cases.
8. Use-case driving
(1) The driving mode of the use cases determines how test data is stored, how cases are described, and how they are reused;
(2) for efficiency, concurrent execution must also be supported;
(3) and of course the test report should record not just passes and failures but statistics such as case execution time, interface call time, and scenario pass rate.
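Points (2) and (3), concurrent execution plus per-case statistics, can be sketched with the standard library; the case runner below is trivially simulated:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_case(case):
    """Execute one case and record pass/fail plus its duration."""
    start = time.perf_counter()
    passed = case["run"]()            # placeholder for request + verification
    return {"name": case["name"], "passed": passed,
            "seconds": time.perf_counter() - start}

cases = [
    {"name": "create order", "run": lambda: True},
    {"name": "query order",  "run": lambda: True},
    {"name": "cancel order", "run": lambda: False},  # simulated failure
]

# Concurrency: run cases in parallel rather than one after another.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(run_case, cases))

passed = sum(r["passed"] for r in results)
print(f"pass rate: {passed}/{len(results)}")
assert passed == 2 and len(results) == 3
```

The per-case `seconds` field is the raw material for the execution-time and pass-rate statistics a report would aggregate.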

Having covered unit testing and interface automation, let us turn to UI-layer automation, which has always been popular and is most people's first idea of automation. After all, there are many mature tools on the market, such as QTP and Selenium. This kind of automation necessarily involves the test team: even if the framework is built by developers, the actual testing work is done with full tester participation. UI-layer automation is genuinely hard to implement well. No matter how polished the framework, maintenance costs remain very high. In particular, the people who know development often do not know testing, and the people who know testing often do not know development, and the friction this contradiction creates wastes a great deal of effort. On top of that, project requirements and the UI layer change frequently, and Web UI technology grows ever more complex and diverse (UI automation depends on object-recognition technology), so many companies are unwilling to invest here. Even so, as motivated testers we should keep thinking about this area; after all, it is one of the pieces of "technological fertile ground" closest to our testing work.

Now let us focus on Web UI automated testing (most current systems are delivered through a Web UI). The more mature tool stacks generally look like this:
1. Development language: Python or Java;
2. Open-source test framework: Selenium WebDriver;
3. Web element location: XPath and CSS selectors, via the findElement or findElements methods.

In terms of implementation details, the focus follows from the characteristics of Web UI automation: package each layer separately, divide and conquer, keep the layers independent of one another, and define responsibilities clearly. Briefly:

1. Business-flow test cases implemented separately from test-data management;
2. Page element location separated from page element operations;
3. A visual log-query system;
4. Cross-browser support, e.g. IE, Firefox, Chrome;
5. Visual test reports, with drill-down into logs, screenshots, etc.;
6. Data-driven management through Excel;
7. Email sending management, with configurable send time, recipients, etc.
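Point 2 above (separating element location from element operations) is the core idea. The sketch below uses a stub driver so it runs without a browser; with Selenium, the stub would be a real WebDriver and the locator tuples would use By constants. All class and locator names here are illustrative:

```python
class StubDriver:
    """Stands in for a Selenium WebDriver so the sketch runs without a browser."""
    def __init__(self):
        self.actions = []
    def find_element(self, by, value):
        self.actions.append((by, value))
        return self

class LoginPage:
    # Element location, isolated as data: a changed XPath touches only this
    # table, never the test flow.
    LOCATORS = {
        "username": ("xpath", "//input[@id='user']"),
        "password": ("xpath", "//input[@id='pwd']"),
        "submit":   ("css selector", "button[type='submit']"),
    }

    def __init__(self, driver):
        self.driver = driver

    def element(self, name):
        # Element operation, isolated as code.
        return self.driver.find_element(*self.LOCATORS[name])

driver = StubDriver()
page = LoginPage(driver)
page.element("username")
page.element("submit")
assert driver.actions == [("xpath", "//input[@id='user']"),
                          ("css selector", "button[type='submit']")]
```

This is the page-object pattern in miniature: when the UI changes (as the text notes it constantly does), only the locator table needs maintenance.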

The above are practical requirements for typical Web UI automation, and they are the relatively simple part. The harder part is platform management: every day, test engineers need only select a specific project and a test case set, execute it, and get a test report, with emails sent automatically to the relevant developers and testers; development and maintenance of the framework itself can also be continuously integrated.

Having discussed all three levels of automated testing, let us analyze which level should be prioritized and which has the highest input-output ratio. The excerpt below, shared from an online article, has already given the answer:

As we all know, the marginal cost of software testing rises as the defect-detection rate rises; this is the economic expression of one of the basic axioms of testing, the impossibility of exhaustive testing. The same rule applies to automated testing: as automation coverage increases, the cost of automation also grows exponentially. Following this idea, we can analyze the automation cost curves of unit testing, integration testing, and UI testing, as shown in Figure 2. Consistent with common understanding, to reach the same automation rate (x0), UI costs the most, API comes next, and Unit costs the least.
Economics has another famous principle: diminishing marginal returns. As the amount invested grows, the return brought by each additional unit of investment shrinks, and past a certain critical point the return may even turn negative. That zero point is where total return is maximized: every investment before it amplifies total return, while continuing to invest after it is less wise.

Figure 2 Automation cost/benefit curve

Following this idea, Figure 2 yields a zero boundary point for each of the three types of automated testing. Total return is highest for interface testing, followed by unit testing, with UI testing lowest.
From the perspective of testing effect, interface testing has many advantages over both UI and unit testing. Unit tests usually test the code itself, whereas interface tests exercise a live, deployed system. A single interface test can also cover more code than a single unit test case. More importantly, interface testing can be business-oriented: business-level testing performed through the interface.
Compared with UI automation cases, interface tests are simpler and more direct, and they execute more efficiently. Except in some enterprise applications where much of the business may live in the front end, most business operations completed through the UI can in many cases also be completed through the API, and in some cases the test-condition coverage of the API (interface) can even exceed that of the UI.
Based on the above analysis, the author believes that in the initial stage of automated testing, the model suited to a reasonably resourced test team has the interface layer largest in the middle, with moderate UI and unit testing at the two ends. Drawn as a shape, it is an olive (the interface layer in the middle has the highest testing efficiency ratio); add some manual testing and it becomes a tumbler. (Aside: interface testing can be done by the development team, or it can be carried out by the test team.)
Under this model, most of the automation investment goes into interface testing, which yields the highest return on investment. Combined with best practices such as continuous testing and continuous integration, with test cases, test frameworks, and platforms shared among teams, interface testing, as a connecting test type, can gradually break down the wall of the cupcake model from the bottom up. (Note: the cupcake model is an anti-pyramid automation mode in which development and testing proceed independently and testing runs linearly, with no parallel collaborative testing. It amounts to a departmental wall: development's self-testing and the test team's testing have no correlation or resource sharing, producing duplicated testing, inconsistent measurement goals, and excessive automation.)




Origin blog.csdn.net/YLF123456789000/article/details/133171770