What We Talk About When We Talk About Automated Testing

Allow me to borrow the title of Raymond Carver's famous 1981 book "What We Talk About When We Talk About Love" for the fourth article in this agile testing column, and forgive me for airing some private musings of my own under such a great title.

  Agile has no fixed formula to follow. Because of differences in culture, products, and users, organizations naturally differ greatly in their specific agile practices, and that is the norm in agile adoption. Recently, however, I have been in contact with many agile organizations, and the test managers and test engineers I meet in China often disagree about what "good testing" means in agile; when it comes to automated testing in particular, the views diverge even more.

  I have in fact discussed automated testing in the context of agile testing before, and offered some strategies and methods that I consider appropriate, so why devote another article to it? There is certainly more than one way to do automated testing, and automated testing techniques and methods are as numerous as fish in a river; but when we talk about automated testing, what is it that we actually want to convey?

  Cockburn notes in "Agile Software Development" that "the success of communication depends on the sender and receiver having a common experience that can be referred to", so when we talk about automated testing we need a common vocabulary. First, what is automated testing? For some people, the first things that come to mind are tools such as QTP and WebDriver and the functional or UI test scripts built on them, but these are only a small part of automated testing. In my view, any practice that "uses the capabilities of the machine" to automate part or all of testing deserves to be called "automated testing".

  Whether it is a unit test written for a class or method in the code, or a diff-based check built for the product that still requires some manual participation, each is a concrete form of automated testing (a minimal unit-test sketch appears after the list of goals below). For testing, automation is not a goal in itself but only a means of reaching the testing goal. So what do we hope to gain from it? Automated test coverage? Coverage measures the proportion of testing that is automated, but it is still not the goal of automation in a project. The goal of doing automated testing in a project can only be "to improve the quality of testing or of the product through automation." Therefore, when we talk about automated testing, only the following can properly be called goals:

  1. With the same test coverage, reduce the human effort invested;
  2. With the same human effort, increase test coverage;
  3. Move test execution upstream so that developers find problems in the product earlier, reducing the cost of fixing them.
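  Before turning to how these goals can be reached, here is a minimal sketch of the smallest-scale form of automated testing mentioned above: a unit test written for a single function, runnable with pytest. The `discount_price` function and its pricing rule are invented purely for illustration.

```python
# Minimal pytest-style unit test: the smallest unit of "automated testing".
# discount_price() and its pricing rule are hypothetical examples.

def discount_price(price: float, is_member: bool) -> float:
    """Return the price after applying a 10% member discount."""
    return round(price * 0.9, 2) if is_member else price


def test_member_gets_ten_percent_off():
    assert discount_price(100.0, is_member=True) == 90.0


def test_non_member_pays_full_price():
    assert discount_price(100.0, is_member=False) == 100.0
```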

  With the goals established, what remains is how to achieve them. "Reducing the human effort invested" is the easiest goal to grasp intuitively, so many teams simply "replace manual testing with automated testing": test cases that used to be executed by hand are turned into UI automation cases, and the automation tool runs the scripts instead. It cannot be said that this approach has no effect, but automated tests are far less effective than manual tests at identifying defects (from the perspective of UI testing, a defect that falls outside what the script "expects" is hard for an automated test to find, while manual testing finds it easily), and because the UI itself changes frequently, pure UI automation rarely manages to prove its return on investment when it tries to reach the same coverage as manual testing.
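  To make the "replace manual testing with UI automation" approach concrete, here is a minimal Selenium WebDriver sketch of the kind of script such teams produce; the URL, element IDs, and expected text are assumptions for illustration, not taken from any real application.

```python
# Minimal UI automation sketch with Selenium WebDriver.
# The URL, element locators, and expected text are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")            # hypothetical login page
    driver.find_element(By.ID, "username").send_keys("demo_user")
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "submit").click()
    # Only the "expected" outcome is checked; anything outside this
    # assertion (layout glitches, garbled text, ...) goes unnoticed.
    assert "Welcome" in driver.page_source
finally:
    driver.quit()
```

  Note how the script only checks the outcome it was told to expect, which is exactly why defects outside that expectation slip past it while a human tester would spot them.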

  "Increased test coverage" is another area where automated testing can yield benefits. Examples that come to mind are performance testing and fuzz testing. Rely on the ability of the machine to simulate the concurrency of a large number of users, and rely on the ability of the machine to generate a large amount of random data based on rules and send it to the server to detect possible security problems of the server. In these areas, automated testing can bring obvious benefits .

  However, when I say "increase test coverage", I am not talking only about performance testing or fuzz testing, which verify a particular quality attribute of the application. When it comes to "coverage", the other scenario I have in mind is "covering more of the application's interfaces and services through automated tests", that is, using automation to reach the parts that UI and functional tests cannot reach, or can reach only with difficulty. This type of automated testing is comparable to integration testing, focusing on verifying the interfaces between subsystems or modules.
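  A minimal interface-level check of the kind described here might look like the sketch below; the base URL, path, and response fields are hypothetical, and the point is only that the assertions target the service contract rather than anything visible in the UI.

```python
# Minimal interface/service-level test sketch with requests + pytest.
# The base URL, path, and response fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"   # hypothetical backend service


def test_get_user_profile_contract():
    resp = requests.get(f"{BASE_URL}/users/42", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    # Verify the interface contract, not the rendered page.
    assert {"id", "name", "email"} <= body.keys()
    assert body["id"] == 42
```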

  For this kind of automated testing, however, the focus is not only on building verification scripts for interfaces and services, but also on promoting more rational interface and service design, so that building as many complete automated interface/service tests as possible becomes feasible. Suppose we want to do interface/service-level automated testing on the frontend of a web application. Clearly there are many interfaces and services in this frontend that can be decoupled. If the frontend uses a development approach along the lines of Clearsilver + Java, there is in theory a natural seam here: Clearsilver, as a "template system", mainly provides the ability to render pages; the Java code mainly implements the business logic; and the Java code produces the page the user sees by filling the dynamic data items into the template.

  In this mode, if we can push the application toward a well-layered design, so that the Clearsilver side contains no logic at all and all the dynamic data produced by the logic part (the Java code) can be accessed directly in some way, then the automated tests can verify the template part and the Java part in completely different ways: render the data-filled template in a browser and check it with image-comparison-based diffs, and verify the correctness of the Java logic with data verification or diffs. In this way further layers of verification and testing can be built for the application. This kind of automated test design and implementation usually goes hand in hand with testability design: find the possible test points in the application, improve its testability, and then build more automated tests of different kinds for it. This is the general way to "increase test coverage".
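  The "data verification or diff" half of that layered approach can be sketched as a golden-file comparison: serialize the dynamic data the logic layer would hand to the template and diff it against a stored baseline. `build_page_data` and the baseline path are hypothetical stand-ins for the real logic layer and its approved output.

```python
# Golden-file diff sketch for the logic layer: serialize the dynamic data
# that would be handed to the template and compare it with a stored baseline.
# build_page_data() and the baseline path are hypothetical.
import json
from pathlib import Path


def build_page_data(user_id: int) -> dict:
    """Stand-in for the real logic layer that fills the template."""
    return {"user_id": user_id, "title": "Order history", "orders": []}


def test_page_data_matches_golden():
    actual = json.dumps(build_page_data(42), indent=2, sort_keys=True)
    golden = Path("golden/page_data_42.json").read_text()
    # Any difference is surfaced as a test failure for review.
    assert actual == golden
```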

  "Pushing tests upstream" refers to pushing the execution of tests into the development and design phases. Continuous integration is a method of "executing testing upstream". This method verifies the submitted code on a regular basis (in days or hours) or triggered by code submission, and can provide development engineers with Code quality feedback. This method can ensure that defects are found as early as possible, and since the test executor and the corrector are the same person, the overhead caused by communication can be greatly reduced.

  In my opinion, practices like continuous integration are where automated testing brings the greatest benefit. But building a continuous integration framework is not enough to push testing upstream: if development engineers have to spend a lot of effort creating, executing, and maintaining the tests, they obviously will not embrace this ideal of "executing tests upstream". Both Android and iOS provide automated testing frameworks (Instrumentation on Android and UI Automation on iOS), so in theory development engineers can create, execute, and maintain automated tests for their own applications.

  But if you look at what it costs a development engineer to run these tests, you will find it very hard for them to accept maintaining automated tests for Android and iPhone applications themselves. Take automated testing on iOS as an example: to run a test on a real device, the engineer must first find an iPhone and connect it to his or her machine, compile the application under test, sync it to the device with iTunes, then open the UI Automation IDE environment, load the test script, execute the test, and manually check whether the results are correct. I would bet that, unless they are an independent developer, no development engineer could keep executing automated tests this way for more than a week; they would toss the work to the test team like a hot potato.

  So how do you push tests upstream in this situation? The main bottleneck is that test execution costs too much. If you can build a mobile device lab and unify test execution for iPhone and Android behind a single command line, where you only need to specify the binary of the application under test, the device to run on, the test scripts to run, and where to store the results, and the command line reports the execution results straight back to the development engineer, then compared with the cost of creating and maintaining the automated tests, the benefits of automation (immediate feedback and fast, repeatable execution) far outweigh the investment.
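  The "one command line" idea described above might take roughly the following shape; the argument names and the `run_on_device` helper are invented for illustration, with the real device-lab integration left as a stub.

```python
#!/usr/bin/env python3
# Hypothetical command-line wrapper that unifies iPhone/Android test runs:
# specify the app binary, the target device, the test scripts, and where
# to store results, and get the outcome reported back directly.
import argparse


def run_on_device(binary: str, device: str, tests: str, results_dir: str) -> int:
    """Placeholder for the device-lab integration: install the binary on the
    device, run the tests, and write logs/screenshots into results_dir."""
    print(f"[stub] would run {tests} from {binary} on {device}, saving to {results_dir}")
    return 0


def main() -> int:
    parser = argparse.ArgumentParser(description="Run mobile tests on a lab device")
    parser.add_argument("--binary", required=True, help="application binary to test")
    parser.add_argument("--device", required=True, help="device id in the lab")
    parser.add_argument("--tests", required=True, help="test scripts to run")
    parser.add_argument("--results", required=True, help="directory for results")
    args = parser.parse_args()
    return run_on_device(args.binary, args.device, args.tests, args.results)


if __name__ == "__main__":
    raise SystemExit(main())
```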

  "Push testing upstream" is not a specific automation testing technology, but it is a very worthwhile direction of automation testing. In the above example of iPhone and Android testing, "creating an automated test execution environment that can make test execution cost-effective" is the best choice in this case to "push tests upstream", which may need to be resolved in other cases But there are other problems. In any case, adhering to the viewpoint of "pushing testing upstream" and creating the possibility of pushing upstream through automated testing is, in my opinion, a way for automated testing to continue to bring benefits.

  Automated testing is not a panacea, but doing without it is out of the question. Like money, it is indispensable and the more the better, yet you cannot expect it to solve every problem. Which problems are not suited to automation? Automated testing relies on the machine's ability to execute and verify, so anything that is not suited to machine execution and verification is not suited to automation: for example, operations that depend on human experience, or judging the fluency of audio and video. In addition, from a return-on-investment point of view, features whose requirements and UI are not yet stable are poor candidates for automation; nor, in an agile development environment, is the exploration and learning that testers must do by actually using each iteration's new features.

  In these cases we usually rely on manual testing, using techniques such as exploratory testing to help us understand the application, find defects that lie outside the scripts, and fill the gaps that automated testing cannot reach.

  Where should automated testing be applied? A simple way to decide whether to introduce automation somewhere is to ask: 1. Is automation technically feasible here? 2. If automated testing is introduced, will it bring a net benefit? If both answers are yes, start your automation without hesitation.
