Some insights about automated testing

In the past year, my main focus has been on automated testing. The first half of the year went into automating the mobile client, mainly the smoke tests: about 200 test cases per platform, with an average of 3 people per platform, and it took half a year to get the tests running as part of the daily routine. The half year after that was maintenance. The bigger problems were that support from the test platform was not timely enough, and the wear on real devices was considerable. Moreover, one shortcoming of client-side testing is that its coverage is very small: a smoke test is just the most basic functional check, generally covering only normal cases and rarely abnormal ones. For an app that is still adding new features, most of the work is still done manually by testers. For a very stable app, like QQ, a large suite of client-side tests can still bring fairly considerable benefits.


In the second half of the year, I worked on backend automated testing. A few years ago, I had also done CGI interface testing for a web project. It was done with JMeter over HTTP: the only barrier to entry was learning JMeter itself, which is not complicated, and it worked well in practice. It could be used not only for daily testing but also for online monitoring. The current app is a live-streaming application. Its backend is divided into an access layer and a logic layer. The access layer uses TCP and is responsible for authentication and forwarding; the logic layer uses a custom UDP protocol, handles most of the business logic, and carries binary payloads. So JMeter is not a good fit here. Instead, I chose to write my own Python-based test scripts. Python has a wealth of third-party libraries that cover sending and receiving packets as well as packing and unpacking them, and you can freely customize the scripts to your business. The disadvantage is that there is a certain programming barrier, and a self-developed framework is never as simple and easy to use as a mature one.
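As a rough illustration of what such a Python script looks like, here is a minimal sketch of packing and unpacking a binary message with the standard `struct` module. The field layout (magic, command, sequence number, body length) is entirely hypothetical; a real product's wire format would differ.

```python
import struct

# Hypothetical binary message layout for a custom protocol
# (assumption -- the real field layout is product-specific):
#   magic (2B) | cmd (2B) | seq (4B) | body length (4B) | body
HEADER_FMT = ">HHII"          # big-endian: uint16, uint16, uint32, uint32
HEADER_SIZE = struct.calcsize(HEADER_FMT)
MAGIC = 0xABCD

def pack_message(cmd: int, seq: int, body: bytes) -> bytes:
    """Serialize a request into the binary wire format."""
    return struct.pack(HEADER_FMT, MAGIC, cmd, seq, len(body)) + body

def unpack_message(data: bytes):
    """Parse a binary response back into (cmd, seq, body)."""
    magic, cmd, seq, body_len = struct.unpack(HEADER_FMT, data[:HEADER_SIZE])
    assert magic == MAGIC, "bad magic -- not our protocol"
    body = data[HEADER_SIZE:HEADER_SIZE + body_len]
    return cmd, seq, body

# Round-trip check: what we pack should unpack to the same fields.
raw = pack_message(cmd=0x0101, seq=1, body=b"hello")
assert unpack_message(raw) == (0x0101, 1, b"hello")
```

In a real test script, `pack_message` would feed a UDP socket's `sendto` and `unpack_message` would parse what `recvfrom` returns; the packing logic stays testable on its own either way.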


Having said all that, this article is not meant to share automated-testing experience. Rather, it is a warning to testers who think that implementing automated testing will solve their problems:


1. Automated testing is labor-intensive and costly (in my experience, it usually takes more than half a year to see results). Whether to do it depends on many factors: the product's development stage, team capabilities, ROI, and so on. The most important thing is to solve a real pain point. Automated testing is essentially tool-assisted testing that partially or completely replaces manual testing; its purpose is to free testers to spend more time on problems that tools cannot solve.

Suggestion: Clarify the goal of automated testing, and advance automated testing incrementally.


2. Backend testing generally divides into access-layer testing and logic-layer testing. Whether to automate at the access layer, the logic layer, or both is something testers need to consider. For products whose backend architecture is relatively simple and rarely depends on third-party services, automating the access layer is more common; for products with a more complex backend architecture and more third-party dependencies, it is easier to automate the logic layer. The reasons are similar to why unit testing brings us more benefits:

a. Faster: a test that runs only a small amount of code is much faster than one that calls multiple components or even the entire application.

b. Better scalability: logic-layer tests are relatively independent and can scale linearly with the size of the application. Access-layer tests involve dependencies between multiple components, so they run into scaling problems, and the maintenance cost of the test cases grows non-linearly.

c. Logic-layer test cases map to specific blocks of code, which makes it easy to test only the code that changed and gives developers fast feedback.

(Reference for the benefits of unit testing: http://www.infoq.com/cn/articles/design-for-testability)
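The points above can be seen in a minimal sketch of a logic-layer test. The function under test is hypothetical (names invented for illustration): because it is pure logic, the test needs no network, no login state, and no other components, so it runs in microseconds and only breaks when this code changes.

```python
def compute_gift_score(gift_price: int, combo_count: int) -> int:
    """Hypothetical logic-layer function from a live-streaming backend:
    score credited to a streamer for a combo of gifts."""
    if gift_price < 0 or combo_count < 1:
        raise ValueError("invalid gift parameters")
    return gift_price * combo_count

def test_compute_gift_score():
    # Normal case: score scales with the combo.
    assert compute_gift_score(10, 3) == 30
    # Abnormal case: invalid input is rejected, not silently scored.
    try:
        compute_gift_score(-1, 1)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for negative price")

test_compute_gift_score()
```

An access-layer test of the same behavior would have to authenticate over TCP, route through the forwarding layer, and depend on every component in between; the logic-layer version avoids all of that.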


3. Don’t automate for the sake of automation. Automated testing is sometimes not the optimal solution.

Take an example from the last few days. There is an online product whose merchants push updates from time to time (on average less than once a month). One update did not follow the specifications, and the page broke. In response, the PM proposed adding monitoring here. After discussion with development and operations, the conclusion was that only a UI-based automated-testing solution would work (because a login state is required). The product's testers then started by asking whether they had the technical ability to implement that solution, and whether there was time to do it. At this point, is there a problem? Yes! Consider the UNPHAT principle (explained in another blog of mine: http://sharley.iteye.com/admin/blogs/2382789): don't rush to find a solution until you fully understand your problem. I reminded the tester to dig further into the root cause, which shifts the thinking from monitoring results to monitoring changes. That yields a candidate solution: monitor configuration changes, plus manual checking of the page. Given the extremely low update frequency, this counts as a low-cost solution; going one step further, intelligent automatic detection could be added, which again depends on ROI. Although the former requires some manual intervention, the labor cost is basically negligible, and it is best to let non-testers do the verification (it is just checking whether the page displays normally, with no logic involved, so non-testers can do it well).
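The "monitor the change, not the result" idea can be sketched in a few lines: fingerprint the merchant configuration and alert a human only when it actually changes. Everything here is illustrative; `fetch` of the real configuration and the notification channel are hypothetical hooks a real product would supply.

```python
import hashlib

def fingerprint(config_text: str) -> str:
    """Stable fingerprint of the merchant configuration."""
    return hashlib.sha256(config_text.encode("utf-8")).hexdigest()

def check_for_change(current_config: str, last_fingerprint: str):
    """Return (changed, new_fingerprint). When changed is True, a
    human (tester or not) manually eyeballs the page -- cheap when
    updates arrive less than once a month."""
    fp = fingerprint(current_config)
    return fp != last_fingerprint, fp

# First update: the configuration changed, so someone checks the page.
changed, fp = check_for_change("banner=v2", fingerprint("banner=v1"))
assert changed is True
# No update since: nobody is bothered.
changed, _ = check_for_change("banner=v2", fp)
assert changed is False
```

Compared with a UI-based automated test, this needs no login state and no browser driver, and its running cost is essentially zero between updates.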

