Focus testing by understanding how users use the product

When we are not sure where to focus our testing, or what kinds of testing to do, it pays to analyze how users actually use the product: knowing this can sharply improve our testing effort. This article walks through examples of leveraging user data to guide the optimization of UI automation testing and compatibility testing.


Example 1: Project A, UI automation testing


In July 2016, Project A's backend API automation could largely cover the needs of daily work, but client-side testing still relied heavily on manual effort, so it was time to start UI automation. UI automation is quite expensive. We chose an in-house test framework and wrote the cases in Python, a language the team already knew. The framework had good support for managing test suites and for executing and maintaining cases, and it could be hooked into the company's integrated release platform at low cost. Over half a year we built up 200+ cases for each of the iOS and Android platforms and ran them in a daily build.

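To make this concrete, here is a minimal sketch of what one of these cases might look like. The in-house framework is not public, so the `Driver` class below is a hypothetical stand-in; only the overall shape of the case (set up, drive the UI, assert, tear down) reflects the real ones.

```python
import unittest

class Driver:
    """Hypothetical stand-in for the in-house UI framework's driver.
    The real driver talks to a device; this stub only records actions
    so the example stays self-contained and runnable."""

    def __init__(self, platform, app):
        self.platform, self.app = platform, app
        self.actions = []

    def launch(self):
        self.actions.append("launch")

    def tap(self, control):
        self.actions.append(("tap", control))

    def type(self, control, text):
        self.actions.append(("type", control, text))

    def wait_for(self, control, timeout=10):
        # The real framework polls the UI tree; the stub always succeeds.
        self.actions.append(("wait_for", control, timeout))
        return True

    def quit(self):
        self.actions.append("quit")

class LoginTest(unittest.TestCase):
    """Shape of one of the 200+ per-platform cases in the daily build."""

    def setUp(self):
        self.driver = Driver(platform="android", app="com.example.app")
        self.driver.launch()

    def test_login_reaches_home_screen(self):
        self.driver.tap("login_button")
        self.driver.type("account_field", "test_account")
        self.driver.type("password_field", "test_password")
        self.driver.tap("submit_button")
        self.assertTrue(self.driver.wait_for("home_screen", timeout=10))

    def tearDown(self):
        self.driver.quit()

if __name__ == "__main__":
    unittest.main()
```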

Entering 2017, we faced a new problem. As the saying goes, taking a city is easy but holding it is hard: how to keep automation going at low cost and high return was the real difficulty. We were well aware that while the one-off cost of writing a new automation script is bounded, the later maintenance cost cannot be ignored, and more UI automation cases are not automatically better. What should be automated? How do we make sure we do not over-automate? We had to answer these questions before moving on, and after some thought we decided to go back to the users for the answer.


As a service company, we deliver products to users in the form of services, so all of our work must be grounded in user value. The company expects every employee to stay sensitive to the market and to user needs, to pay attention to user experience, and to keep reflecting on and summarizing what they see, so that our work truly revolves around the core of "user value" rather than being done behind closed doors in self-admiration. For testers this matters even more: we need to understand how users actually use the product, and build our test strategy around the core usage scenarios.


Whenever a new feature ships, corresponding report logs track how users operate it. We collected the valuable parts and ran the numbers. The data tells us the most common login methods, the most frequently used features, and the most typical usage scenarios. It also confirmed our guess that 80% of users use only the core 20% of features (the 80/20 rule). This lets us focus automation on that 20% of features, and smoke testing, performance testing, and more can benefit from the same analysis.

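A minimal sketch of the counting step, assuming the report logs have been reduced to one feature name per user operation (the event names and counts below are invented for illustration):

```python
from collections import Counter

# Hypothetical stream of feature events extracted from the report logs.
events = ["login_qq", "send_msg", "send_msg", "login_qq", "upload",
          "send_msg", "login_wechat", "send_msg", "login_qq", "send_msg"]

counts = Counter(events)
total = sum(counts.values())

# Walk features from most to least used, accumulating their share of all
# operations, to find the small head set that covers most real usage.
cumulative = 0.0
for feature, n in counts.most_common():
    cumulative += n / total
    print(f"{feature:15s} {n:4d}  cumulative {cumulative:.0%}")
    if cumulative >= 0.8:  # the head features worth automating first
        break
```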

Example 2: Project B, compatibility testing


Project B's products are released in many overseas countries and regions. The environment in which users run the product is shaped by the state of software and hardware in their country, which differs from the domestic environment. Testers work in an office: although they cannot truly reproduce users' environments, they still need to cover as much as possible so that problems do not reach users. A very important test type here is compatibility testing.

For web products, browser compatibility has to be considered. We collected the distribution of browsers our users run and picked the few browsers that together account for 85% of usage for testing. Users on very old browser versions see a reminder that their version is too old, with a recommendation to upgrade for a better experience. A few months later, some of those users had upgraded their browsers: without adding any test workload, the user experience improved and the effective coverage of compatibility testing improved with it.

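A minimal sketch of the selection step (the browsers and shares below are invented for illustration):

```python
# Hypothetical browser distribution gathered from report logs.
browser_share = {
    "Chrome 55": 0.42,
    "UC Browser": 0.18,
    "Safari 10": 0.15,
    "Firefox 50": 0.10,
    "Opera Mini": 0.08,
    "Other": 0.07,
}

# Take browsers from most to least used until 85% of users are covered;
# everything below the cut gets the "please upgrade" reminder instead.
TARGET = 0.85
covered, test_matrix = 0.0, []
for browser, share in sorted(browser_share.items(), key=lambda kv: -kv[1]):
    if covered >= TARGET:
        break
    test_matrix.append(browser)
    covered += share

print(f"Test on {test_matrix}, covering {covered:.0%} of users")
```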

This strategy does not work for mobile compatibility testing, since we cannot ask users to switch phones; instead we have to test on as many of the device models our users own as possible. Here, analyzing user device-model data is very useful. We regularly purchase the most popular models for in-house testing, and for the moderately popular and long-tail models we rely on the company's internal compatibility team plus various cloud-testing and crowd-testing platforms to cover as many models as we can.


For backend upgrades, testers need to verify that the client versions active on the live network remain compatible with the new backend. Analyzing the distribution of client versions, we found that one version had only a few dozen users, less than 1% of the total, while the other two versions together accounted for over 99%. So we adjusted the test strategy: for the version with under 1% of users we simply check that the basic functions work, which takes no more than 10 minutes, while the other two versions are tested in full according to the original plan. Since all three versions used to get the full treatment, this saved about a third of the original test time.

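A minimal sketch of that decision rule, with hypothetical version labels and shares:

```python
# Hypothetical distribution of client versions on the live network.
version_share = {"v5.2": 0.58, "v5.1": 0.414, "v4.9": 0.006}

CUTOFF = 0.01  # versions below 1% of users only get a basic check

for version, share in version_share.items():
    if share < CUTOFF:
        plan = "basic availability check (about 10 minutes)"
    else:
        plan = "full regression per the original test plan"
    print(f"{version}: {share:.1%} of users -> {plan}")
```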

We often need to optimize the product based on user feedback, for example reports that a certain feature is slow, which testers do then reproduce in the office. The traditional scheme is to compare performance data from the versions before and after the optimization to verify that expectations were really met. A better way is to measure user data on the live network. We know that the smaller the sample, the lower the accuracy: the data a tester gathers in the office cannot represent the whole user base, and from a technical standpoint it is very easy to challenge. Live user data at scale is different, because it genuinely reflects users' experience. For example, if 60% of user measurements fell in the "excellent" performance range before the optimization and 80% fall there afterwards, then the optimization clearly worked, improving the product experience for 20% of users.

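A minimal sketch of the before/after comparison, assuming response-time samples extracted from the report logs and an assumed 200 ms cutoff for the "excellent" range (all numbers invented for illustration):

```python
# Hypothetical live-network response times in milliseconds; real data
# would come from report logs with far more samples.
before = [120, 450, 90, 300, 700, 150, 80, 520, 110, 610]
after = [100, 200, 85, 140, 300, 120, 75, 420, 95, 180]

EXCELLENT_MS = 200  # assumed cutoff for the "excellent" range

def excellent_ratio(samples, threshold=EXCELLENT_MS):
    """Fraction of measurements falling in the excellent range."""
    return sum(s <= threshold for s in samples) / len(samples)

b, a = excellent_ratio(before), excellent_ratio(after)
print(f"excellent before: {b:.0%}, after: {a:.0%}, improvement: {a - b:.0%}")
```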

User data can help technical staff direct their work rationally, and can also help verify the results of that work. I look forward to more sharing along these lines.
