Let big-data statistical points teach you to write test cases


As products keep improving, every team's requirements for testing keep rising: iterate quickly while ensuring that version quality does not decline. This is, frankly, a pain point of testing.

As our work on precise test coverage shows, we have long been racking our brains over this problem, wondering whether we could find breakthrough points more accurately from the use cases themselves. After continuous practice, I found that the cornerstone of precise testing is streamlined use cases.

From the incremental phase through the FT integration and mainline integration phases to the pre-launch phase, the number of use cases determines testing efficiency. Use cases need to be streamlined and precise; only then can we buy more time for the project team at the highest efficiency.

So how can use cases be kept streamlined and non-redundant while still covering the user's main paths? (Applause here, please!) Don't walk past this one; read on.

A summary up front:

If I told you the user's real paths, would you still doubt whether you can cover the user's main paths? The most direct and blunt expression of user paths is statistical points. If you can use big-data statistics to map user operations onto use cases, wouldn't a whole set of precise use cases be laid out right in front of you?


1. Introduction to statistical points

So what does "statistical point" mean? In short, a statistical point counts an event instrumented ("buried") in users' behaviors and operations. I have to admit that, to learn the various metrics built on statistical points, I studied quite a few good product articles on "Everyone Is a Product Manager", a well-known product-manager community.

Here is a summary of why statistical points are suitable for mapping to use cases:

1.1 More hits on a point means more user usage, reflecting users' real operations

Statistical points count pre-instrumented user behaviors. The more a user clicks somewhere, in other words the more the user operates there, the more you can roughly determine the general content of some use cases from the ranking of the statistical points. In the past, the selection phase relied on our own perception of users; now it is backed by millions of data points, which is far more reliable.

1.2 Per-version statistical point data for each module tends to be stable

Basically, except for a version's new features, whose statistical data is hard to evaluate, the statistical point data of old features has always tended to be stable. The instrumented points in the product are also laid out according to a funnel model, filtered layer by layer; the usage counts are robust and traceable, so the test paths are very clear. In addition, it is easy for product managers to pull the numbers: the threshold is low, the data comes quickly, and it can be reused.

1.3 Penetration rate = number of users who clicked the feature / total number of users

Statistical points come with many metric columns; which one should be used for ranking? I chose the daily-active total-user penetration rate: the number of daily active users who clicked the feature divided by the total number of daily active users. Other metrics include the daily-active upgraded-user penetration rate, the daily-active new-user penetration rate, the daily-active old-version-user penetration rate, and so on. I compared the rankings these penetration rates produce; the metrics do not differ greatly, but comparing base sizes, total daily actives is relatively reliable because its base is large.
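The penetration-rate formula above can be sketched in a few lines of Python; the function name and the example numbers are illustrative, not from any real product export:

```python
def penetration_rate(feature_clickers: int, total_dau: int) -> float:
    """Penetration rate = users who clicked the feature / total daily active users."""
    if total_dau <= 0:
        raise ValueError("total daily active users must be positive")
    return feature_clickers / total_dau

# Hypothetical example: 2 clickers out of 10,000 daily active users.
rate = penetration_rate(2, 10_000)
print(f"{rate:.4%}")  # prints "0.0200%" -- exactly the 0.02% cutoff used later
```

The same function works for any of the penetration-rate variants mentioned above; only the choice of denominator (total DAU, upgraded users, new users, ...) changes.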

1.4 Words from the heart: a deeper product closed loop, making full use of user data

The closed loop is a top priority on the Internet, and making full use of hard-won operating data is a deep subject. At present, the main product workflow runs: product -> development -> testing -> operations -> product, as shown in Figure 1.1. If operating data can be used to simulate users' real operations during testing, that achieves a deeper closed loop for the product.

[Figure 1.1: the product -> development -> testing -> operations -> product closed loop]

 

2. Practice

I took the virus scan-and-removal module as the concrete practice module and selected the pre-launch use cases as the example. The virus-scanning flow is processed step by step, its statistical points are particularly clear, and it is filtered layer by layer, making it the most suitable module for practice. After extracting a dump of operation-type statistical point data from the product team, you can get to work as soon as you have the data. Only three steps are required:

Step 1: Sort the statistical data and keep points with a penetration rate greater than 0.02%

Sort the operational data by the daily-active total-user penetration rate and keep only points with a penetration rate greater than 0.02%. Why 0.02%? First, below that the penetration rate is already very low: out of 10,000 users, only two clicked the feature point. Second, for pre-launch testing the granularity cannot be too large; above 0.02% about 40 statistical points remain. Last, and most important: the combined penetration rate of the dropped points does not exceed 10%, which means the remaining points actually cover 90% of user operations.
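Step 1 is a simple sort-and-filter over the exported records. A minimal sketch in plain Python, assuming the export is a list of (event_id, penetration_rate) pairs; all event names and rates below are made up for illustration:

```python
# 0.02% expressed as a fraction; the cutoff discussed above.
THRESHOLD = 0.0002

# Hypothetical export: (statistical point ID, daily-active total-user penetration rate).
events = [
    ("virus_scan_start", 0.0310),
    ("virus_scan_finish", 0.0280),
    ("whitelist_add", 0.00015),   # below the 0.02% cutoff, will be dropped
    ("deep_scan_entry", 0.0009),
]

# Keep only points above the threshold, sorted by penetration rate, highest first.
selected = sorted(
    (e for e in events if e[1] > THRESHOLD),
    key=lambda e: e[1],
    reverse=True,
)

for event_id, rate in selected:
    print(f"{event_id}: {rate:.4%}")
```

In practice the same one-pass filter applies whether the export has 4 rows or 4,000; only the threshold and the column you sort on matter.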

Step 2: Work out from the statistical points which third-level modules they roughly compose

Sorting out the statistical points is particularly important, because one use case can cover several statistical points. Moreover, this sorting does not need to be repeated every time; doing it once pays off for good, because the penetration-rate data of the statistical points changes little from version to version.
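The grouping in Step 2 can be kept as a small, reusable mapping from statistical point to third-level module. A sketch, assuming the hypothetical point IDs from Step 1 and invented module names:

```python
from collections import defaultdict

# One-off mapping maintained by hand: statistical point -> third-level module.
# Both the point IDs and module paths here are illustrative assumptions.
point_to_module = {
    "virus_scan_start": "scan/full",
    "virus_scan_finish": "scan/full",
    "deep_scan_entry": "scan/deep",
}

# Invert the mapping: module -> list of statistical points it owns.
modules = defaultdict(list)
for point, module in point_to_module.items():
    modules[module].append(point)

for module, points in sorted(modules.items()):
    print(module, points)
```

Because the penetration-rate ranking is stable across versions, this mapping only needs occasional additions for new features, matching the "sort once, benefit for good" point above.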

Step 3: Write and merge use cases; don't stop at merely hitting the points, restore real scenarios

Write use cases according to the sorted-out virus-scan statistical points, covering as many points as possible (if you have forgotten what operation a statistical point tracks, ask the developers; with long-lived code you will inevitably run into this). The use-case writing step is particularly important. Do we stop as soon as a statistical point is covered? No: combine the actual scenarios and restore the user's operations as faithfully as possible. For example, when a user operation reports a statistical point that involves filling in information, put yourself in the user's shoes and fill it in as they would.
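A quick way to check the "cover as many points as possible" requirement in Step 3 is a set-difference between the selected points and the points each use case exercises. A sketch with invented case names and the hypothetical point IDs used earlier:

```python
# Hypothetical: which statistical points each written use case exercises.
use_cases = {
    "full scan happy path": {"virus_scan_start", "virus_scan_finish"},
    "deep scan entry": {"deep_scan_entry"},
}

# The points that survived the 0.02% filter in Step 1.
selected_points = {"virus_scan_start", "virus_scan_finish", "deep_scan_entry"}

# Union of everything the use cases touch, then subtract from the target set.
covered = set().union(*use_cases.values())
uncovered = selected_points - covered

print("uncovered points:", sorted(uncovered))  # prints "uncovered points: []"
```

Any point left in `uncovered` either needs a new use case or should be merged into an existing scenario, which is exactly the merge decision Step 3 describes.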

Because the failure-path statistical points were already simulated during the FT incremental and FT integration phases, there is no need to pay much attention to them before launch.

3. Results

Although the use cases for each stage of the virus-scan module had already been sorted out during use-case streamlining, there were still 51 pre-launch use cases, taking at least 2 hours per person, and coverage quality could not be fully controlled. Now, 10 pre-launch use cases have been mapped out from the statistical points; the pre-launch test takes about 20 minutes per person and covers 90% of user operations.


Origin blog.csdn.net/m0_52668874/article/details/115248111