A brief overview of common concerns in software testing work

Contents

First, the test case writing process

1. Equivalence class partitioning

2. Boundary value analysis

3. Cause-and-effect diagrams

4. Orthogonal array method

5. Scenario design method

6. Error guessing

Second, requirements Q&A and review stage

Third, developing the test plan

Fourth, writing and reviewing test cases

Fifth, smoke testing

Sixth, executing test cases

Seventh, test report and summary

Eighth, non-functional testing:

1. Load pressure

2. Load stress test

3. Performance test

4. Load test

5. Stress test

6. Concurrency performance test

7. Large data volume test

8. Independent data volume test

Ninth, defect management


A test case is one of the most important documents in the testing process. It is the core of the testing work, a set of standards for inputs and expected outputs during testing, and a concrete embodiment of the software requirements.
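As a rough illustration of what such a document typically records (a sketch only; the field names below follow common practice rather than any particular tool's template):

```python
from dataclasses import dataclass

# Illustrative structure for a single test case record; the fields mirror the
# items usually captured in a case document (ID, preconditions, steps, inputs,
# expected result). Names are hypothetical, not a fixed standard.
@dataclass
class TestCase:
    case_id: str
    title: str
    preconditions: str
    steps: list[str]
    input_data: str
    expected_result: str
    priority: int = 3  # e.g. 1 (highest) to 5 (lowest)

login_case = TestCase(
    case_id="TC-LOGIN-001",
    title="Login succeeds with a valid account",
    preconditions="The account demo_user is registered and activated",
    steps=["Open the login page", "Enter the account and password", "Click Login"],
    input_data="demo_user / correct password",
    expected_result="The user lands on the home page and sees their nickname",
)
```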

First, the test case writing process

Requirements analysis -> extract test points -> write test cases -> review test cases

Common test case design methods (in practice most cases are designed with equivalence class partitioning, supplemented by boundary value analysis, error guessing, and scenario-based optimisation; projects that are not large or complex usually do not bother with cause-and-effect diagrams or decision tables):

1. Equivalence class partitioning

An analogy: teaching students according to their aptitude.

In principle, a teacher should draw up a study plan suited to each individual student, but in practice there are far too many students for that, so they can only be divided into a few categories: top students focus on broadening their knowledge and improving overall ability; average students focus on consolidating the basics and filling in gaps; weaker students focus on mastering the key points first and skipping the hard parts for now.

The idea behind equivalence class partitioning is the same: the set of possible inputs is infinite and cannot be covered completely (outputs are only considered in special cases). The input domain is divided, according to the requirements, into several equivalence classes, and one test case is selected from each class. If that case passes, the equivalence class it represents is considered to pass. In this way, fewer test cases achieve as much functional coverage as possible and solve the problem that testing cannot be exhaustive.

Valid equivalence class: the set of input data that is reasonable and meaningful according to the program specification. It is used to verify that the program implements the functions and performance stated in the specification.

Invalid equivalence class: the set of input data that is unreasonable or meaningless according to the requirements specification.

Example: buying fruit at a supermarket.

Valid equivalence classes: apples, peaches, pears

Invalid equivalence classes: vegetables, rice, beverages, …

Equivalence class partitioning only considers how the input domain is divided into classes; it does not consider combinations of inputs.
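A minimal sketch of this idea in code, assuming a pytest setup; the `buy_fruit` function and the chosen representatives are hypothetical:

```python
import pytest

# Hypothetical function under test: accepts fruit names, rejects everything else.
def buy_fruit(item: str) -> bool:
    return item in {"apple", "peach", "pear"}

# Representatives of the valid equivalence class (fruit).
@pytest.mark.parametrize("item", ["apple", "peach", "pear"])
def test_valid_equivalence_class(item):
    assert buy_fruit(item) is True

# Representatives of invalid equivalence classes (non-fruit).
@pytest.mark.parametrize("item", ["vegetable", "rice", "beverage"])
def test_invalid_equivalence_classes(item):
    assert buy_fruit(item) is False
```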

2. Boundary value analysis

Boundary value analysis is a black-box testing method that tests values on the boundaries of the input or output domain. It is usually used as a supplement to equivalence class partitioning; in that case the test cases are derived from the boundaries of the equivalence classes.
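A minimal sketch, assuming a hypothetical field that accepts 6 to 18 characters; the cases sit just below, on, and just above each boundary of the valid range:

```python
import pytest

# Hypothetical validator: the field accepts 6 to 18 characters inclusive.
def is_valid_length(value: str) -> bool:
    return 6 <= len(value) <= 18

# Boundary values: just below, on, and just above each boundary.
@pytest.mark.parametrize("length, expected", [
    (5, False), (6, True), (7, True),      # lower boundary
    (17, True), (18, True), (19, False),   # upper boundary
])
def test_length_boundaries(length, expected):
    assert is_valid_length("a" * length) is expected
```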

3. Cause-and-effect diagrams

A cause-and-effect diagram is a simplified logic diagram that visually shows the relationships between a program's input conditions (causes) and output actions (effects). The cause-and-effect diagram method is a systematic, graph-based way of designing test cases, and is especially suitable when the program under test has multiple input conditions and its output depends on combinations of those conditions.

(1) Identity: if the cause is true, the effect must be true. For example, if the zoo brings in giant pandas, the zoo has giant pandas.

(2) AND: the effect is true only when all of the causes are true.

(3) OR: the effect is true when at least one of the causes is true.

(4) NOT: the effect is true only when the cause is false.
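A minimal sketch of turning such cause-effect relations into a decision table; the rule itself (a discount that requires two causes combined with AND) is hypothetical:

```python
from itertools import product

# Hypothetical rule: the discount (effect) applies only when the order total
# exceeds 100 AND the buyer is a member (two causes combined with AND).
def discount_applies(total_over_100: bool, is_member: bool) -> bool:
    return total_over_100 and is_member

def test_discount_decision_table():
    # Enumerate every combination of causes and assert the expected effect.
    expected = {
        (True, True): True,
        (True, False): False,
        (False, True): False,
        (False, False): False,
    }
    for causes in product([True, False], repeat=2):
        assert discount_applies(*causes) is expected[causes]
```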

4. Orthogonal array method

What if the cause-and-effect method produces too many test cases?

The purpose of the orthogonal method is to reduce the number of test cases: cover the combinations of inputs (in particular, every pair of values) with as few cases as possible.

Orthogonal experimental design is a method for studying multiple factors at multiple levels: by analysing the results of a representative subset of trials, one can understand the behaviour of the full set of combinations and find the best combination of levels. It is an efficient, fast, and economical experimental design method based on orthogonal tables.

Factor: any variable under investigation in an experiment is called a factor (a variable).

Level: within the scope of the experiment, each value a factor can take is called a level (a value of the variable).

The composition of the orthogonal table:

Number of rows (Runs): The number of rows in the orthogonal table, that is, the number of trials, represented by N.

Factors: The number of columns in the orthogonal table, represented by C.

Levels: the maximum number of values any single factor can take. The values in the orthogonal table run from 0 to (number of levels - 1), or from 1 to the number of levels; represented by T.

Orthogonal table notation: L_N(T^C), i.e. L = number of rows (number of levels ^ number of factors). For example, L4(2^3) is a table with 4 rows for 3 factors at 2 levels each.

Two properties of orthogonal tables:

Each number appears the same number of times in each column.

In any two columns, every ordered pair of values appears the same number of times.

The steps for designing test cases with the orthogonal method:

1. Determine the factors (variables).

2. Determine the levels of each factor (the values each variable can take).

3. Choose a suitable orthogonal table.

4. Map the variable values onto the table.

5. Treat the combination of factor levels in each row as one test case.

6. Add any combinations that you consider suspicious but that do not appear in the table (a mapping sketch follows this list).

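A minimal sketch of steps 3-5, mapping three two-level factors onto the standard L4(2^3) orthogonal table; the factors and their values are hypothetical. Four rows cover every pair of values between any two factors, compared with eight rows for the full combination:

```python
from itertools import product

# Hypothetical factors, each with two levels.
factors = {
    "browser": ["Chrome", "Firefox"],
    "os":      ["Windows", "Linux"],
    "network": ["WiFi", "4G"],
}

# The standard L4(2^3) orthogonal table: 4 runs, 3 factors, 2 levels.
# In any two columns, every ordered pair (0,0), (0,1), (1,0), (1,1) appears once.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

names = list(factors)
# Map level indices onto concrete values: 4 test cases instead of 2**3 = 8.
cases = [{name: factors[name][level] for name, level in zip(names, row)} for row in L4]

for case in cases:
    print(case)

# For comparison, the full combination would need all 8 rows.
assert len(cases) == 4 and len(list(product(*factors.values()))) == 8
```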

5. Scenario design method

Almost all software today is driven by events that control its flow. The situation when an event is triggered forms a scenario, and different triggering orders and processing results of the same events form different event flows. This method vividly describes what happens when events are triggered, which helps the test designer design cases, and the resulting test cases are easier to understand and execute.

A typical application is to use the business flow to string otherwise isolated function points together, giving testers an overall sense of the business and avoiding the wrong tendency to get lost in functional details while ignoring the main points of the business process.

Designing cases by imagining a registration scenario is similar to designing them from the business flows in the requirements: the main work is to imagine the various business flows and design cases around them. For example, we can imagine scenarios such as the following (the first one is sketched as a case after this list):

1. After the user has activated the account, what happens if the email activation link is clicked again?

2. What happens if an already registered user registers again?
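A minimal sketch of the first scenario as an end-to-end case; the `UserService` class and its behaviour for a repeated activation click are assumptions made only for illustration:

```python
# Hypothetical in-memory user service, used only to illustrate the flow.
class UserService:
    def __init__(self):
        self.users = {}  # email -> activated flag

    def register(self, email: str) -> str:
        self.users[email] = False
        return f"activate:{email}"  # stand-in for the emailed activation link

    def activate(self, link: str) -> str:
        email = link.split(":", 1)[1]
        if self.users.get(email):
            return "already_activated"  # assumed behaviour for a repeat click
        self.users[email] = True
        return "activated"

def test_repeat_activation_scenario():
    svc = UserService()
    link = svc.register("user@example.com")    # step 1: register
    assert svc.activate(link) == "activated"   # step 2: first click activates
    # step 3: clicking the same activation link again must not re-activate
    assert svc.activate(link) == "already_activated"
```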

6. Error guessing

Error guessing is a testing method that experienced testers like to use.

Based on experience and intuition, the tester identifies the errors that are likely to occur in the program and designs test cases that target them. That experience may come from having tested the same business area many times, from feedback from after-sales users, or from combing through the fault management library to see where the product has been prone to problems in the past: the more problems an area has had, the more potential bugs it is likely to contain. Typical sources of experience:

1. The tester has tested the project for a long time and understands the complexity of its functions and modules, as well as the coding ability of the developers.

2. User feedback (online and offline).

3. The defect library / fault library.

(Defect: found before launch; fault: found after launch, in the production environment.)
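A minimal sketch of recording such guesses as targeted cases, assuming pytest; the `create_username` validator is hypothetical, and the inputs are the kind of "suspicious" values experience tends to suggest:

```python
import pytest

# Hypothetical validator, used only to show error-guessing cases.
def create_username(name: str) -> bool:
    name = name.strip()
    return 0 < len(name) <= 20

# Inputs chosen from experience: values that have historically caused trouble.
@pytest.mark.parametrize("name, expected", [
    ("", False),            # empty input
    ("   ", False),         # whitespace only
    ("a" * 1000, False),    # absurdly long input
    ("normal_user", True),  # sanity check for a normal value
])
def test_error_guessing_inputs(name, expected):
    assert create_username(name) is expected
```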

Second, requirements Q&A and review stage

Participants:

Product, development, testing, the requester, and other relevant personnel.

Main work content:

Review the product requirements document, and discuss and resolve any doubts or errors to ensure that the requirements are understood accurately and consistently. This meeting also establishes which developer is responsible for each module, which helps determine the software testing schedule.

Third, developing the test plan

Main work content:

Develop the test plan according to the development plan. The test plan includes the test objectives, the pass criteria, the test staffing, the test schedule, and so on.

Fourth, writing and reviewing test cases

Main work content: designing and producing test cases is the most important part of the testing work.

(Refer to the test case writing process above.)

Fifth, smoke testing

After development delivers a build for testing, and before formal testing begins, it is necessary to verify whether the main process or the main functions have problems.

If there are no problems, system testing proceeds; this avoids the situation where all the test preparation has been done but the core business flow cannot even be executed.

If, for example, registration and login or placing an order and paying already fail, none of the subsequent functions can be tested; checking these core paths first is what is called a smoke test.

The main point is to ensure that the main process runs without problems so that testing can proceed. If the smoke test cannot be passed, the developers are asked to deliver a new build and the smoke test is run again.
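One common way to organise this, sketched here under the assumption that pytest is used: tag the core-path cases with a team-defined marker and run only those first (the marker name and the helper functions are illustrative).

```python
# test_core_flows.py -- tag the core-path cases and run them first with
# `pytest -m smoke` (register the "smoke" marker in pytest.ini to avoid warnings).
import pytest

# Hypothetical stand-ins for the system under test.
def login(user, password):
    return "ok"

def place_order(user, item):
    return "paid"

@pytest.mark.smoke
def test_login_main_flow():
    # Core path: the build is unusable if login fails.
    assert login("demo_user", "demo_pass") == "ok"

@pytest.mark.smoke
def test_place_order_and_pay():
    # Core path: ordering and paying must work before anything else is tested.
    assert place_order("demo_user", item="sku-1") == "paid"

def test_order_history_filtering():
    # Not part of the smoke run; executed later during the full test pass.
    ...
```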

Sixth, executing test cases

After the smoke test passes, carry out full testing according to the test plan. Submit the bugs that are found for developers to fix, and test them again once they are fixed (regression verification).

Seventh, test report and summary

A summary written after the entire requirement or version has been tested. It mainly reflects the problems found during testing and the quality of the corresponding version: whether it meets the release criteria, what problems remain, whether those remaining problems affect normal use, and anything that needs special attention.

(Refer to the test workflow chart: review and summary.)

Eighth, non-functional testing:

1. Load pressure:

Refers to the traffic the system bears under a specified software, hardware and network environment, such as the number of concurrent users, the continuous running time, and the data volume. The number of concurrent users is the most important expression of load pressure.

2. Load stress test:

Refers to testing the number of concurrent users, the running time, and the data volume the system can withstand under given test constraints, in order to determine the maximum load pressure the system can bear. Load stress testing is an important part of performance testing.

3. Performance test:

Used to ensure that the system's performance will meet users' needs after the product is released. It includes two test strategies: performance evaluation and performance tuning (initially this can be judged by simulating how a user visually perceives page loading).

4. Load test:

Test how system performance changes as the system load is gradually increased, and determine the maximum load the system can bear while the performance indicators are still met.

5. Stress test:

By gradually increasing the system load and observing how performance changes, determine under what load the system fails, and from that obtain the maximum level of service the system can provide. In other words, stress testing finds out under what conditions system performance becomes unacceptable.

6. Concurrency performance test:

Concurrency performance testing is a combined process of load testing and stress testing: the concurrent user load is gradually increased until a system bottleneck or an unacceptable performance point is reached, and the system's concurrency performance is determined by jointly analysing transaction execution metrics and resource monitoring metrics.

Concurrency performance testing is an important part of load stress testing.

It covers the performance of the application on the client side, on the network, and on the server side. Fatigue (endurance) testing uses the maximum number of concurrent users the system can support in stable operation, or the number of daily users, and keeps executing business for a period of time long enough to meet the required fatigue-strength workload; by jointly analysing transaction execution metrics and resource monitoring metrics, it determines how the system performs when processing the maximum workload intensity.
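As a very rough sketch of the idea (real load and stress tests would use a dedicated tool such as JMeter or LoadRunner; the endpoint, user count, and use of the `requests` library here are assumptions for illustration), a handful of simulated concurrent users can be driven like this:

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests  # third-party HTTP client, assumed to be installed

URL = "http://localhost:8000/api/orders"  # hypothetical endpoint under test
CONCURRENT_USERS = 50

def one_request(_):
    # One simulated user: send a request and measure its response time.
    start = time.perf_counter()
    resp = requests.get(URL, timeout=10)
    return resp.status_code, time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        results = list(pool.map(one_request, range(CONCURRENT_USERS)))
    ok = sum(1 for code, _ in results if code == 200)
    times = [elapsed for _, elapsed in results]
    print(f"success: {ok}/{len(results)}")
    print(f"avg: {sum(times) / len(times):.3f}s  max: {max(times):.3f}s")
```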

7. Large data volume test:

Large data volume testing includes independent data volume testing and comprehensive data volume testing.

8. Independent data volume test:

Independent data volume testing refers to large-data-volume tests of specific system functions such as storage, transmission, statistics, and queries. Comprehensive data volume testing refers to large-data-volume tests carried out in combination with stress performance testing, load performance testing, and fatigue testing.

Ninth, defect management

Definition of a bug: the software does not implement the functions described in the requirements specification;

the software behaves in a way that is inconsistent with the requirements specification;

the software's functionality goes beyond the scope of the requirements specification;

the software does not meet the user's expectations;

testers or users find the software poor in ease of use.

(Roughly three distinctions: it does not meet the requirements, the program itself is wrong, or it does not match users' habits.)

The life cycle of bugs: from discovery to resolution

Information recorded in a bug description includes the environment: operating system / database / browser / software version;

plus the functional module, the tester and developer, the severity level (1-5), the customer priority, the risk level, the status, the steps to reproduce, the actual result, whether it has been regression-verified, and so on.

Bug severity levels:

Fatal: the system cannot run, major functional modules cannot be used, the system fails to start or exits abnormally, or users cannot log in.

High: a main function is defective; it does not affect system stability, but a function is not implemented, an error is reported, or a calculation is wrong.

Medium: interface or performance defects, or no response when operating on large volumes of data.

Low: ease-of-use problems and suggestions, such as poor interface colour schemes, typos, or misaligned text.
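A minimal sketch of how these fields might be captured as a defect record (the class is illustrative and not the schema of any particular defect-management tool; the status flow in the comment is a typical convention):

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    FATAL = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4

@dataclass
class BugReport:
    title: str
    module: str
    environment: str              # OS / database / browser / software version
    steps_to_reproduce: list[str]
    actual_result: str
    expected_result: str
    severity: Severity
    priority: int                 # customer priority, e.g. 1 (highest) to 5
    status: str = "new"           # e.g. new -> assigned -> fixed -> verified -> closed
    regression_verified: bool = False

bug = BugReport(
    title="Payment page reports an error when the order total is 0",
    module="Order / Payment",
    environment="Windows 11 / MySQL 8 / Chrome 120 / v2.3.1",
    steps_to_reproduce=["Add a free item to the cart", "Go to checkout", "Click Pay"],
    actual_result="An HTTP 500 error page is shown",
    expected_result="The order completes with an amount of 0",
    severity=Severity.HIGH,
    priority=2,
)
```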

Fix priority classification:


Origin blog.csdn.net/weixin_46658581/article/details/123582893