[Test Development] Section 4. Test Classification

About the author: Hello everyone, I am Weiyang.

Blog homepage: Weiyang.303

Series column: Java Test Development

Daily sentence: There is only one time in a person's life to make a difference, and that is now!


 

Foreword

Today we will learn how test cases are classified.


1. Review of test cases

Universal test case design formula:


How to design test cases according to requirements?

1. Verify that the requirements are correct, reasonable, unambiguous, and logically self-consistent.

2. Analyze and refine the requirements: extract test items from the requirements, find test points for each test item, and then design test cases around those test points.


What are the two aspects of designing test cases according to requirements?

1. Functionality

1.1. No function on the interface may be missed (tip: scan from top to bottom, from left to right, layer by layer).
1.2. Connect multiple functions in series to form a scenario/business flow, and test that scenario/business flow.
1.3. Test a function with multiple inputs, and check whether the output matches the expected result.
1.4. Within the same system, test the interaction between different functions.
1.5. Test the exception handling of functions.
1.6. The algorithms used by the functions also need to be verified.


2. Non-functional

Usability, fault tolerance, security, performance, maintainability, reliability, portability, stability, compatibility, etc.

Note:

For different types of software, the focus of non-functional testing is different.

Client-oriented software: High requirements for stability and compatibility, low requirements for security and performance.
Enterprise-oriented software: High requirements on functionality and reliability, low requirements on compatibility and performance.
Large-scale commercial software: the requirements for all aspects of non-functionality are high!


What are the specific methods for designing test cases?

1. Equivalence class method [valid equivalence classes, invalid equivalence classes] (see the sketch after this list)

2. Boundary value method

3. Decision table method (cause-effect diagram; fewer usage scenarios)

4. Scenario design method (uncommon)

5. Orthogonal method (rarely used)

6. Error guessing method
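
As a concrete illustration of the first two methods, here is a minimal JUnit 5 sketch. The `AgeValidator` class and its [1, 120] range are hypothetical, invented just for the example:

```java
import org.junit.jupiter.api.Test;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;
import static org.junit.jupiter.api.Assertions.*;

// Hypothetical code under test: accepts ages in [1, 120].
class AgeValidator {
    static boolean isValid(int age) {
        return age >= 1 && age <= 120;
    }
}

class AgeValidatorTest {
    // Valid equivalence class: one representative value inside [1, 120].
    @Test
    void typicalValidAge() {
        assertTrue(AgeValidator.isValid(30));
    }

    // Invalid equivalence classes: below the range and above the range.
    @ParameterizedTest
    @ValueSource(ints = {-5, 0, 121, 200})
    void invalidAges(int age) {
        assertFalse(AgeValidator.isValid(age));
    }

    // Boundary values: exactly at and just inside each boundary.
    @ParameterizedTest
    @ValueSource(ints = {1, 2, 119, 120})
    void boundaryAges(int age) {
        assertTrue(AgeValidator.isValid(age));
    }
}
```

One test per equivalence class plus the boundary points keeps the case count small while still catching the most common off-by-one defects.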


2. Classification of test cases


2.1 Classification by test object

Let's explain some of the technical terms mentioned above.

2.1.1 Reliability test

Reliability = normal uptime / (normal uptime + downtime) × 100%

For example, a class lasts 45 minutes; during it you go to the toilet for 5 minutes and are away for another 10 minutes, so the actual time spent listening is 30 minutes.
The reliability of your class attendance is then: 30 / (30 + 5 + 10) × 100% ≈ 66.7%
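
Expressed as code (a tiny sketch of the formula above, not part of the original post):

```java
public class ReliabilityDemo {
    // Reliability = normal uptime / (normal uptime + downtime) * 100%
    static double reliability(double uptimeMinutes, double downtimeMinutes) {
        return uptimeMinutes / (uptimeMinutes + downtimeMinutes) * 100.0;
    }

    public static void main(String[] args) {
        // The classroom example: 30 minutes listening, 5 + 10 minutes missed.
        System.out.printf("%.1f%%%n", reliability(30, 5 + 10)); // prints 66.7%
    }
}
```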

Factors Affecting Reliability:

1. Network
2. Software environment (installation environment)
3. Hardware environment

An abnormality in any of the above environments will cause the software to run abnormally.


4. The software itself

A question arises here: if the software itself has no problem, but the environment it is deployed in (network/software/hardware) does, making the software unable to run normally, is that time counted as the software's abnormal running time?
That is, is it counted against reliability?
This has to be discussed case by case!

In general, the four factors above all refer to server-side problems that will lead to a decrease in the reliability of the software.

Put it this way: if you and everyone else cannot use WeChat, there must be a problem with the WeChat server, and that abnormal running time does need to be counted against reliability.

In addition, as mentioned earlier: Different software has different non-functional requirements.
That is to say: Different software has different requirements for reliability.

For non-real-time software, the reliability requirement is generally 99.99%. [For real-time software it is 99.95%.]
 


In addition, some special software (e.g., military systems) has even higher reliability requirements: 99.999%.
Calculated on a 365-day basis:
99.99% reliability: allowed abnormal running time is about 52 minutes per year;
99.999% reliability: about 5 minutes per year.
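
To see where these two figures come from, here is a quick sketch that turns a reliability target into an annual downtime budget:

```java
public class DowntimeBudget {
    // Allowed downtime per year = (1 - reliability) * minutes in a 365-day year.
    static double allowedDowntimeMinutes(double reliabilityPercent) {
        double minutesPerYear = 365 * 24 * 60; // 525,600 minutes
        return (1 - reliabilityPercent / 100.0) * minutesPerYear;
    }

    public static void main(String[] args) {
        System.out.printf("99.99%%  -> %.1f minutes/year%n", allowedDowntimeMinutes(99.99));  // ~52.6
        System.out.printf("99.999%% -> %.1f minutes/year%n", allowedDowntimeMinutes(99.999)); // ~5.3
    }
}
```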


So, how is reliability tested?

Usually the software is run for about a week, the failure time is recorded, and the percentage is calculated from that.
[PS: Letting the software run for a whole year just to measure reliability is obviously unrealistic.]


2.1.2 Fault tolerance test

When an exception occurs in the system, or a user's wrong operation causes an error in the software, and the software absorbs the error by itself (or corrects/repairs it) so that the customer never perceives the internal problem, that is the fault tolerance of the system.
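
A minimal sketch of what "absorbing the error internally" can look like in code; the `ProfileService` class and its fallback value are hypothetical:

```java
public class ProfileService {
    // Fault tolerance: a bad input or an internal failure is caught and
    // turned into a safe fallback, so the user never sees a stack trace.
    public String loadNickname(String userId) {
        try {
            if (userId == null || userId.isEmpty()) {
                throw new IllegalArgumentException("empty user id");
            }
            return queryDatabase(userId);
        } catch (Exception e) {
            // Log internally for developers, but show a friendly default.
            System.err.println("loadNickname failed: " + e.getMessage());
            return "Guest";
        }
    }

    private String queryDatabase(String userId) {
        // Placeholder for a real lookup that may throw at runtime.
        throw new IllegalStateException("database unavailable");
    }
}
```

A fault tolerance test would deliberately feed this method null, empty, or failing inputs and verify that the user-visible result stays graceful.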

The difference between fault tolerance and reliability: reliability measures how long the system runs without abnormality, while fault tolerance describes how gracefully the system handles an abnormality once it occurs.


2.1.3 Memory leak test

At work, memory leaks are usually found by (a leak sketch follows below):
1) Manual code inspection;
2) Static code scanning with the help of tools.
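
For intuition, here is a classic Java leak pattern that a manual inspection or a static scan should flag (a deliberately buggy sketch, not production code):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // Classic leak: a static collection that only ever grows.
    // Entries are added per request but never removed, so the GC
    // can never reclaim them and heap usage climbs over time.
    private static final List<byte[]> CACHE = new ArrayList<>();

    public static void handleRequest() {
        CACHE.add(new byte[1024 * 1024]); // retains 1 MB per call
    }

    public static void main(String[] args) {
        while (true) {
            handleRequest(); // eventually ends in OutOfMemoryError
        }
    }
}
```

At runtime, heap profilers (for example VisualVM or Eclipse MAT) can confirm such a leak by showing the ever-growing collection on the heap.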


2.1.4 Weak network test

So how are the uplink and downlink rates calculated?
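
The post leaves the calculation to a diagram; the usual definition is simply the amount of data transferred divided by the elapsed time. A minimal sketch (the sample sizes and durations are made up):

```java
public class ThroughputDemo {
    // Rate in KB/s = bytes transferred / elapsed seconds / 1024.
    static double rateKBps(long bytes, double seconds) {
        return bytes / seconds / 1024.0;
    }

    public static void main(String[] args) {
        // Example: upload 512 KB in 4 s, download 2 MB in 4 s.
        System.out.printf("uplink:   %.1f KB/s%n", rateKBps(512 * 1024, 4));      // 128.0
        System.out.printf("downlink: %.1f KB/s%n", rateKBps(2 * 1024 * 1024, 4)); // 512.0
    }
}
```

In a weak network test these rates are deliberately throttled down to see how the software behaves under poor conditions.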


2.2 Classification by whether the code is visible

Black-box testing: purely functional testing; does not care how the functionality is implemented.
White-box testing: focuses on the internal implementation of the program (unit testing); see the sketch below.
Gray-box testing: sits between black-box and white-box testing (integration testing).
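
To make the distinction concrete: a white-box test is written against the code's internal branches, not just its externally visible behavior. A minimal sketch (the `discount` method is hypothetical):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class DiscountTest {
    // Hypothetical code under test: two branches to cover.
    static double discount(double amount) {
        return amount >= 100 ? amount * 0.9 : amount;
    }

    @Test
    void coversDiscountBranch() {   // white-box: exercises the true branch
        assertEquals(90.0, discount(100.0));
    }

    @Test
    void coversNoDiscountBranch() { // white-box: exercises the false branch
        assertEquals(50.0, discount(50.0));
    }
}
```

A black-box tester would instead only check prices on the interface, without knowing where the threshold of 100 sits in the code.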


Why can't gray-box testing (integration testing) replace black-box testing and white-box testing?

Gray-box testing is not as detailed as white-box testing, and it does not cover the product as comprehensively as black-box testing.
Therefore, gray-box testing (integration testing) cannot replace black-box testing or white-box testing.


Interview question: which testing method is used more?

Both black-box testing and white-box testing are used by testers, and they are combined at work according to the specific situation.
Usually, for testers, black-box testing is used relatively more.


2.3 Classification by development stage

Smoke test:

After developers complete a development task, they hand the build over to testers for the first step of testing: assessing whether the software/system meets the basic conditions for further testing.
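
A smoke test usually touches only the critical path. A minimal JUnit 5 sketch, where the `App` class and its methods are hypothetical stand-ins for real startup and login logic:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical application facade used for the smoke check.
class App {
    boolean start()      { return true; } // stands in for real startup logic
    boolean loginWorks() { return true; } // stands in for the core login flow
}

@Tag("smoke")
class SmokeTest {
    // Smoke test: only the make-or-break basics. If these fail,
    // the build is rejected and detailed testing does not begin.
    @Test
    void applicationStarts() {
        assertTrue(new App().start());
    }

    @Test
    void coreLoginFlowAvailable() {
        assertTrue(new App().loginWorks());
    }
}
```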

Regression testing:

After a defect is fixed or a new feature is added, previously passed test cases are re-executed to confirm that the existing functionality still works.

Summary

In this section we reviewed test case design and classified test cases by test object, by whether the code is visible, and by development stage.

Origin: blog.csdn.net/qq_64861334/article/details/130367589