The most detailed introduction to software testing basics you will find on the web

1. Overview of software testing

1. Software defects

Software defect (also known as a "bug"): a problem, error, or hidden functional flaw in a computer program or software system that impairs its ability to work properly.

Manifestations of defects

The software does not implement a function required by the product specification;
the software exhibits an error that the product specification says should not occur;
the software implements a function not mentioned in the product specification;
the software fails to implement something that the product specification does not explicitly mention but that it clearly should provide;
the software is difficult to understand, difficult to use, slow to run, or otherwise user-unfriendly.

Causes of software defects

Unclear requirements;
complex system structure;
incomplete consideration of program logic paths or data ranges;
difficulty guaranteeing precise timing synchronization in the design;
hidden systemic and reliability problems;
a complex operating environment;
a large number of communication ports, which affects system security and applicability;
compatibility issues among the technologies used in the design.

Properties of defects

Defect ID: a unique identifier;
Defect Type: the category the defect falls into;
Defect Severity: the degree of impact the failure caused by the defect has on the software product;
Defect Priority: the urgency with which the defect must be fixed;
Defect Status: where the defect currently stands in the tracking-and-repair process;
Defect Origin: the stage at which the failure or incident caused by the defect is first detected;
Defect Source: the cause of the defect;
Defect Root Cause: the underlying error at the root of the defect.

2. Definition and principles of software testing

Definition: Software testing is the process of executing a program or system in order to find bugs.

Principles:

Testing shows the presence of defects;
exhaustive testing is impossible;
test as early as possible;
defects cluster (the 80/20 rule: roughly 80% of problems are found in 20% of the modules);
the pesticide paradox: repeating the same tests eventually stops finding new bugs;
testing is context-dependent;
the absence of errors is a fallacy: finding no defects does not mean the software meets user needs;
software testing is a risk-based activity.


2. Software testing process and strategy

1. Overview of software testing strategy

A software testing strategy is a template for testing within the software engineering process, that is, a series of steps into which specific test-case design methods can be placed:

Features of software testing

Testing starts at the module level and expands outward to the entire integrated, computer-based system;
different testing techniques are appropriate at different points in time;
testing is managed both by developers and by an independent test group;
testing and debugging are different activities, but debugging must be accommodated within any testing strategy.

Software testing adequacy criteria

For any software there is a finite set of tests that is adequate;
if a software system has been adequately tested on a set of test data, then testing it on additional test data is still adequate;
even if every component of the software has been fully tested, the testing of the software as a whole is not necessarily sufficient;
even if the software system as a whole has been adequately tested, each of its components has not necessarily been fully tested;
the adequacy of software testing depends on both the software's requirements and its implementation;
the more complex the software, the more test data it needs;
the more a piece of software has already been tested, the less additional adequacy further testing yields.

2. Classification of software testing

Division by development stage

1) Unit testing:
the checking and verification of the smallest testable units in the software. Unit test cases are designed from the software's internal structure, and multiple modules can be tested independently of one another.
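
For instance, here is a minimal unit-test sketch (assuming Python and its standard unittest module; the discount function is a hypothetical unit under test):

```python
import unittest

def discount(price: float, rate: float) -> float:
    """The smallest testable unit here: apply a percentage discount."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class DiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(discount(100.0, 0.2), 80.0)

    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 1.5)

if __name__ == "__main__":
    unittest.main()
```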

2) Integration testing:
also called assembly testing or joint testing: modules are assembled into subsystems or systems according to the design requirements and tested together.

3) System testing:
the validated software is combined with the computer hardware, peripherals, network, and other elements, and assembly and confirmation tests are run against the information system as a whole. System testing targets the entire product.

4) Acceptance testing:
also called delivery testing: confirms that the software is ready for delivery.

Division by testing technique

1) White-box testing:
also called structural testing, transparent-box testing, logic-driven testing, or code-based testing: test cases are designed from the internal structure and logic of the code.

2) Black-box testing:
also called functional testing: verifies through inputs and outputs alone that each function can be used normally.

3) Gray-box testing:
a method between white-box and black-box testing: it attends not only to the correctness of inputs and outputs but also to the program's internal state.
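
To make the black-box perspective concrete, here is a tiny sketch (assuming Python; abs_value is a hypothetical function under test). The tester relies only on the specified input/output behavior, never on the implementation:

```python
def abs_value(x):
    # Implementation details are opaque to a black-box tester;
    # only the specified input/output behavior matters.
    return x if x >= 0 else -x

# Black-box test: (input, expected output) pairs drawn from the specification.
cases = [(5, 5), (-5, 5), (0, 0), (-0.5, 0.5)]
for given, expected in cases:
    actual = abs_value(given)
    assert actual == expected, f"abs_value({given}) -> {actual}, expected {expected}"
print("all black-box checks passed")
```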

Division by whether the software under test is actually run

1) Static testing:
checking the correctness of a program by analyzing or inspecting the source code's syntax, structure, process, interfaces, and so on, without running the program itself.

For code: mainly checking whether the code complies with the relevant standards and conventions;
for interfaces: mainly checking whether the software's actual interface matches the description in the requirements;
for documents: mainly checking whether the user and requirements documentation matches the users' actual needs.
2) Dynamic testing:
running the program under test, checking for differences between the actual and expected results, and analyzing run-time qualities such as efficiency, correctness, and robustness.
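
The contrast can be shown in a few lines (a sketch, assuming Python; the checks themselves are illustrative). The static half inspects the source without executing it, while the dynamic half runs the code and compares actual against expected results:

```python
import ast

SOURCE = '''
def add(a, b):
    return a + b
'''

# Static testing: analyze the syntax tree without running the program.
tree = ast.parse(SOURCE)
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None:
        print(f"static finding: function '{node.name}' has no docstring")

# Dynamic testing: execute the code and compare actual vs. expected output.
namespace = {}
exec(SOURCE, namespace)
assert namespace["add"](2, 3) == 5, "expected add(2, 3) == 5"
print("dynamic check passed")
```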

Division by who performs the testing

1) Developer testing:
verification testing / alpha (α) testing

2) User testing:
beta (β) testing

3) Third-party testing

Division by test type

1) Functional testing:
tests the software against the product requirements specification and verifies whether its functions meet the requirements, including checking the specified functions and looking for redundant or missing functionality.

2) Interface testing:
tests the system's user interface: whether it is friendly, whether the software is convenient and easy to use, whether the design is reasonable, whether interface elements are positioned correctly, and so on.

3) Performance testing:
tests whether the system's performance meets users' needs, that is, verifies the system's capability under specific operating conditions. Performance testing mainly uses automated tools to simulate normal, peak, and abnormal load conditions and to measure the system's performance indicators.
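
Dedicated tools such as JMeter or Locust are the usual choice, but the underlying idea can be sketched in a few lines (assuming Python; handle_request is a hypothetical stand-in for the operation under test):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request() -> None:
    """Hypothetical operation under test."""
    time.sleep(0.01)  # simulate 10 ms of work

def measure(concurrent_users: int, requests_per_user: int) -> float:
    """Drive the operation with simulated users; return total wall time."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users * requests_per_user):
            pool.submit(handle_request)
    return time.perf_counter() - start  # pool waits for all tasks on exit

for users in (1, 10, 50):  # normal, peak, and heavy load
    print(f"{users:>3} concurrent users: {measure(users, 20):.2f}s")
```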

4) Strength testing:
forces the system to run under abnormal resource configurations; the purpose is to find errors caused by insufficient resources or by resource contention.

5) Stress testing:
tests whether the system can still operate normally in an overloaded environment.

6) Security testing:
tests the system's ability to resist unauthorized intrusion.

7) Compatibility testing:
tests the software product's compatibility across different platforms, with different tool software, or with different versions of the same tool software.

8) Installation testing:
verifies that the software installs correctly, that the installation files and settings are valid, whether installation affects the rest of the computer system, whether the software uninstalls cleanly, and whether the system is affected after uninstallation.

9) Documentation testing:
checks the clarity and accuracy of internal and external documentation.

3. Software testing process models

3.1 V model

(figure omitted: the V model) Each development activity on the left-hand side, from requirements down to coding, maps to a corresponding test level on the right-hand side, from unit testing up to acceptance testing.

3.2 W model

(figure omitted: the W model) Two parallel "V"s: a testing activity accompanies each development activity, so testing can begin as early as the requirements phase.

3.3 H model

(figure omitted: the H model) Testing is an independent process that runs concurrently with the other development activities and is triggered whenever its readiness conditions are met.

3.4 X model

(figure omitted: the X model) Frequent cycles of coding and testing on individual program pieces, which are then integrated step by step, with room left for exploratory testing.


4. Definition and characteristics of test cases

4.1 Characteristics of Test Cases

1. Test cases are representative: they can represent and cover legal and illegal, reasonable and unreasonable, boundary and out-of-bounds, and extreme input data, operations, and environment settings.

2. Test results are decidable: whether a test execution result is correct can be determined; every test case should have a definite expected result, otherwise it is hard to judge whether the system behaves correctly.

3. Test results are reproducible: executing the same test case against the system should always yield the same result.

4.2 Principles of test case design

Use a combination of test-case design methods;
ensure the correctness of the test-case data and operations;
ensure that test cases are representative;
aim each test case at a single test item;
ensure that test results are decidable and reproducible;
ensure that test-case descriptions are accurate, clear, and specific;
keep the test-case design within the project's time, staffing, and budget constraints.

4.3 Test case template

4.3.1 Basic elements of test cases
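
The original table is not reproduced here, but the basic elements can be sketched as a record (assuming Python; the field names are typical, not canonical):

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Illustrative test-case record; fields mirror the usual template."""
    case_id: str                   # unique identifier
    title: str                     # what is being verified
    preconditions: str             # state required before execution
    steps: list[str]               # ordered actions to perform
    input_data: str                # data fed to the system
    expected_result: str           # decidable expected outcome
    actual_result: str = ""        # recorded during execution
    status: str = "not run"        # e.g. pass / fail / blocked
```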


4.3.2 Functional Test Cases


4.3.3 Performance test cases

1. Expected performance test cases


2. User Concurrency Performance Test Cases


3. Large data volume performance test cases


4. Fatigue strength test case


5. Load test cases

4.3.4 Compatibility Test Cases

3. Black box testing

1. Equivalence class division method

1. Valid equivalence classes: sets of input data that are reasonable and meaningful with respect to the program's specification. Valid equivalence-class data include: commands entered by the end user, system prompts that interact with the end user, names of accepted user files, supplied initialization and boundary values, commands that produce formatted output data, data echoed on failure, and so on.

2. Invalid equivalence classes: sets of input data that are unreasonable or meaningless with respect to the software specification.

3. Ways of dividing equivalence classes

Divide by interval;
divide by individual value;
divide by set of values;
divide by restriction or rule;
divide by processing method.

4. Principles of equivalence-class division

1. If an input condition specifies a range of values or a number of values, define one valid equivalence class and two invalid equivalence classes;
2. if an input condition specifies a set of input values (say, n values), define n valid equivalence classes and one invalid equivalence class;
3. if an input condition specifies a rule the input data must obey, define one valid equivalence class and several invalid equivalence classes;
4. if an input condition specifies a set of input values or a "must be" condition, define one valid equivalence class and one invalid equivalence class;
5. if the elements within an already-defined equivalence class are handled differently by the program, split that class into smaller equivalence classes.
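
As a concrete sketch (assuming Python; the age rule of 0-120 is hypothetical), principle 1 yields one valid class and two invalid classes for a range, plus one more invalid class for wrong types; weak equivalence testing then picks one representative value per class:

```python
def validate_age(age) -> bool:
    """Hypothetical rule under test: accept integer ages in [0, 120]."""
    return isinstance(age, int) and 0 <= age <= 120

# One representative value per equivalence class (weak equivalence testing):
valid_in_range = [35]         # valid class: integer within [0, 120]
invalid_low    = [-1]         # invalid class: below the range
invalid_high   = [121]        # invalid class: above the range
invalid_type   = ["thirty"]   # invalid class: not an integer at all

for value in valid_in_range:
    assert validate_age(value), f"{value!r} should be accepted"
for value in invalid_low + invalid_high + invalid_type:
    assert not validate_age(value), f"{value!r} should be rejected"
print("all equivalence-class checks passed")
```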

5. Weak normal equivalence-class testing: each test case takes one value from each valid equivalence class (interval).

6. Strong normal equivalence-class testing: based on the multiple-fault assumption; test cases cover every combination (the Cartesian product) of the valid equivalence classes.

7. Weak robust equivalence-class testing: like weak normal testing, but invalid values are also included, one invalid value per test case.

8. Strong robust equivalence-class testing: test cases cover every combination of the valid and invalid equivalence classes.

9. Exercises

2. Boundary value method

2.1 Boundary value analysis

Boundary value analysis is a black-box testing method that tests the boundary values of the input or output space. Its basic idea is to use variable values at the minimum, just above the minimum, a nominal value, just below the maximum, and the maximum.
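
Continuing the age example from the equivalence-class sketch above (assuming Python; the 0-120 rule remains hypothetical), the five standard boundary values plus two robustness values look like this:

```python
def validate_age(age) -> bool:
    # Same hypothetical rule as in the equivalence-class sketch.
    return isinstance(age, int) and 0 <= age <= 120

# min, just above min, nominal, just below max, max:
boundary_accept = [0, 1, 60, 119, 120]
# Robustness analysis adds values just outside the range:
boundary_reject = [-1, 121]

for value in boundary_accept:
    assert validate_age(value), f"boundary value {value} should be accepted"
for value in boundary_reject:
    assert not validate_age(value), f"out-of-range {value} should be rejected"
print("all boundary-value checks passed")
```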

2.2 Robustness analysis

2.3 Worst case test

2.4 Exercises

2.5 Random testing

2.6 Guidelines for Boundary Value Testing

3. Decision table method

3.1 Decision table

3.2 Examples

3.3 Guidelines
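
Since the example section above is only an outline, here is a hedged sketch of the idea (assuming Python; the login rules are illustrative): a decision table maps each combination of conditions to an expected action, and every column (rule) of the table becomes one test case:

```python
# Decision table for a hypothetical login check:
# conditions (account exists, password correct) -> expected action
decision_table = {
    (True,  True):  "log in",
    (True,  False): "wrong password",
    (False, True):  "unknown account",
    (False, False): "unknown account",
}

def login(account_ok: bool, password_ok: bool) -> str:
    """Hypothetical system under test."""
    if not account_ok:
        return "unknown account"
    return "log in" if password_ok else "wrong password"

# Each rule (column) of the decision table becomes one test case.
for conditions, expected in decision_table.items():
    actual = login(*conditions)
    assert actual == expected, f"{conditions}: got {actual!r}, expected {expected!r}"
print(f"all {len(decision_table)} decision-table rules pass")
```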

4. Cause and Effect Diagram

The cause-and-effect diagram is a method for designing test cases by graphically analyzing combinations of input conditions; it is well suited to checking the various combinations of a program's input conditions.

5. Scenario method

6. Orthogonal experimental method

4. White box testing (to be covered in a follow-up)
