Explanation of Common Terms in Software Testing

1. Unit testing: The most basic level of testing, applied to a piece of code whose actual size is not fixed but is usually a single function or subroutine. Unit tests are generally written and executed by developers.
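
As a minimal sketch (the `add` function and its tests are hypothetical examples, not from the original text), a unit test written with Python's standard `unittest` module might look like this:

```python
import unittest

def add(a, b):
    """Hypothetical unit under test: a single small function."""
    return a + b

class TestAdd(unittest.TestCase):
    def test_positive_numbers(self):
        self.assertEqual(add(2, 3), 5)

    def test_negative_numbers(self):
        self.assertEqual(add(-2, -3), -5)

if __name__ == "__main__":
    unittest.main()  # typically run by the developer who wrote the code
```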

2. Integration testing: All components of the system under test are integrated together to find errors in the relationships and interfaces between them. This testing is generally performed after unit testing.
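
A hedged sketch of the idea: the test below exercises two hypothetical components together (a parser and a formatter) and checks the interface between them, rather than testing each in isolation:

```python
import unittest

def parse_csv_line(line):
    """Component A: splits a CSV line into fields."""
    return [field.strip() for field in line.split(",")]

def format_report(fields):
    """Component B: consumes A's output to build a report line."""
    return " | ".join(fields)

class TestParserFormatterIntegration(unittest.TestCase):
    def test_components_work_together(self):
        # The defect being hunted lives in the interface:
        # does B accept exactly what A produces?
        fields = parse_csv_line("alice, 42, admin")
        self.assertEqual(format_report(fields), "alice | 42 | admin")

if __name__ == "__main__":
    unittest.main()
```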

3. Acceptance testing: A phase of the system development life cycle in which the system is tested and accepted, according to the test plan and its results, by the relevant users and/or independent testers. It lets the system's users decide whether to accept the system; that is, it determines whether the product meets the requirements specified in the contract or by the user. These are administrative and defensive controls.

4. Alpha testing: A test conducted by users in the development environment, or a controlled test conducted by users within the developing company in a simulated operating environment; it cannot be carried out by the programmers or testers themselves. Alpha testing can begin after coding of the software product is finished, after module (subsystem) testing is complete, or once the product has reached a certain level of stability and reliability during confirmation testing. The relevant manuals (drafts) and similar materials should be prepared before Alpha testing begins.

5. Beta testing: A test conducted by multiple users of the software in the actual usage environment of one or more users. The developer is usually not present at the test site, and Beta testing cannot be carried out by programmers or testers; it is therefore the live application of the software in an environment beyond the developer's control. Beta testing focuses on the supportability of the product and can only begin after Alpha testing has reached a certain level of reliability.

6. Black box testing: A test method in which the tester does not care how the program is implemented. Based on the software specification, the software is exercised with various inputs and its outputs are checked in order to find defects. This type of testing does not consider the internal workings of the software, so to the tester the software is like a black box.
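
A minimal black-box sketch: every test case below is derived from a hypothetical specification of `discount()` (10% off prices above 100, negative prices rejected), never from reading the code. The implementation is included only so the example runs:

```python
import unittest

def discount(price):
    """Shown only so the example is runnable; a black-box tester
    would treat this function as opaque."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return price * 0.9 if price > 100 else price

class TestDiscountBlackBox(unittest.TestCase):
    """Cases come from the specification, not from the code."""

    def test_ten_percent_discount_over_100(self):
        self.assertEqual(discount(200.0), 180.0)

    def test_no_discount_at_boundary(self):
        self.assertEqual(discount(100.0), 100.0)

    def test_negative_price_rejected(self):
        with self.assertRaises(ValueError):
            discount(-1.0)

if __name__ == "__main__":
    unittest.main()
```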

7. White box testing: Testing based on analysis of the software's internal workings; it is code-based testing in which the tester judges the quality of the software by reading the program code or by single-stepping through it in a development tool's debugger. White box testing is generally implemented by the project manager during program development.
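
By contrast, a white-box sketch designs its cases from the code's internal structure. Here both branches of a hypothetical `classify()` function are covered deliberately, something only possible after reading the code:

```python
import unittest

def classify(score):
    """Hypothetical code under test with two branches."""
    if score >= 60:    # branch 1
        return "pass"
    return "fail"      # branch 2

class TestClassifyWhiteBox(unittest.TestCase):
    """Cases chosen by reading the code so that every branch executes."""

    def test_branch_pass(self):
        self.assertEqual(classify(60), "pass")  # branch 1, at its edge

    def test_branch_fail(self):
        self.assertEqual(classify(59), "fail")  # branch 2

if __name__ == "__main__":
    unittest.main()
```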

8. Automated testing: Testing performed with automated testing tools. This type of testing generally requires no human intervention and is most often used in GUI, performance, and similar tests.
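
A small sketch of what "no human intervention" can mean in practice: a table of cases run in one unattended batch, shown here with plain `unittest` subtests (tools such as pytest offer richer parametrization; the `to_celsius` function is a hypothetical example):

```python
import unittest

def to_celsius(fahrenheit):
    """Hypothetical function exercised by the automated suite."""
    return (fahrenheit - 32) * 5 / 9

class TestToCelsiusAutomated(unittest.TestCase):
    def test_table_of_cases(self):
        # The whole table runs unattended; a CI server could invoke
        # this suite on every build with no manual steps.
        cases = [(32, 0.0), (212, 100.0), (-40, -40.0)]
        for fahrenheit, expected in cases:
            with self.subTest(fahrenheit=fahrenheit):
                self.assertAlmostEqual(to_celsius(fahrenheit), expected)

if __name__ == "__main__":
    unittest.main()
```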

9. Bug tracking system (BTS): Also known as a defect tracking system (DTS), a dedicated database system for managing defects found in software testing. It efficiently handles the reporting, verification, modification, querying, statistics, and storage of software defects, and is especially suitable for the test management of large, multilingual software.

10. Build (working version): A software version produced during development for internal testing, whose functionality and performance may still be incomplete. A build can be an operational version of the whole system, or a part of the system that demonstrates some of the functionality to be provided in the final product.

11. Functional testing: Also known as behavioral testing; based on the product's characteristics, operation descriptions, and user scenarios, it tests a product's features and operational behavior to confirm that they meet the design requirements. For localized software, functional testing verifies that an application or website works correctly for its intended users: testing on the appropriate platforms and browsers with suitable test scripts ensures that the target users' experience is as good as if the application had been developed specifically for that market.

12. Load testing: Tests the behavior of the system under resource overload in order to find design errors or verify the system's load capacity. In this test, the test object is subjected to workloads of different sizes to evaluate its performance under each workload and its ability to continue operating normally. The goal of load testing is to determine, and help ensure, that the system still functions properly when the maximum expected workload is exceeded. Load testing also evaluates performance characteristics such as response time, transaction processing rate, and other time-related aspects.
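
A toy load-testing sketch, assuming a local function stands in for the system under test: it applies progressively heavier levels of concurrency and records response times. Real load tests use dedicated tools (JMeter, Locust, and the like); this only illustrates the idea:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Stand-in for the system under test."""
    time.sleep(0.01)  # simulated processing time
    return "ok"

def timed_call():
    start = time.perf_counter()
    handle_request()
    return time.perf_counter() - start

def run_load(workers, requests):
    """Subject the 'system' to one workload level and report timing."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(lambda _: timed_call(), range(requests)))
    print(f"{workers:3d} workers: max response {max(latencies) * 1000:.1f} ms")

# Evaluate behavior under progressively heavier workloads.
for level in (1, 10, 50):
    run_load(workers=level, requests=100)
```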

13. Performance testing: A test to evaluate whether a product or component meets its performance requirements. It includes load testing, stress testing, database capacity testing, benchmark testing, and other types.

14. Pilot testing: In software development, pilot testing verifies the system's ability to handle typical operations on real hardware with real customers. In software outsourcing, a pilot test is usually the means by which a customer checks a testing company's capabilities; only after passing the customer's pilot test can a software testing company take on testing of the customer's real software projects.

15. Portability testing: Tests whether the software can be successfully ported to the specified hardware or software platforms.

16. Compatibility testing: Also known as configuration testing; tests whether the software is compatible with the other elements of the system it interacts with, such as browsers, operating systems, and hardware, and verifies how the test object operates under different software and hardware configurations.

17. Installation testing: Ensures that the software can be installed under both normal and abnormal conditions, for example a first-time installation, an upgrade, or a complete or customized installation. Abnormal conditions include insufficient disk space, lack of permission to create directories, and so on. It also verifies that the software works correctly immediately after installation. Installation testing covers both the installation code and the installation manual: the manual describes how to install the software, and the installation code provides the basic data needed to install the runnable programs.

18. Smoke testing: The object of a smoke test is each newly compiled build that is submitted for formal testing. Its purpose is to confirm that the basic functions of the software work, so that subsequent formal testing can proceed. The smoke test is executed by the person who builds the version. See "Sanity testing".
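
A smoke-test sketch, assuming the "build" is a hypothetical module with a few core entry points; the checks are deliberately shallow, confirming only that the basics work before deeper testing begins:

```python
import unittest

# Hypothetical core functions of the newly built version.
def start_app():
    return {"status": "running"}

def load_config():
    return {"lang": "en"}

class SmokeTest(unittest.TestCase):
    """Shallow checks: does the build start, and do core paths respond?
    If these fail, the build goes back to development and the formal
    test pass is not started."""

    def test_app_starts(self):
        self.assertEqual(start_app()["status"], "running")

    def test_config_loads(self):
        self.assertIn("lang", load_config())

if __name__ == "__main__":
    unittest.main()
```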

19. Sanity testing: A simple test of the software's main functional components to ensure that it is sound enough to support basic testing.

20. Regression testing: Re-running previous tests after a modification to ensure the modification is correct. In theory, every new version of the software requires regression testing to verify that previously discovered and fixed bugs do not reappear in the new version.
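
A regression-test sketch: once a bug is fixed, a test pinned to that bug stays in the suite so every future version re-verifies the fix. The bug number and `safe_divide` function are hypothetical examples:

```python
import unittest

def safe_divide(a, b):
    """Fixed code: originally crashed with ZeroDivisionError
    (bug #123, a hypothetical ticket number used for illustration)."""
    if b == 0:
        return None
    return a / b

class TestBug123Regression(unittest.TestCase):
    """Kept in the suite permanently and run against every new
    version to make sure the old defect does not reappear."""

    def test_divide_by_zero_returns_none(self):
        self.assertIsNone(safe_divide(1, 0))

    def test_normal_division_still_works(self):
        self.assertEqual(safe_divide(6, 3), 2)

if __name__ == "__main__":
    unittest.main()
```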

21. Priority: From a business point of view, the importance of a defect, especially from the perspective of customers and users; it refers to the defect's impact on the viability and acceptability of the system. Contrast with severity.

22. Severity: The degree to which a defect affects the system under test: the likelihood of its occurrence under end-user conditions and the extent to which it prevents use of the system.

23. Software life cycle: Begins with the conception of a software product and ends when the product is no longer in use.

24. SRS (Software Requirement Specification): The document that specifies the requirements of the software.

Questions:
1. The difference between UAT and SIT
UAT (User Acceptance Testing): Testing carried out with end users. It mainly requires users to participate in the testing process in order to obtain their approval of the software, and it encourages users to do their own test design and destructive testing so as to fully expose the system's design and functional problems. Clearly, user approval and destructive testing are the difficult points, because the testers do not know what methods and thought patterns the users will apply. UAT is somewhat similar to user-experience testing.
SIT (System Integration Testing): Similar to software module integration testing, but users rarely participate; it is mainly carried out within the company.

Origin blog.csdn.net/xiao1542/article/details/130513782