Mastering software performance testing and LoadRunner: best-practice notes you won't want to miss

Classification of software testing
Software testing can be classified in several ways: by testing phase, by whether the program is executed, by whether the source code is visible, and so on.

Black box testing, white box testing and gray box testing
1. Black box testing
Black-box testing is one of the main software testing methods. It is also called functional testing, data-driven testing, or specification-based testing. The tester does not know the internal workings of the program; they know only its inputs, outputs, and system functions, so the program is tested from the user's perspective. Black-box testing must therefore be performed at the software's interfaces. The method treats the test object as a black box: the tester ignores the program's logical structure and internal characteristics entirely and checks only whether the program's behavior matches its requirements specification.

Black-box testing involves a large element of chance. After most test cases have been executed, only about 40% of the functionality may actually have been exercised. According to one set of official U.S. statistics, 20% of problems are found during development and 80% during system and integration testing; of that 80%, roughly 20% are usability issues, 20% are program defects, 5% are logic errors, and the rest are hard to classify. The lesson such figures suggest is that finding more problems requires more thinking and more combinations of inputs. That in turn creates a great deal of work, and testers exhaust themselves executing test cases in the hope of discovering new problems.

This style of test case design means that when we develop a large-scale product or a product line, the continuity of the test cases is very poor and their reusability is poor as well. So a concept needs correcting here: black-box testing is not just casual use of the software, and test case design is not mindless combination of inputs.

So how do you design good test cases, and how do you apply the 80/20 principle during development? No product is flawless, but as a software engineer and a software test engineer I certainly hope that the products I help develop are stable, easy to use, and praised by users, and that they meet the needs of most people. Isn't that a more reasonable goal? I believe that through the continued efforts of software engineers, test engineers, and quality assurance staff, our software products will satisfy users.

2. White box testing
White-box testing is the other major software testing method. It is also known as structural testing, logic-driven testing, or testing based on the program itself. It focuses on the program's internal structure and algorithms and usually pays no attention to functional or performance indicators. White-box testing examines the procedural details of the software: the test object is treated as an open box, and the tester uses the program's internal logical structure and related information to design or select test cases that exercise all of the program's logical paths. By checking the program's state at different points, the tester determines whether the actual state matches the expected state.

White-box testing analyzes the control structures and processing in the source code to check whether the program's internal handling is correct, including exception handling, statement structure, branches, and loops. For much control software it is also necessary to look for redundant code, because at run time the program might enter such code and never resume normal execution (for example, entering an infinite loop from which it can never terminate). This kind of testing demands a high level of program comprehension and coding ability: testers need to understand the program's structure, the specific requirements, and some programming techniques, and must be able to check coding standards as well as problems such as pointer misuse, uninitialized variables, and array out-of-bounds access, so that defects are exposed at an early stage.

White-box testing is generally performed at the unit or module level. Current practice usually places it within the scope of development. It is typically done by senior programmers, dedicated white-box testers, or with professional code analysis tools such as BoundsChecker, Jtest, and C++test; these tools help developers find uninitialized variables, null pointers, memory leaks, coding-standard violations, and similar problems.

The main methods (coverage criteria) of white-box testing are the following; a short C illustration follows the list.

1. Statement coverage: every statement in the program is executed at least once.
2. Decision (judgment) coverage: every decision in the program takes both the true and the false outcome at least once.
3. Condition coverage: every condition within each decision takes all of its possible outcomes.
4. Decision/condition coverage: decision coverage and condition coverage are both satisfied.
5. Condition combination coverage: every possible combination of condition outcomes within each decision occurs at least once.
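As a minimal illustration (the function and the input values are my own, not from the original article), consider how these criteria differ for one small C function with a single two-condition decision:

```c
#include <stdio.h>

/* A deliberately simple function used only to illustrate the coverage criteria. */
int classify(int a, int b)
{
    int result;
    if (a > 0 && b > 0)        /* one decision made up of two conditions */
        result = 1;
    else
        result = -1;
    return result;
}

int main(void)
{
    /* Statement coverage: (1,1) and (-1,1) together execute every statement,
       because both the "true" branch and the "else" branch are reached.      */
    /* Decision (judgment) coverage: the same two cases make the decision
       true once and false once.                                              */
    /* Condition coverage: (1,-1) and (-1,1) give each condition a true and a
       false outcome, yet the decision is false both times, which is why
       decision/condition coverage requires both criteria together.
       (Note that && short-circuits in C, so b > 0 is not evaluated at all
       when a > 0 is false.)                                                  */
    /* Condition combination coverage needs all four combinations:
       (1,1), (1,-1), (-1,1), (-1,-1).                                        */
    printf("%d %d %d %d\n",
           classify(1, 1), classify(1, -1), classify(-1, 1), classify(-1, -1));
    return 0;
}
```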
3. Gray box testing
Gray-box testing is a technique that, working from the program's external interfaces, examines the external behavior of the running program while also taking its internal logical structure into account, gathering information about execution paths and internal state as the program runs. It sits between white-box and black-box testing. Gray-box testing can be understood as paying attention to the correctness of the output for a given input while also watching the internal behavior, but not in the detailed, complete way a white-box test does: it judges the internal state only through representative phenomena, events, and markers. Sometimes the output is correct while something inside is actually wrong, and such cases are common; running a full white-box test every time would be very inefficient, so a gray-box approach is used instead.

Gray-box testing combines elements of white-box and black-box testing. It takes the user side, specific knowledge of the system, and the operating environment into account.

Gray-box testing consists of methods and tools derived from knowledge of the application's internals and of the environment with which it interacts. They can be applied alongside black-box testing to improve testing efficiency, error discovery, and error analysis.

Gray-box testing deals with inputs and outputs, but designs its tests using information about the code and the program's operation that would normally be outside the tester's view.

Static test and dynamic test

The concepts of static testing and dynamic testing are mentioned in many books, and I will introduce these two concepts here.

1. Static testing
Static testing refers to checking the program code, interface, or documentation for possible errors without actually running the software under test.

From this definition it is easy to see that static testing covers three kinds of work: program code, the interface, and documentation.

(1) Program code testing: mainly performed by programmers through code inspections and code reviews, checking whether the code violates coding standards, whether it is inconsistent with the business logic it is supposed to implement, and whether it contains problems such as memory leaks or null pointers.

(2) Interface (UI) testing: the tester checks, from the user's point of view and against the company's UI (User Interface) design guidelines, whether the interface of the software under test meets users' requirements. Here I strongly favor providing an interface prototype for users before the software is developed, listening to their opinions, continuously improving the prototype, and finally implementing the software according to the prototype that is adopted.

(3) Documentation testing: checking whether the requirements specification and the user manual meet users' needs.

To show how static testing is carried out, let us take program code testing as an example.

First, please take a look at a small program written in C. The code is as follows:
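(The original article shows this code only as a screenshot, which is not reproduced here; the sketch below is a reconstruction of the kind of function the following discussion describes, assuming a msg() routine that allocates 100 bytes per call and never releases them.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Reconstruction for illustration: each call requests 100 bytes that are never freed. */
void msg(const char *text)
{
    char *buf = (char *)malloc(100);   /* 100 bytes taken from the heap */
    if (buf == NULL)
        return;
    strncpy(buf, text, 99);
    buf[99] = '\0';
    printf("%s\n", buf);
    /* buf is never freed, so 100 bytes leak on every call */
}

int main(void)
{
    int i;
    for (i = 0; i < 1000000; i++)      /* long-running or concurrent use magnifies the leak */
        msg("hello");
    return 0;
}
```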
Do you see the problem? If you know a little C, you know that memory requested with malloc() must be released once the task that needed it is complete; otherwise the memory leaks. From the code above it is easy to see that every call to the msg() function leaks 100 bytes. With plenty of memory available, one or two leaks are trivial, but after several hours of continuous operation, and especially under concurrent multi-user load, even such a small leak will gradually weaken the application's processing capacity, and the end result is that memory is exhausted.

In practical C and C++ programming, after you allocate memory with malloc() in your code, you must remember to release it with free() once the task is complete. Likewise, when your application operates on a file it must close the file, and after it establishes a connection it must close the connection; if such resources are not released in time, leaks occur in the same way.
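A corrected version of the sketch above simply pairs the malloc() with a free() once the buffer is no longer needed; the same discipline applies to closing opened files and established connections.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Corrected version of the earlier sketch: every malloc() is matched by a free(). */
void msg_fixed(const char *text)
{
    char *buf = (char *)malloc(100);
    if (buf == NULL)
        return;
    strncpy(buf, text, 99);
    buf[99] = '\0';
    printf("%s\n", buf);
    free(buf);                         /* release the 100 bytes before returning */
}

int main(void)
{
    msg_fixed("hello");
    return 0;
}
```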

Besides the code problems mentioned above, missing comments are also a problem. As everyone knows, software is usually written by several people collaborating, each writing part of the functionality. Each person may understand their own modules very well, but sooner or later you will have to modify someone else's code (for example, when a developer leaves and you take over maintenance of the code they wrote). Without comments, understanding hundreds of thousands or millions of lines of code is extremely difficult; with comments, you can quickly grasp the author's intent, which makes later maintenance much easier.

2. Dynamic testing
The counterpart of static testing is dynamic testing. Dynamic testing refers to actually running the software under test, feeding it the corresponding test data, and checking whether the actual output matches the expected output. Comparing the two definitions, it is easy to see that the only difference between static and dynamic testing is whether the program is run.

To show how dynamic testing is carried out, here is a concrete example. Take the calculator that ships with Windows: if we enter "5+50=", then when designing the test case the expected result is 55; if the actual result is not 55, the program is wrong. See Figure 1-2.
(Figure 1-2: the Windows calculator displaying the result of 5+50.)
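The calculator itself cannot be driven from a few lines of code, so here is a minimal sketch of the same idea applied to a hypothetical add() function: the test actually runs the code, feeds it 5 and 50, and compares the actual output with the expected value 55.

```c
#include <stdio.h>

/* Hypothetical function under test. */
static int add(int a, int b)
{
    return a + b;
}

int main(void)
{
    int expected = 55;
    int actual = add(5, 50);           /* dynamic testing: the code is actually executed */

    if (actual == expected)
        printf("PASS: 5 + 50 = %d\n", actual);
    else
        printf("FAIL: expected %d, got %d\n", expected, actual);
    return actual == expected ? 0 : 1;
}
```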
Unit test, integration test, system test and acceptance test
1. Unit testing
Unit testing is the finest-grained level of testing. It tests product functions and modules closely against the program's framework, including entry and exit parameters, input and output data, error-handling information, and some boundary values.

In China this work is currently done by developers in most cases; I believe that in the future it should be done by test engineers. The current situation is closely tied to the early stage of software testing in this country. As the software industry grows vigorously, more and more companies have realized the importance of white-box testing, especially in military, aerospace, and other projects where personal and property safety is at stake; there its importance is self-evident. Work of such significance naturally places higher demands on the overall abilities of white-box testers: practitioners must deeply understand the requirements, the system architecture, the code, and the testing techniques in order to find problems.

There is also a review approach in which everyone sits together and walks through the work. When a module is assigned to a development engineer, they explain it to the group, presenting the overall flow and ideas of the module or function, and the group reviews it together so that problems are exposed more fully. This serves several purposes. First, it gives everyone a clear picture of the designer's ideas, so that later calls to or cooperation with that module can be done properly. Second, if a problem is found during the review, those who have not encountered it before become more alert to it, and those who have can recall how they solved or avoided it at the time, so the whole team avoids repeating the error and the time needed to fix problems is reduced. Third, common mistakes can be accumulated: this becomes a vivid textbook that lets new staff learn from their predecessors' experience and gives them a method, or at least a direction, when they meet the same problems.

Two approaches have now been introduced. The first is testing during development, in which development (white-box testing) engineers write test code to exercise the functions or modules they have written; the second is mutual code review, which finds problems, accumulates them, and builds a knowledge base so that other developers do not repeat the same mistakes.

Unit testing is very important because its influence is broad: a single faulty function or parameter can later surface as many visible problems. Moreover, if unit testing is done poorly, it puts great pressure on integration testing and subsequent system testing, and the project's cost and schedule may suffer.

Many tools can be applied to unit testing. The mainstream choice today is the xUnit family (JUnit for Java, NUnit for .NET, DUnit for Delphi, and so on). Besides the xUnit family there are other tools such as CppUnit, COMUnit, and Parasoft's Jtest. Testers should keep accumulating experience in unit testing, keep strengthening and improving their methods, and increase the intensity of unit testing.

To ensure that unit testing goes smoothly, many software engineering ideas need to be brought in: establish CMM-style processes and tracking mechanisms, and classify and track problems. When these run through all the activities of the software process, awareness of product quality naturally rises.

What do unit tests do?

The main tasks of unit testing include:

A. Module interface test;

B. Module local data structure test;

C. Test of every independent execution path in the module;

D. Test of every error-handling path in the module;

E. Module boundary condition test.

(1) Module interface test.

Module interface testing is the foundation of unit testing; it mainly checks whether data passes through the module correctly. Only if data can flow into and out of the module correctly do the other tests make sense.

When testing the correctness of the interface, the following factors should be considered (a short C illustration follows the list):

A. Whether the number of actual input parameters matches the number of formal parameters;

B. Whether the attributes (types) of the actual input parameters match those of the formal parameters;

C. Whether the dimensions (units) of the actual input parameters are consistent with those of the formal parameters;

D. Whether the number of actual parameters passed when calling other modules matches the number of formal parameters of the called module;

E. Whether the attributes of the actual parameters passed when calling other modules match the formal parameter attributes of the called module;

F. Whether the dimensions of the actual parameters passed when calling other modules are consistent with the dimensions of the called module's formal parameters;

G. Whether the number, attributes, and order of parameters used when calling a predefined function are correct;

H. Whether there are references to parameters that are irrelevant to the current entry point;

I. Whether read-only parameters are modified;

J. Whether the definitions of global variables are consistent across modules;

K. Whether constraints are passed as parameters.
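Several of these checks are easy to violate in C. As a small illustration of item G (the number, attributes, and order of parameters passed to a predefined function), the example below is my own and shows a printf() call whose arguments do not match its format string, alongside the correct call.

```c
#include <stdio.h>

int main(void)
{
    int count = 3;
    const char *name = "widget";

    /* Wrong: the order and types of the arguments do not match the format string.
       printf("%s: %d items\n", count, name);    (undefined behavior)              */

    /* Right: the number, attributes, and order of the actual parameters match. */
    printf("%s: %d items\n", name, count);
    return 0;
}
```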

If the module performs external input and output, the following factors should also be considered (see the sketch after this list):

A. Whether file attributes are correct;

B. Whether open and close statements are correct;

C. Whether format specifications match the input/output statements;

D. Whether the buffer size matches the record length;

E. Whether the file is opened before use;

F. Whether end-of-file conditions are handled;

G. Whether input/output errors are handled;

H. Whether there are textual errors in the output information.
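As a rough sketch of how several of these checks look in code (the file name and the summing task are invented for illustration), the routine below verifies that the file opened, matches the format specification to the data it reads, handles end-of-file and read errors, and closes the file afterwards.

```c
#include <stdio.h>

/* Reads integers from a text file and sums them, observing the I/O checks above. */
int sum_integers_in_file(const char *path, long *sum_out)
{
    FILE *fp = fopen(path, "r");            /* B: open statement, E: opened before use */
    long sum = 0;
    int value;

    if (fp == NULL)
        return -1;                          /* G: open failure handled */

    while (fscanf(fp, "%d", &value) == 1)   /* C: format matches the data being read */
        sum += value;

    if (ferror(fp)) {                       /* G: a read error, not just end-of-file */
        fclose(fp);
        return -1;
    }

    fclose(fp);                             /* B/F: file closed after end-of-file */
    *sum_out = sum;
    return 0;
}

int main(void)
{
    long sum;
    if (sum_integers_in_file("numbers.txt", &sum) == 0)
        printf("sum = %ld\n", sum);
    else
        printf("could not read numbers.txt\n");
    return 0;
}
```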

(2) Local data structure test.

Checking the local data structures is meant to ensure that the data temporarily stored in the module remains complete and correct while the program runs. Local data structures are a frequent source of errors, and test cases should be carefully designed to find the following kinds of errors:

A. Inappropriate or incompatible type declarations;

B. Variables with no initial value;

C. Incorrect variable initialization or default values;

D. Incorrect variable names (misspelled or wrongly truncated);

E. Overflow, underflow, or addressing exceptions.

Beyond local data structures, unit testing should, where possible, also determine the impact of global data (such as Fortran COMMON areas) on the module.

(3) Independent execution path test.

Every independent execution path in the module should be tested; the basic task of unit testing is to ensure that every statement in the module is executed at least once. Here test cases are designed to find errors caused by incorrect computations, incorrect comparisons, and improper control flow. Basis path testing and loop testing are the most commonly used and most effective techniques. Common errors include:

A. Misunderstood or incorrectly applied operator precedence;

B. Mixed-type operations;

C. Incorrect initial values of variables;

D. Insufficient precision;

E. Incorrect symbols in expressions.

Comparison judgments and control flow are often closely related, and test cases should also focus on finding the following errors:

A. Comparing objects of different data types;

B. Incorrect use of logical operators or their precedence;

C. Expecting two quantities that are equal in theory to compare as equal in practice, despite the limits of machine number representation (see the floating-point sketch after this list);

D. Errors in comparison operators or in the variables being compared;

E. Loop termination conditions that may never be met;

F. Loop variables modified by mistake.
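Item C deserves a concrete illustration. The sketch below (mine, not the original author's) shows the classic case of two quantities that are equal in theory comparing as unequal in practice, together with the tolerance-based comparison usually used instead.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    double a = 0.1 + 0.2;
    double b = 0.3;

    /* Equal in theory, but not equal in the machine's binary representation. */
    if (a == b)
        printf("equal\n");
    else
        printf("not equal: a - b = %.17g\n", a - b);

    /* A tolerance-based comparison is the usual remedy. */
    if (fabs(a - b) < 1e-9)
        printf("equal within tolerance\n");
    return 0;
}
```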

(4) Error handling path test.

A good design should be able to foresee various error conditions and provide error-handling paths for them in advance. These error-handling paths also need to be tested carefully. The following issues should be checked during testing:

A. The error message reported is difficult to understand;

B. The error reported does not match the error actually encountered;

C. The system intervenes before the program's own error-handling code has a chance to run;

D. Exceptions are handled improperly, leading to inconsistent data and similar problems;

E. The error message does not provide enough information to locate the error.

(5) Boundary condition test.

Boundary condition testing is an important part of unit testing. As is well known, software often fails at its boundaries. Using boundary value analysis to design test cases for the boundary values themselves and for the values just above and below them is very likely to uncover new errors.
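As a small sketch of boundary value analysis, suppose a hypothetical rule that a name must be 1 to 12 characters long (the 12-character limit echoes the name-field example in the random-testing section later). Test cases are placed on each boundary and just beyond it.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical rule under test: a name must be 1 to 12 characters long. */
static int name_length_is_valid(const char *name)
{
    size_t len = strlen(name);
    return len >= 1 && len <= 12;
}

int main(void)
{
    /* Boundary value analysis: values on the boundary and on both sides of it. */
    struct { const char *name; int expected; } cases[] = {
        { "",              0 },   /* below the lower boundary (0 characters)   */
        { "A",             1 },   /* on the lower boundary (1 character)       */
        { "AB",            1 },   /* just above the lower boundary             */
        { "ABCDEFGHIJK",   1 },   /* just below the upper boundary (11)        */
        { "ABCDEFGHIJKL",  1 },   /* on the upper boundary (12 characters)     */
        { "ABCDEFGHIJKLM", 0 },   /* above the upper boundary (13 characters)  */
    };
    size_t i, n = sizeof(cases) / sizeof(cases[0]);

    for (i = 0; i < n; i++) {
        int actual = name_length_is_valid(cases[i].name);
        printf("%-14s -> %s\n", cases[i].name,
               actual == cases[i].expected ? "PASS" : "FAIL");
    }
    return 0;
}
```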

(6) Unit testing method.

It is generally agreed that unit testing should follow immediately after coding: once the source has been written, reviewed, and compiled successfully, unit testing can begin. Test case design should be combined with the review work, and selecting test data based on the design information increases the chance of finding the kinds of errors listed above. When a test case is defined, its expected result should be given at the same time.

Because the module under test is usually not an independent program but sits at some level of the overall software structure, being called by other modules and/or calling other modules, it cannot run on its own. During unit testing, a driver module and/or several stub modules therefore have to be developed for it. Figure 1-3 shows a typical unit test environment.

The driver module simulates the upper-level module that would call the module under test. It is far simpler than the real caller: it receives the test data, passes it to the module under test, and can print an "enter/exit" message after the call returns. A stub module stands in for a module that the module under test itself calls, returning the information the module under test needs.

Driver and stub modules are testing scaffolding, not part of the delivered product, and writing them has a development cost of its own. If they are simple, that cost is low. Unfortunately, simple drivers and stubs are not enough to test some modules; for those modules, unit testing can only be done with the integration testing techniques discussed later.
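As a minimal sketch of the environment in Figure 1-3 (the module names and the discount calculation are invented for illustration), the driver below plays the part of the upper-level caller and the stub stands in for a lower-level module that the unit under test would normally call.

```c
#include <stdio.h>

/* Stub: replaces the real lower-level module (say, a database lookup)
   and returns the canned data the unit under test needs.              */
static int get_discount_rate_stub(int customer_id)
{
    (void)customer_id;
    return 10;                       /* a fixed 10% discount */
}

/* Unit under test: reaches its lower-level module through a function pointer. */
static int final_price(int price, int customer_id, int (*get_rate)(int))
{
    int rate = get_rate(customer_id);
    return price - price * rate / 100;
}

/* Driver: simulates the upper-level caller, feeds test data, prints enter/exit info. */
int main(void)
{
    int price = 200;
    int expected = 180;              /* 200 minus a 10% discount */
    int actual = final_price(price, 42, get_discount_rate_stub);

    printf("enter: price=%d, customer=42\n", price);
    printf("exit : actual=%d, expected=%d -> %s\n",
           actual, expected, actual == expected ? "PASS" : "FAIL");
    return actual == expected ? 0 : 1;
}
```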

2. Integration testing
It often happens that every module works on its own but the modules fail to work together once integrated. The main reason is that the interfaces introduce many new problems when the modules call each other: data may be lost across an interface; one module may have an unintended effect on another; combining several sub-functions may fail to produce the intended main function; individually acceptable errors may accumulate to an unacceptable level; global data structures may be corrupted; and so on. Integration testing is the systematic technique for assembling the software: the modules that have passed unit testing are assembled according to the design, and comprehensive tests are run to find the various errors related to the interfaces.

**Integration testing can be carried out in two different ways: non-incremental integration and incremental integration.**
Developers are often inclined to assemble all the modules at once according to the design and then test the whole, which is called non-incremental integration. This easily leads to chaos: many errors may show up at once, locating and correcting each one is very difficult, and fixing one error may introduce new ones. With new and old errors mixed together, determining causes and locations becomes even harder. The opposite approach is incremental integration: the program is built up piece by piece and the scope of testing grows step by step, so errors are easy to locate and correct and the interfaces can be tested thoroughly.

(1) Two types of incremental integration methods.

Incremental integration methods mainly include top-down integration and bottom-up integration.

Top-down incremental testing integrates and tests step by step from the top of the structure chart downward: the main control module (the main program) is integrated first, and modules are then added following the software's control hierarchy downward.

Bottom-up incremental testing starts from the lowest-level modules and integrates and tests step by step upward along the structure chart.

Integration testing mainly exercises the structure of the software. Because the tests are built around module interfaces, it is mostly black-box testing, supplemented where appropriate by white-box testing.

Integration testing should proceed as follows:

A. Confirm the relationships between the modules that make up the complete system;

B. Review the interaction and communication requirements between the modules and confirm the interfaces between them;

C. Use the above information to produce a set of test cases;

D. Use incremental integration, adding modules to the system one at a time and testing the newly combined system; repeat the process in a logical/functional order until all modules have been integrated into a complete system.

In addition, pay special attention to key modules during the test. The so-called key modules generally have one or more of the following characteristics:

A. They address several requirements;

B. They have high-level control functions;

C. They are complex and error-prone;

D. They have special performance requirements.

Because the main purpose of integration testing is to verify the interfaces and interactions of the modules that make up the system, its data requirements are generally not very demanding in difficulty or content. Integration testing generally does not need real data; testers can hand-craft a representative subset of test data. When creating it, make sure the data adequately exercises the boundary conditions of the system.

Some test data is generated during unit testing; where appropriate, it can be reused during integration testing, which saves time and effort.

(2) Principles followed by integration testing.

Integration testing is hard to get right, and it should be planned as early as the overall (high-level) design. To do it well, the following principles need to be observed:

A. All public interfaces must be tested;

B. The key modules must be fully tested;

C. Integration testing should be organized and carried out by levels;

D. The strategic choice of integration testing should consider the relationship between quality, cost and schedule;

E. Integration testing should start as early as possible and be based on the overall design;

F. In the division of modules and interfaces, testers should communicate fully with developers;

G. When an interface is modified, the related interfaces affected must be retested;

H. The test execution results should be truthfully recorded.

3. System testing
After integration testing has passed, the software has been assembled into a complete package, and system testing can begin. System testing uses black-box techniques throughout, because at this point the implementation details of the component modules no longer need to be considered; the point is to check, against the criteria fixed during requirements analysis, whether the software meets its functional, performance, and other requirements. The data used for system testing must be as accurate and representative as real data, and just as large and complex. One way to meet this requirement is to use real data. When real data cannot be used, a copy of real data should be considered; the quality, accuracy, and volume of the copied data must represent the real data as closely as possible. Even when real data or a copy of it is used, some manually constructed data still needs to be introduced, and when creating it testers must use formal design techniques so that the data genuinely represents both normal and abnormal cases and the system is tested thoroughly.

System testing demands a wide range of knowledge. Test engineers need to understand and master many areas, and to understand both the possible causes of a problem and the symptoms a given cause might produce, so that test cases can be supplemented in time and the risk remaining after release is reduced.

The system test stage is where most problems are found, and it involves a large amount of repeated work; for a large project the scope is wider still. Testing is inherently repetitive: again and again we set up the same environment, test the same module functions, and enter the same test data. This is dull, tedious work and it is easy to grow weary of it, so if automated testing tools can take over part of the routine repetition, the workload drops and efficiency rises.

4. Acceptance testing
After system testing is finished, the software is fully assembled and the interface errors have been eliminated, and the final confirmation (validation) testing of the software can begin. Confirmation testing mainly checks whether the software works as the contract requires, that is, whether it satisfies the software requirements specification.

Software validation is done through a series of black-box tests. Confirmation testing also needs its own test plan and procedures: the plan specifies the kinds of tests and the schedule, and the procedures define special test cases that show whether the software is consistent with the requirements. Both should focus on whether the software provides all the functions and performance stipulated in the contract, whether the documentation is complete and accurate, and whether the human-machine interface and other characteristics (for example portability, compatibility, error recovery, and maintainability) satisfy the customer.

Confirmation testing has two possible outcomes: either the functions and performance meet the software requirements specification and the user can accept the product, or the software fails to meet the specification and the user cannot accept it. Serious errors and deviations discovered at this stage of a project are generally hard to correct within the scheduled time, so it is necessary to negotiate with the user to find a proper solution.

In reality, a software developer cannot fully predict how users will actually use the program. A user may misunderstand a command, supply some strange combination of data, or be confused by the system's prompts. Therefore, whether the software truly meets end users' needs should be confirmed through a series of acceptance tests. Acceptance testing may be informal or planned and systematic; sometimes it lasts for weeks or even months, continually exposing errors and delaying development. A software product may have many users, and it is impossible for every one of them to perform acceptance testing, so a process of alpha and beta testing is often used to find the problems that only end users seem able to find.

Alpha testing means that the software company organizes internal staff to simulate various kinds of user behavior and test the product that is about to be released (the alpha version), trying to find and correct errors. The key to alpha testing is to simulate the real operating environment and real user operation of the product as closely as possible, covering every operation a user might perform. The product that has been through alpha testing and the resulting adjustments is called the beta version. Beta testing then means that the company organizes typical users (for example, by offering the beta for free download on the Internet with a trial period, or by distributing it on CD to potential future customers who are keen to try it, likewise with a trial period that may run from a few days to a few months) to actually use the beta version in their daily work, asking them to report abnormal behavior and suggest improvements, after which the company corrects and improves the version accordingly.

Other tests
1. Regression testing
Both black-box and white-box testing involve regression testing. So what is regression testing? Regression testing means re-running, against a new version of the software, test cases that were used to test the previous version.

At any stage of the software life cycle, any change to the software may introduce problems. Changes may come from errors that were found and fixed, or from new modules added during integration or maintenance. When an error is found, an imperfect defect-tracking system may let the fix slip through; a developer who does not fully understand the error may fix only its symptom rather than the error itself, so the fix fails; and a fix may have side effects that break previously working functionality in unmodified parts of the software. Similarly, when new code is added, it may contain errors of its own and may also affect the existing code. Therefore, whenever the software changes, the existing functionality must be retested to determine whether the change achieved its purpose and whether it damaged anything that used to work, and new test cases must be added for the new or modified functionality. Regression testing exists to verify the correctness of a change and its impact.

Regression testing plays an important part in the software life cycle, and there are countless examples of serious consequences caused by neglecting it; the software defect behind the failed launch of the Ariane 5 rocket was caused by reusing code that had not been adequately regression tested. We often hear customers complain, "This function worked fine before, so why is it broken now?" That is usually because, under pressure from business opportunities, the delivery organization, and other constraints on the development schedule, developers add or change system functionality and, for lack of time, agree with the test department that only the new or changed modules will be tested while unmodified modules are tested little or not at all. But the modules of a software system are more or less interconnected, and a newly added or changed function may very well stop other modules from working correctly. Testing only the new modules and skipping a full functional pass over the system is exactly what makes a previously working function fail.

2. Smoke test
The name "smoke test" can be understood as meaning a test that takes only a short time, about as long as it takes to smoke a cigarette. Others see it as a vivid analogy with the basic check of a newly soldered circuit board: any new board should be powered up and inspected first, and if there is a design defect the board may short-circuit and smoke.

The object of a smoke test is every newly built software version that is headed for formal testing. Its purpose is to confirm that the software's basic functions work so that the subsequent formal testing can proceed. The smoke test is performed by whoever built the version or by other developers.

In a typical software company, many builds are produced internally during development, but only a limited number of them go to formal testing (according to the project plan). As soon as one of these intermediate test builds is produced, the builder needs to run routine checks: can it be installed and uninstalled correctly, are the main functions implemented, is data seriously lost, and so on. If it passes, formal testing can proceed according to the formal test documents; otherwise the version must be rebuilt, packaged, and smoke-tested again until it succeeds. The benefit of smoke testing is that it saves a great deal of time, labor, and material cost by preventing serious problems, such as packaging mistakes, badly missing functionality, or damaged hardware components, from sending a large number of testers into meaningless testing work.

3. Random test
Random testing is testing in which the test data is generated at random. For example, suppose we are testing a system's name field, whose length may be at most 12 characters; a random input might be "ay5%,, i567aj". Obviously nobody has a name like that, and the field may not even allow characters such as %, so the randomly generated input set needs to be refined and the inputs that do not meet the requirements discarded. Moreover, such randomly generated cases may cover only some of the equivalence classes and miss a great many others. This kind of testing is sometimes called monkey testing.
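A minimal sketch (my own, with an invented character pool) of the kind of random input described above: it generates 12-character strings for the hypothetical name field and then applies the refinement step of discarding inputs that contain disallowed characters.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define NAME_LEN 12

/* Generate one raw random string of NAME_LEN printable characters. */
static void random_name(char *out)
{
    const char *pool = "abcdefghijklmnopqrstuvwxyz0123456789%,!$ ";
    size_t pool_len = strlen(pool);
    int i;

    for (i = 0; i < NAME_LEN; i++)
        out[i] = pool[(size_t)rand() % pool_len];
    out[NAME_LEN] = '\0';
}

/* Refinement step from the text: drop inputs containing characters the field rejects. */
static int is_acceptable(const char *name)
{
    return strpbrk(name, "%,!$") == NULL;
}

int main(void)
{
    char name[NAME_LEN + 1];
    int i;

    srand((unsigned)time(NULL));     /* the same seed would be needed to replay a run */
    for (i = 0; i < 10; i++) {
        random_name(name);
        printf("%-12s %s\n", name, is_acceptable(name) ? "use" : "discard");
    }
    return 0;
}
```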

Random testing has some disadvantages:

A. The tests are often unrealistic;
B. No particular level of coverage can be guaranteed;
C. Many of the tests are redundant;
D. The same random-number seed is needed to reproduce a test.

Random testing of this kind is of little use in many situations and is often employed as an "anti-crash" measure, that is, to verify that the system stays healthy under adverse input. In my view it is very useful for Internet-facing systems with an uncertain user population, because besides the users who genuinely want to use the system there are plenty of people willing to attack it or pour in garbage data, and random testing helps guard against a flood of junk data. Many systems pay no attention to filtering junk input at the start, their data volume then grows rapidly, and later they have to write a data-cleaning program to restrict or delete the junk, which adds work that could have been avoided.


Origin blog.csdn.net/weixin_53519100/article/details/112794407