Thinking and Technical Implementation of Automated Interface Testing

1. Thinking

What is automated testing?

Automated testing is the act of converting human-driven testing into computer-driven testing.

How a human tests an interface:

  • Step 1: Understand the business requirements. Generally, the behavior and description of the interface can be derived from the requirements. Behavior: in a given situation, what is done and what happens. Description: what the situation is, what the operation is, and so on.
  • Step 2: Check (review) whether the inputs and returns in the interface document meet the requirement description.
  • Step 3: Construct the precondition data according to the business requirements.
  • Step 4: Request the interface according to the input parameters in the interface document.
  • Step 5: Check whether the return value conforms to the required behavior and description. For interfaces that update the database, also check whether the database changes conform to the requirement description.

How can this human behavior be automated at minimal cost?

Two approaches were considered.

The first idea:

The entire integrated system is treated as the test object, and all interfaces are treated as controls that operate on this system, while the interfaces themselves remain black boxes. We start from the first interface at the top of the business process; passing parameters to it generates the first piece of business data or business behavior, and that data is then used to call interfaces further downstream, which in turn generate downstream business data and behavior. Proceeding in this way, all interfaces of the entire system are eventually covered.

Meaning (why do it this way?)

For an integrated, complex system, this approach not only simulates business behavior more realistically, but also generates relatively complete business data.

Problems and challenges

  • 1: The test granularity is too coarse. One automated case involves calls to multiple interfaces, so one automated case is effectively one business process; the unit of automation is a business process. If too many cases are written, the case set balloons, the focus of testing is lost, and later maintenance becomes expensive. The number of cases therefore cannot be large: only macro-level, main-line business cases can be covered, and branch-line business cases cannot be included.
  • 2: Problems cannot be located precisely. When automated testing finds a problem, it cannot pinpoint which interface caused it. Since each interface is treated as a black box and the case asserts only on each interface's return value, an interface's behavior may be abnormal while its return value is still correct: for example, the interface updates one database field too few, or makes one third-party call too few. The problem then keeps propagating into downstream business and is not discovered until it actually affects the business behavior asserted by the automated test.
  • 3: Scenarios cannot be closed into a loop. Our current system has not yet formed a complete set of interfaces; because some interfaces for performing business behaviors are missing, business operations cannot be completed by calling interfaces alone.

The second idea:

Because of the first challenge, we changed the test object from the entire system to a single interface. Various normal and abnormal test cases are set for this interface, and the business data is not provided by upstream modules; instead it is injected in a single step directly into the data tables the interface depends on. To keep the automated cases stable and repeatable, this one-step data injection must be included in the automated test case itself. At the same time, we white-box the interface: while understanding the business requirements we also read the interface code to learn the details of how the interface processes business data, which lets us enrich the test scenarios at both the business-logic and the implementation-logic level and set some more critical test cases.
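As an illustration of this one-step data injection, here is a minimal sketch that writes a precondition row directly into a table the interface depends on. The connection settings, table, and columns are assumptions, and the real project would likely wrap this in its own DB library:

    import pymysql

    # Minimal sketch of one-step data injection (connection settings, table and
    # columns are assumed): write the precondition row the interface depends on
    # directly into the database instead of producing it via upstream interfaces.
    conn = pymysql.connect(host="127.0.0.1", user="test", password="test", database="demo")
    try:
        with conn.cursor() as cursor:
            cursor.execute(
                "INSERT INTO t_order (order_id, user_id, status) VALUES (%s, %s, %s)",
                ("20210001", "1001", "PAID"),
            )
        conn.commit()  # the injected row is now the precondition of the interface under test
    finally:
        conn.close()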

Meaning (why do it this way?)

  • 1: It effectively complements unit testing. Unit testing presupposes that the individual methods can be assembled into a module or an interface, and focuses on whether each individual method is correct. Single-interface testing borrows this idea: it presupposes that the individual interfaces can be assembled into a system, and focuses on whether each individual interface is correct.
  • 2: High cost effectiveness. Since the test object shrinks from a business system to a single interface and upstream and downstream dependencies are shielded by injecting data, a large number of cases can be produced for one interface at very low cost.

Problems and challenges

  • 1: Large technical investment. Writing cases this way has a certain complexity, so the investment is relatively large. Because of the one-step data injection and the white-box treatment of the interface, the system structure and the third-party middleware must be mastered, which requires more technical and time investment.
  • 2: The return on investment must be weighed. Some businesses are evolving rapidly, the product form is not yet stable, and interfaces are produced and changed frequently. The value generated for the business must be weighed against the cost invested in automated testing.
  • 3: Once the case set grows, the code quality of the cases themselves becomes a challenge, for example the code organization of the test-case project and the architecture of the automated test system.

2. Technical implementation

Before implementation, we surveyed open-source frameworks, technologies, and solutions on the market. Automated testing solutions currently fall into two camps.

The first camp is codeless. The hope is that setting an interface's URL, parameters, request data, and expected return data is enough to form a case. Cases written this way usually depend strongly on the business data stored in the database and on the current data state, so data changes have a great impact on case stability. Moreover, because of the limitations of the automated use-case editor itself, the richness of the test scenarios and the verification of interface return values cannot be comprehensive and effective. Implementing a powerful, programmable automation use-case editor would be hugely expensive.

The second camp implements the automated execution process of a case in code. Borrowing from the UnitTest framework, a test case is divided into a data preparation step, an interface request step, a return-value assertion step, and a data cleanup step. A case written this way is made up of lines of code, and the set of automated test cases for a system is a code project. When the number of cases is small, writing and execution efficiency are not a big problem; as the number of cases grows, quantitative change turns into qualitative change in writing, maintenance, and execution efficiency, and how to organize the code architecture, increase code reuse, and apply design patterns becomes a challenge.

Considering factors such as investment cost, flexibility, writing and execution efficiency, and the diversity of test scenarios, most companies in the industry choose to write automated cases in code. Based on the above considerations, we decided to adopt the second camp's automation solution. The automated use cases we implement cover the entire process of interface testing: data construction, input parameter definition, expected return value definition, interface request, interface return-value assertion, and data cleanup.

We expect automated test cases to have the following features:

Interface test cases are described in the Python scripting language. Each Python script file contains one test class, which is called a test suite. Each test class (test suite) tests only one interface and contains several use cases for that interface.

To make it easy to prepare data and pre-build scenarios, the test class should include the following methods:

  • suite_setUp: executed once before all use cases in the suite.
  • suite_tearDown: executed once after all use cases in the suite.
  • case_setUp: executed before each use case.
  • case_tearDown: executed after each use case.
  • test_xxx: the test case methods themselves; their names begin with test.

Execution mechanism

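Based on the lifecycle methods above, a suite is executed in this order: suite_setUp once before all cases; case_setUp and case_tearDown around every test_xxx method; suite_tearDown once at the end. The following is only a rough sketch of such a runner under those naming conventions, not the framework's actual loader:

    import inspect

    def run_suite(suite_cls):
        """Run one test suite (one class, one interface) in the order described above."""
        suite = suite_cls()
        test_names = sorted(name for name, _ in inspect.getmembers(suite, inspect.ismethod)
                            if name.startswith("test"))

        suite.suite_setUp()                      # once, before all use cases
        try:
            for name in test_names:
                suite.case_setUp()               # before each use case
                try:
                    getattr(suite, name)()       # the use case itself
                    print(name, "PASS")
                except AssertionError as exc:
                    print(name, "FAIL", exc)
                finally:
                    suite.case_tearDown()        # after each use case
        finally:
            suite.suite_tearDown()               # once, after all use cases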

Use-case project structure

A corresponding test project is created for each project under test. The test project is divided into directories by module as needed; each interface under a directory corresponds to one test suite (one .py file), and each suite contains several cases for that interface.
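For example, such a layout might look like this (module and file names are purely illustrative):

    test_project/                      # one test project per project under test
        order/                         # one directory per business module
            test_create_order.py       # one suite (one .py file) per interface
            test_query_order.py
        user/
            test_register.py
            test_login.py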

This tree-shaped case organization avoids dependencies between cases: a single case, a single interface test, or a set of test suites can be executed on its own. We also found that results organized this way are easier to debug and the cases easier to maintain.

Write test cases


Each test case is written from a template like the sketch below. Test cases are written in test methods whose names begin with test, and different approaches are taken for different types of interfaces.
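A minimal sketch of such a template for a hypothetical order-query interface follows; the URL and fields are assumptions, and the standard requests library and plain assert stand in for the in-house HttpRequest and Should libraries described later:

    import requests  # stand-in for the in-house HttpRequest library

    class TestQueryOrder:
        """Test suite for one interface: GET /api/order (URL and fields are assumed)."""

        def suite_setUp(self):
            # runs once before all use cases: prepare suite-level data
            self.base_url = "http://127.0.0.1:8080/api/order"

        def suite_tearDown(self):
            # runs once after all use cases: clean up suite-level data
            pass

        def case_setUp(self):
            # runs before each use case: prepare per-case data
            pass

        def case_tearDown(self):
            # runs after each use case: clean up per-case data
            pass

        def test_query_existing_order(self):
            params = {"order_id": "20210001"}                   # request data
            expected = {"code": 0, "data": {"status": "PAID"}}  # expected return data
            resp = requests.get(self.base_url, params=params)
            assert resp.json() == expected                      # stand-in for the Should assertions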

Steps for writing an automated case for a GET-type interface (a sketch follows the list)

  • 1: Create test data, or reuse existing data, in the database by reading and writing the database directly or by calling an upstream interface.
  • 2: Define a dictionary representing the request data.
  • 3: Define a dictionary representing the expected return data (the return data is generally JSON).
  • 4: Use the HttpRequest class library to send the request to the interface.
  • 5: Use the assertion methods in the Should library to compare the JSON returned by the interface with the expected dictionary.
  • 6: Clean up the test data generated during the test by operating on the database.
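A minimal sketch of these six steps in a single case (written as a standalone function for brevity; in the project it would be a test_xxx method in a suite class). The URL, table, and fields are assumptions, and requests, plain assert, and pymysql stand in for the in-house HttpRequest, Should, and DB libraries:

    import pymysql
    import requests

    def test_get_order_detail():
        conn = pymysql.connect(host="127.0.0.1", user="test", password="test", database="demo")
        try:
            # 1: create the test data the interface will read
            with conn.cursor() as cur:
                cur.execute("INSERT INTO t_order (order_id, user_id, status) VALUES (%s, %s, %s)",
                            ("20210002", "1001", "PAID"))
            conn.commit()

            # 2: request data; 3: expected return data
            params = {"order_id": "20210002"}
            expected = {"code": 0, "data": {"order_id": "20210002", "status": "PAID"}}

            # 4: send the request; 5: assert on the returned JSON
            resp = requests.get("http://127.0.0.1:8080/api/order/detail", params=params)
            assert resp.json() == expected
        finally:
            # 6: clean up the test data generated during the test
            with conn.cursor() as cur:
                cur.execute("DELETE FROM t_order WHERE order_id = %s", ("20210002",))
            conn.commit()
            conn.close()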

Steps for writing an automated case for a POST-type interface (a sketch follows the list)

  • 1: Define a dictionary representing the request data.
  • 2: Define a dictionary representing the expected return data (the return data is generally JSON).
  • 3: Because a POST interface operates on the database, looking only at the return value is not enough to prove the correctness of the interface behavior, so also define a dictionary representing the expected data in the database.
  • 4: Use the HttpRequest class library to send the request to the interface.
  • 5: Use the assertion methods in the Should library to compare the JSON returned by the interface with the expected dictionary.
  • 6: Use the DB class library to query the data that needs to be verified in the database; this is the actual data in the database.
  • 7: Use the assertion methods in the Should library to compare the actual database data with the expected database data.
  • 8: Clean up the test data generated during the test by operating on the database.
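A minimal sketch of these eight steps (again with assumed URL, table, and fields, and with requests, plain assert, and pymysql standing in for the in-house libraries):

    import pymysql
    import requests

    def test_create_order():
        # 1: request data; 2: expected return data; 3: expected data in the database
        payload = {"user_id": "1001", "sku_id": "8001", "count": 1}
        expected_resp = {"code": 0, "message": "success"}
        expected_db = {"user_id": "1001", "sku_id": "8001", "status": "CREATED"}

        # 4: send the request; 5: assert on the returned JSON
        resp = requests.post("http://127.0.0.1:8080/api/order/create", json=payload)
        assert resp.json() == expected_resp

        conn = pymysql.connect(host="127.0.0.1", user="test", password="test", database="demo")
        try:
            # 6: query the actual data the interface wrote to the database
            with conn.cursor(pymysql.cursors.DictCursor) as cur:
                cur.execute("SELECT user_id, sku_id, status FROM t_order "
                            "WHERE user_id = %s ORDER BY id DESC LIMIT 1", ("1001",))
                actual_db = cur.fetchone()
            # 7: compare the actual database data with the expected database data
            assert actual_db == expected_db
            # 8: clean up the test data generated by the request
            with conn.cursor() as cur:
                cur.execute("DELETE FROM t_order WHERE user_id = %s AND sku_id = %s",
                            ("1001", "8001"))
            conn.commit()
        finally:
            conn.close()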

The overall composition of the interface automation test framework


The automation framework is used from two ends: an SDK provided to automation use-case developers, and a visualization system for case management, execution, and result viewing.

The SDK provides developers with development conventions and capabilities such as data definition, data driving, scenario setup, execution strategy, a tool library, and assertions.

Developed cases are loaded into the running container for execution, and the execution results are displayed in the web visualization system.

The web-based management system provides execution control, policy configuration, result display, and problem analysis. The web management end creates jobs in Jenkins by calling the Jenkins API, and uses Jenkins's task orchestration to invoke the running container, which executes the corresponding automated test cases or test-suite sets.
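For instance, a parameterized Jenkins job can be triggered through Jenkins's standard buildWithParameters endpoint. The sketch below only illustrates the idea; the Jenkins address, job name, parameter, and credentials are assumptions, and the real management end may use a Jenkins client library instead:

    import requests

    JENKINS_URL = "http://jenkins.example.com"   # assumed Jenkins address
    AUTH = ("ci-user", "api-token")              # assumed Jenkins user and API token

    def trigger_suite(job_name, suite_path):
        """Ask Jenkins to queue the job that runs one test suite; return the HTTP status code."""
        resp = requests.post(
            "{}/job/{}/buildWithParameters".format(JENKINS_URL, job_name),
            params={"SUITE": suite_path},        # passed through to the running container
            auth=AUTH,
        )
        return resp.status_code                  # 201 means the build was queued

    # trigger_suite("interface-autotest", "order/test_query_order.py")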

When viewing case execution results, the focus is on analyzing the failed cases; the analysis results are marked and recorded through the web management system.

The above are some of my thoughts on, and technical solutions for, interface automation testing from my actual work. If you have different opinions, please leave a comment below for discussion.

Those interested in software testing can also follow my official account, Programmer Erhei, which focuses on sharing testing content: testing fundamentals, interface testing, performance testing, automated testing, TestOps architecture, Jmeter, LoadRunner, Fiddler, MySql, Linux, resume optimization, interview skills, and video material from real test projects.
