Best Engineering Practices for Interface Automation Testing (ApiTestEngine)

Table of contents

Foreword

Background

Core features

Feature breakdown


Foreword

Interface automation testing is an indispensable part of modern software development. A good testing framework, together with sound engineering practices, can greatly improve testing efficiency and quality.

Background

There are already many interface testing tools on the market; common ones include Postman, JMeter, RobotFramework, etc. I believe most testers have used them, or at least seen them listed on most of the resumes they come across. Besides these mature tools, many testers (and developers) with some technical ability have built interface test frameworks of their own, of rather uneven quality.

However, when I planned to implement interface automation testing in my project team, I searched around and did not find a particularly satisfying tool or framework; there was always some gap between what existed and the ideal.

So what should the ideal interface automation testing framework look like?

A testing tool (or framework) discussed apart from real business scenarios is meaningless. So let's first look at some common scenarios in our daily work.

  • When a tester or developer locates a problem, they want to call an interface and check whether it responds normally;
  • When testers manually test a function point, they need an order number, and that order number can be generated by calling multiple interfaces in sequence to walk through the ordering flow;
  • Before starting functional testing of a release, testers want to first check that all interfaces of the system work normally, and begin manual testing only once that is assured;
  • Developers need to check whether new code affects the system's existing interfaces before committing it;
  • The project team needs a scheduled daily check of all interfaces in the test environment, to ensure the day's commits have not broken the main branch;
  • The project team needs a scheduled check (say, every 30 minutes) of all interfaces in the production environment, so that service unavailability in production is discovered in time;
  • The project team needs to run performance tests on core business scenarios from time to time, and hopes to reduce the manpower required by directly reusing the interface test artifacts.

As you can see, the scenarios listed above should be familiar to everyone; these are things we often need to do in daily work. Without a suitable tool, however, efficiency is often very low, or some important work simply never happens, such as interface regression testing and online interface monitoring.

Let me start with the simplest case: manually invoking an interface for testing. Some may say Postman can meet this need. Indeed, as a general-purpose interface testing tool, Postman can construct interface requests and inspect interface responses; at that level it meets the functional requirements of interface testing. But within a specific project, using Postman is not that efficient.

Let me give you the most common example.

An interface has many request parameters, and the request requires MD5 signature verification: the Headers must carry a sign parameter whose value is the MD5 of the concatenation of the URL, Method, and Body.

Recall what we have to do to test this interface. First, fill in all the interface parameters by hand according to the interface documentation; then, following the signature rule, concatenate all the parameter values into a string, compute its MD5 in some separate MD5 tool, and fill the resulting value into the sign parameter; finally, send the request, view the response, and manually check whether it is correct. Worst of all, every time we need to call this interface, all of the above has to be done again. The practical result is that, faced with an interface with many parameters or signature verification, testers may simply choose to skip testing it.
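
To make this concrete, the signing step that testers repeat by hand boils down to a few lines of code. This is a minimal sketch assuming the rule described above (concatenate URL, Method, and Body, then MD5); a real project would follow its own signing spec, and the endpoint shown is hypothetical.

import hashlib

def compute_sign(url, method, body):
    # concatenate URL, Method and Body, then take the MD5 hex digest
    raw = f"{url}{method}{body}"
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

# the digest is carried in the 'sign' field of the request headers
headers = {"sign": compute_sign("http://api.example.com/order", "POST", '{"goods_id": 1}')}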

In addition to calling single interfaces, we often need to call multiple interfaces in combination. For example, when testing a logistics system, testers often need an order number generated under a specific combination of conditions. However, since many business rules hang off the order number, it is hard to create one directly in the database. So the common practice among business testers is: every time an order number is needed, simulate the ordering flow by calling the corresponding interfaces in sequence to generate it. You can imagine: if manually calling a single interface is already that troublesome, how time-consuming and laborious it is to manually call several interfaces every time.

Now for automated interface testing. Most interface testing frameworks support this; the common approach is to write interface test cases in code, or to adopt a data-driven approach, and then, with command-line (CLI) invocation supported, combine the framework with Jenkins or crontab to implement continuous integration or scheduled interface monitoring.
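
For instance, the 30-minute production monitoring scenario from the background section could be wired up with a single crontab entry once a CLI exists; the command below is purely hypothetical, standing in for whatever entry point the framework exposes.

# run the full interface test suite every 30 minutes (hypothetical command)
*/30 * * * * cd /path/to/project && python run_tests.py api_tests.yml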

The idea is sound; the problem lies in promotion and implementation in real projects. The most reliable way to maintain automated test cases is to write them directly in code, which is robust and flexible; that is also the conclusion of many veterans who have learned painful lessons, and there are even some anti-test-framework opinions circulating online. The problem is that not every tester in a project can write code, nor can they all learn it on demand under a mandate. In that situation it is hard to push interface automation testing forward within a project. Even if I can help write part of the cases, many interface test cases must be combined with business logic scenarios, and with so many projects to support I really cannot invest that much time. For this reason, many test frameworks advocate a data-driven approach that separates business test cases from execution code. However, because business scenarios are often complex, most frameworks' test case template engines are not expressive enough to describe the test scenarios concisely, so they never gain wide adoption either.

There are still many problems that can be listed, and these are indeed real pain points in the daily testing work of Internet companies.

Against this background, I had the idea of developing ApiTestEngine.

As for the positioning of ApiTestEngine: rather than a tool or framework, it is better seen as a set of best engineering practices for interface automation testing, with conciseness, elegance, and practicality as its core characteristics.

Of course, every engineer's notion of best engineering practices differs to some degree, and I hope we can all communicate more and improve together through the collision of ideas.

Core features

The core features of ApiTestEngine are summarized as follows:

  • Supports multiple API request methods, including GET/POST/HEAD/PUT/DELETE, etc.
  • Test cases are separated from code; test case maintenance is simple and elegant, and YAML is supported
  • The test case description format is expressive, describing input parameters and expected outputs concisely
  • Interface test cases are reusable, making it easy to build complex test scenarios
  • Test execution is simple and flexible, supporting single-interface tests, batch interface tests, and scheduled test runs
  • Test result reports are concise and clear, with detailed logs including interface request latency, request and response data, etc.
  • Serves multiple roles at once: interface management, interface automation testing, and interface performance testing (combined with Locust)
  • Scalable, easy to extend into a Web platform

Feature breakdown

Supports multiple API request methods, including GET/POST/HEAD/PUT/DELETE, etc.

Personal preference: Python as the programming language. The best way to make HTTP requests in Python is with the Requests library, which is simple, elegant, and powerful.

Test cases are separated from code; test case maintenance is simple and elegant, and YAML is supported

To separate test cases from code, the best approach is to build a test case loader engine and a test case execution engine. This is also the most elegant implementation I arrived at while building the AppiumBooster framework. Of course, a standard data structure specification for test cases must be defined in advance, to serve as the bridge between the loader engine and the execution engine.

Note that the test case data structure must contain all the information elements of an interface test case, including the request information (URL, Headers, Method, and other parameters) and the expected response (StatusCode, ResponseHeaders, ResponseContent).

The advantage of this is that no matter what form the test cases are described in (YAML, JSON, CSV, Excel, XML, etc.), and regardless of whether they are organized with business layering in mind, as long as a corresponding converter is implemented in the loader engine, business test cases can be turned into the standard test case data structure. The execution engine then need not care about the specific description format of the test cases; it only reads the test case elements from the standard structure, namely the request information and the expected response, constructs and sends the HTTP request, and compares the actual response against the expected result.
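
As an illustration, the standard test case structure might take a shape like the following; the field names here are assumptions made for this sketch, not the framework's actual schema.

# an illustrative standard test case structure (field names are assumptions)
testcase = {
    "name": "create order",
    "request": {
        "url": "http://api.example.com/order",
        "method": "POST",
        "headers": {"Content-Type": "application/json"},
        "data": {"goods_id": 10},
    },
    "response": {  # expected results
        "status_code": 200,
        "headers": {"Content-Type": "application/json"},
        "body": {"success": True},
    },
}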

As for why YAML support is called out explicitly: I personally consider it the best format for describing test cases, concise without being verbose, yet able to carry very rich information. Of course, this is just personal preference; if you prefer another format, you only need to extend the loader with the corresponding converter.
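
For example, a YAML converter in the loader engine could be as thin as the sketch below (using the PyYAML package; the case fields mirror the illustrative structure above).

import yaml  # PyYAML

yaml_cases = """
- name: get user info
  request:
    url: http://api.example.com/user/1
    method: GET
  response:
    status_code: 200
"""

# the loader engine parses the file content into standard test case dicts
testcases = yaml.safe_load(yaml_cases)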

The test case description format is expressive, describing input parameters and expected outputs concisely

After test cases are separated from the framework code, the task of describing business logic test scenarios falls on the test cases themselves. For example, if we choose YAML to describe test cases, then we should be able to describe all kinds of complex business scenarios in YAML.

So how do you understand this "expressiveness"?

Simple parameter value passing should be easy to understand. Let's give a few relatively complex but common examples.

  • An interface request parameter must contain the current timestamp;
  • An interface request parameter must contain a 16-character random string;
  • Interface request parameters include signature verification: multiple request parameters must be concatenated and the md5 value of the result computed;
  • The interface response headers (Headers) must contain an X-ATE-V header field, and we need to check whether its value is greater than 100;
  • The interface response contains a string, and we need to check whether it includes a 10-digit order number;
  • The interface response is a multi-layer nested JSON structure, and we need to check whether an element value at some nested level is True.

As you can see, in the examples above the parameter values cannot be written directly into the test cases. If we wrote test cases as Python scripts, this would be easy to solve with Python functions. But now that test cases are separated from the framework code, we cannot execute Python functions inside YAML. What can we do?

The answer is to define function escape characters and implement custom templates.

This approach is actually not hard to understand; it is a common technique in template languages. For example, if we define ${} as the escape syntax, then the content inside the braces is no longer treated as an ordinary string: it is escaped, either resolved as a variable's value or executed as a function call to obtain the actual result. Of course, this requires corresponding support in the test case execution engine. The simplest way is to extract the ${} expressions from the test case and compute their values with eval. To achieve more complex functionality, we can also encapsulate functions commonly used in interface testing into a set of keywords and then use those keywords when writing test cases.
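
A minimal sketch of such an adapter in the execution engine might look like this; the two keyword functions and the restricted eval are illustrative choices, not the framework's actual implementation.

import hashlib
import re
import time

# illustrative keyword functions made available to test cases
functions = {
    "timestamp": lambda: int(time.time()),
    "md5": lambda s: hashlib.md5(s.encode("utf-8")).hexdigest(),
}

def render(value):
    # replace each ${expression} with its evaluated result, exposing only
    # the whitelisted keyword functions to eval
    def evaluate(match):
        return str(eval(match.group(1), {"__builtins__": {}}, functions))
    return re.sub(r"\$\{(.*?)\}", evaluate, value)

print(render("ts=${timestamp()}&sign=${md5('abc')}"))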

Interface test cases are reusable, making it easy to build complex test scenarios

In many cases, a system's interfaces are tied to business logic. For example, to call the login interface you must first call the interface that issues a verification code and then carry that code in the login request; and a data query request must include, among its parameters, the session value returned by the login interface. If we described every interface to be requested separately for each business flow under test, there would be a great deal of duplicated description, and test case maintenance would become very bloated.


A better way is to encapsulate each interface call as an individual test case, and then, when describing a business test scenario, pick the needed interface cases and assemble them in order into a business scenario test case, like building blocks. If you have read the earlier introduction to AppiumBooster, you will recognize the idea: common functionality can be composed into module-level case suites, which can in turn be assembled at a higher level into more complex test scenarios.

However, there is a critical problem to solve here: how to pass parameters between interface test cases. The implementation is actually not complicated: we can specify a variable name for a value in an interface's response, extract the corresponding key's value from the response and assign it to that variable, and then reference it in other interfaces' request parameters as ${variable_name}.
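
In execution-engine terms, the chaining amounts to something like the sketch below; the endpoints and the extraction rule are hypothetical, mirroring the verification-code example above.

import requests

variables = {}

# step 1: call the verification-code interface and bind the result to a variable
resp = requests.get("http://api.example.com/get_code")  # hypothetical endpoint
variables["code"] = resp.json()["code"]

# step 2: a later test case references the value, as ${code} would in YAML
resp = requests.post("http://api.example.com/login",
                     data={"code": variables["code"]})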

Test execution is simple and flexible, supporting single-interface tests, batch interface tests, and scheduled test runs

From the scenarios listed in the background section, it is clear that interface testing tools are needed in many situations. Besides regularly and automatically testing and checking all interfaces, the tool must often also assist manual testing, that is, a half-manual, half-automated mode.

When business testers use testing tools, the biggest problem is that besides attending to the business functionality itself, they must spend a lot of time handling technical implementation details such as signature verification, and the latter, repeated over and over, often consumes even more time.

This problem is indeed unavoidable. After all, the interfaces of different systems vary widely, and it is impossible for a tool to automatically handle all situations. But we can try to separate the implementation of the technical details of the interface from the business parameters, so that business testers only need to focus on the business parameters.

Specifically, we can configure a template for each interface, encapsulating the parameters and technical details unrelated to business functionality, such as signature verification, timestamps, and random values, while the business-related parameters are configured in a pass-in (parameterized) mode.

The benefit is that only the parameters and technical details unrelated to business functionality need to be encapsulated and configured, and that work can be done by developers or test developers, easing the burden on business testers; once an interface template is configured, testers only need to attend to the business-related parameters and, combined with the business test cases, can easily configure and generate multiple interface test cases on top of the interface template.
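
As a sketch, such an interface template might be encapsulated as a function like the one below; the parameter names and the signing rule are assumptions made for illustration.

import hashlib
import time
import uuid

def build_order_params(goods_id, user_id):
    # business testers supply only the two business parameters; timestamp,
    # random string and signature are technical details filled in automatically
    params = {
        "goods_id": goods_id,
        "user_id": user_id,
        "timestamp": int(time.time()),
        "nonce": uuid.uuid4().hex[:16],  # 16-character random string
    }
    raw = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    params["sign"] = hashlib.md5(raw.encode("utf-8")).hexdigest()
    return params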

Test result reports are concise and clear, with detailed logs including interface request latency, request and response data, etc.

Test result reports should follow the principle of being concise but not simplistic. Concise, because most of the time we only need to determine as quickly as possible whether all interfaces are running normally. Not simplistic, because when some test cases fail, we want to see as much detail as possible from the interface test, including test time, request parameters, response content, and interface response time.

When I read the Locust source code some time ago, I was deeply impressed by the way it encapsulates its HTTP client: it inherits from the requests.Session class, overrides the request method in the HttpSession subclass, and wraps requests.Session.request inside that method.

request_meta = {}

# set up pre_request hook for attaching meta data to the request object
request_meta["method"] = method
request_meta["start_time"] = time.time()

response = self._send_request_safe_mode(method, url, **kwargs)

# record the consumed time
request_meta["response_time"] = int((time.time() - request_meta["start_time"]) * 1000)

request_meta["content_size"] = int(response.headers.get("content-length") or 0)

And since each virtual user (client) of HttpLocust is an HttpSession instance, every HTTP request it executes gets the full power of the Requests library, while raw performance data such as request response time and response body size are preserved. The implementation is truly elegant.

Inspired by this, the same approach can be used to save detailed request and response data for each interface. For example, to save the response Headers and Body, only the following two lines need to be added:

request_meta["response_headers"] = response.headers
request_meta["response_content"] = response.content

Serves multiple roles at once: interface management, interface automation testing, and interface performance testing (combined with Locust)

Strictly speaking, requirements such as interface performance testing should not fall within the responsibilities of an interface automation testing framework. But in real projects, the requirements are exactly that: both interface automation testing and interface performance testing, without maintaining two separate code bases.

Thanks to the Locust performance testing framework, interface automation scripts and performance test scripts really can be merged into one.

As mentioned earlier, each virtual user (client) of HttpLocust is an HttpSession instance, and HttpSession in turn inherits from requests.Session; therefore each HttpLocust virtual user (client) is also an instance of the requests.Session class.

Similarly, when we use the Requests library for interface testing, the request client is in fact also an instance of the requests.Session class; we just usually use the simplified requests shorthand.

The following two usages are equivalent.

resp = requests.get('http://debugtalk.com')

# equivalent to
client = requests.Session()
resp = client.get('http://debugtalk.com')

Given this relationship, switching between interface automation testing and performance testing becomes easy. Inside the interface testing framework, the HTTP client can be initialized as follows.

def __init__(self, origin, kwargs, http_client_session=None):
    self.http_client_session = http_client_session or requests.Session()

By default, http_client_session is a requests.Session instance and is used for interface testing; when performance testing is needed, we only need to pass in a Locust HttpSession instance instead.
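
Put together, a Locust script could then drive the same test cases through this injection point. ApiRunner below is a hypothetical stand-in for the framework's executor, and the script uses the classic HttpLocust/TaskSet API of Locust's early versions.

import requests
from locust import HttpLocust, TaskSet, task

class ApiRunner:
    # hypothetical stand-in for the framework's executor, mirroring the
    # __init__ shown above
    def __init__(self, http_client_session=None):
        self.http_client_session = http_client_session or requests.Session()

    def run(self):
        # every test case request goes through self.http_client_session, so
        # Locust records response time and size automatically
        self.http_client_session.get("/")

class ApiTasks(TaskSet):
    @task
    def run_testcases(self):
        # self.client is a Locust HttpSession instance
        ApiRunner(http_client_session=self.client).run()

class ApiUser(HttpLocust):
    task_set = ApiTasks
    host = "http://debugtalk.com"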

Scalable, easy to extend into a Web platform

When promoting the testing platform to a broader user group (such as product managers and operations staff), making the framework web-based becomes inevitable. Viewing the run status of interface test cases, configuring interface modules, and managing interface test cases are indeed much more convenient on a Web platform.

However, for an interface testing framework, a Web platform is only icing on the cake. In the early stage, we can prioritize the command-line (CLI) invocation mode and standardize the data storage structure, and then later combine a Web framework (such as Flask) to add Web platform features.
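
As a rough sketch of that direction, the first Web layer could be as small as a single Flask endpoint that triggers a run and returns the summary; the route and the summary fields here are hypothetical placeholders.

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/testcases/run", methods=["POST"])
def run_testcases():
    # hypothetical endpoint: invoke the test runner here and return its
    # summary; placeholder data is shown for the sketch
    summary = {"success": True, "total": 10, "failures": 0}
    return jsonify(summary)

if __name__ == "__main__":
    app.run()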
