Implementing automated interface testing

Recently, I took on an interface automation testing project and did some research. In the end, I found that the pytest testing framework, combined with executing test cases in a data-driven manner, works very well for this kind of automation. The biggest advantage of this approach is that later use case maintenance has little impact on the existing test scripts. Beyond that, pytest has the following other advantages:

  • It lets users write more compact test suites;
  • There is little boilerplate code involved, so tests are easy to write and understand;
  • Fixture functions are often used to supply parameters to test functions and return different values. In pytest, one fixture can be built on top of another, and multiple fixtures can be combined to cover all parameter combinations without rewriting test cases;
  • It is highly extensible, with many practical plug-ins. For example, pytest-xdist runs tests in parallel without requiring a separate test runner; pytest-rerunfailures reruns failed tests, with a configurable number of reruns and delay between them; allure and pytest-html generate test reports.

Compared with other testing frameworks, such as Robot Framework (creating customized HTML reports is relatively cumbersome; at best it generates short reports in xUnit format) and UnitTest/PyUnit (which requires a lot of boilerplate code), pytest is better suited as the framework for this automated test.
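For example, assuming the pytest-xdist, pytest-rerunfailures, and pytest-html plug-ins are installed, these options can be combined in a single run (the values and report path are illustrative):

pytest -n 4 --reruns 2 --reruns-delay 1 --html=report/report.html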

The following is a detailed introduction to the implementation process of this automated test.

1 Preparatory work
1.1 Interface path table

According to the interface document, record each interface's address, path, and request method in an excel table. The columns are: key (interface name), type (request method), and value (interface path). The first row, baseurl, holds the base path and leaves type empty. It is recommended to keep the interface name consistent with the name in the interface document to make cross-checking easier. If the same interface supports multiple request methods, add a new row for each one, with type set to the corresponding method. Recording interface paths and request methods this way makes subsequent data extraction and processing easier.
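After this table is read and processed (see section 3.3), it ends up as a dictionary keyed by interface name. A minimal sketch of that structure, with illustrative names and URLs:

# Illustrative only: keys and URLs are examples, not the real interface document
apis = {
    'baseurl': {'type': '', 'url': 'http://192.168.1.100:8080'},
    'login':   {'type': 'POST', 'url': '/api/login'},
    'logout':  {'type': 'GET',  'url': '/api/logout'},
}
# Full request URL for a step: apis['baseurl']['url'] + apis['login']['url']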

1.2 Test case table

The test case table records 9 columns of data:

  • Test module: divide the interfaces under test into modules by function; this helps with problem location and data classification;
  • Use case number: records the number of each use case. It is recommended to name it after the module, e.g. for the login module: login_001, login_002;
  • Use case title: records what the test covers;
  • Precondition: when the interface under test depends on data from another interface, fill in the required interface data here, e.g. login_001:token (login_001 is a use case number, token is the field taken from that use case's response). The referenced use case must run before this one;
  • Test steps: the order in which the module's use cases are executed;
  • Request interface: fill in the key defined in the interface path table. For example, to request the login interface, fill in the login key from the table above;
  • Request header: when the request header needs special parameters, for example an Authorization field whose value comes from the token returned by the login interface, fill it in like this: Content-Type=application/json,Authorization=<token>;
  • Request data: the request data of the test case, recorded in key=value format. If return data from another interface is needed, add it to the precondition and then reference it in the request data, e.g. username=admin,password=zxcvbnm,token=<token>;
  • Assertion: assertions made against the data returned by the interface, mainly checking whether a certain field in the response is correct, also filled in key=value format.
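After the tables are read and converted (sections 3.1 and 3.2 below), each row of this table becomes a dictionary whose keys come from the case_header mapping in config.py. A sketch of one such row, with illustrative values:

# Illustrative values only; keys follow the case_header mapping shown in section 3.2
{
    'module': '登录',
    'id': 'login_001',
    'title': 'log in with the correct username and password',
    'condition': '',
    'step': 1,
    'api': 'login',
    'headers': 'Content-Type=application/json',
    'data': 'username=admin,password=zxcvbnm',
    'assert': 'msg=success',
}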

2 Directory structure and operation process
2.1 File directory structure

  • testcase folder: stores the test case table;
  • api folder: stores the interface path table;
  • common folder: stores shared data processing scripts, such as data.py and utlis.py (they mainly process the data from the tables, explained in detail later), and config.py (basic configuration for the test suite);
  • report folder: stores the test reports generated after a run;
  • conftest.py: a file that pytest shares globally; common fixtures and methods can be placed here;
  • pytest.ini: the pytest configuration file;
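Put together, the project layout roughly looks like this (file names follow the list above; the exact arrangement is assumed):

py_test/
├── api/              # interface path table
├── testcase/         # test case table
├── common/
│   ├── config.py     # case_header and other basic configuration
│   ├── data.py       # table data processing
│   └── utlis.py      # helper functions
├── report/           # generated test reports
├── conftest.py       # shared pytest fixtures (http requests)
├── pytest.ini        # pytest configuration
└── test_login.py     # test file for the login module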
2.2 Test running process

After the automated test is triggered, test data extraction and processing are done outside the pytest framework. Once the data has been processed into a test suite, it is handed to pytest for execution module by module, covering the test modules, http requests, and assertions. After all modules have run, the results are written to the generated test report report.html. When the test finishes, the test or development team can be notified of the results by email or a DingTalk robot.

3 Implementation process of test cases

The following briefly introduces the role of some scripts in the test case implementation process.

3.1 Read excel table

Use the xlrd library to read the contents of the excel table (there are many python libraries that can work with excel data, such as openpyxl and xlsxwriter). Loop through each row, save it as a list, and append it to self.list_data.

# -*- coding: utf-8 -*-
import xlrd


class Excel(object):
    def __init__(self, file_name):
        # Open the excel workbook and collect its sheet names
        self.wb = xlrd.open_workbook(file_name)
        self.sh = self.wb.sheet_names()
        self.list_data = []

    def read(self):
        # Walk every sheet and append each row to list_data
        for sheet_name in self.sh:
            sheet = self.wb.sheet_by_name(sheet_name)
            rows = sheet.nrows
            for i in range(0, rows):
                rowvalues = sheet.row_values(i)
                self.list_data.append(rowvalues)
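A quick usage sketch (the workbook path is illustrative):

# Illustrative path; the real workbook lives in the testcase folder
excel = Excel('testcase/testcase.xls')
excel.read()
print(excel.list_data[0])   # the header row, e.g. ['测试模块', '用例编号', ...]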

3.2 Format data into test suites

The list produced in the first step is not yet in the format we need and cannot be used directly, so the data is formatted here. The case_header configuration in config.py is used to replace the Chinese column titles with English names, which become the dictionary keys. Then each data row (skipping the header row) is converted into a dictionary, and the result is saved and returned as list_dict_data in the form [{'key1': 'value1', 'key2': 'value2'}, {}, {}, ...].

# case_header lives in config.py (shown below); the import path is assumed
from common.config import case_header


def data_to_dict(data):
    """
    :param data: list of rows read from the excel table; the first row is the header
    :return: list of dictionaries, one per test case row
    """
    head = []
    list_dict_data = []
    # Translate the Chinese header row into English keys via case_header
    for d in data[0]:
        d = case_header.get(d, d)
        head.append(d)
    # Convert every data row (skipping the header) into a dict
    for b in data[1:]:
        dict_data = {}
        for i in range(len(head)):
            if isinstance(b[i], str):
                dict_data[head[i]] = b[i].strip()
            else:
                dict_data[head[i]] = b[i]
        list_dict_data.append(dict_data)
    return list_dict_data

# common/config.py: maps the Chinese column titles to English dictionary keys
case_header = {
    '测试模块': 'module',
    '用例编号': 'id',
    '用例标题': 'title',
    '前置条件': 'condition',
    '测试步骤': 'step',
    '请求接口': 'api',
    '请求方式': 'method',
    '请求头部': 'headers',
    '请求数据': 'data',
    '断言': 'assert',
    '步骤结果': 'score'
}
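Putting sections 3.1 and 3.2 together, the conversion can be exercised like this (the workbook path is illustrative):

# Illustrative: read the use case workbook and convert its rows to dictionaries
excel = Excel('testcase/testcase.xls')
excel.read()
cases = data_to_dict(excel.list_data)
print(cases[0]['id'], cases[0]['api'])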
3.3 Generate executable test suites

The previous step turned the data into [{'key1': 'value1', 'key2': 'value2'}, {}, {}, ...], but in this format the use cases are not grouped by module: each element of the list is a separate use case, which is not convenient for execution. The data returned in the previous step is therefore processed again. Because the test cases and the interface paths are stored in two separate excel tables, the data from the two tables must be merged. First, the interface path table is read and processed into the format {'key': {'type': 'value', 'url': 'value'}}; then the test cases are saved into each module's steps list in the order of their test steps. Since the full code is quite long, only the core part is shown below.

# apis: the processed interface path table, e.g. {'login': {'type': 'POST', 'url': '/api/login'}, ...}
# data: the list of use case dictionaries produced by data_to_dict()
testsuite = []
testcase = {}
for d in data:
    # Convert the key=value strings for request data, assertions and headers into dicts
    for key in ('data', 'assert', 'headers'):
        if d[key].strip():
            test_data = dict()
            for i in d[key].split(','):
                i = i.split('=', 1)
                test_data[i[0].strip()] = i[1].strip()
            d[key] = test_data
    # A non-empty module column starts a new test case group
    if d['module'].strip():
        if testcase:
            testsuite.append(testcase)
            testcase = {}
        testcase['module'] = d['module']
        testcase['steps'] = []
    no = str(d['step']).strip()
    if no:
        step = {'no': str(int(d['step']))}
        for key in ('id', 'title', 'condition', 'api', 'headers', 'data', 'assert'):
            if key == 'api':
                # Combine baseurl with the interface path and keep the request method
                step[key] = {'type': apis[d.get(key, '')]['type'],
                             'url': apis['baseurl']['url'] + apis[d.get(key, '')]['url']}
            else:
                step[key] = d.get(key, '')
        testcase['steps'].append(step)
# Append the last group after the loop ends
if testcase:
    testsuite.append(testcase)
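The result is a list with one entry per module; each entry holds the module name and its ordered steps. A sketch of the structure, with illustrative values:

# Illustrative structure of the generated test suite
testsuite = [
    {
        'module': '登录',
        'steps': [
            {'no': '1',
             'id': 'login_001',
             'title': 'log in with the correct username and password',
             'condition': '',
             'api': {'type': 'POST', 'url': 'http://192.168.1.100:8080/api/login'},
             'headers': {'Content-Type': 'application/json'},
             'data': {'username': 'admin', 'password': 'zxcvbnm'},
             'assert': {'msg': 'success'}},
        ],
    },
]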

3.4 pytest executes the test suite

The http request is encapsulated in conftest.py; using pytest's data-driven features, the test file test_login.py can call it directly without an import. Only the code for initiating a POST request is shown here; other request types are similar. The fixture receives its data through the built-in request parameter (request.param), and pytest.mark.parametrize is then used for parameterization and data-driven flexibility: test data and pre-dependent methods are prepared inside the fixture, the test method is parameterized, and the test method consumes the prepared data. In pytest.mark.parametrize('post_request', data, indirect=True), indirect=True means post_request is executed as a fixture rather than passed as a plain parameter, and data is the previously generated test case set for the module, which contains all parameters required to initiate the http requests.

# conftest.py
import json
import logging

import pytest
import requests

logger = logging.getLogger(__name__)  # the project's own logger setup is not shown here


@pytest.fixture()
def post_request(request):
    # request.param is one step dict from the generated test suite
    data = request.param['data']
    header = request.param['headers']
    url = request.param['api']['url']
    no = request.param['no']
    logger.info(f'request: {data}')
    response = requests.request('POST', url=url, headers=header, data=json.dumps(data))
    logger.info(f'response: {response.json()}')
    return response, no
# -*- coding: UTF-8 -*-
# test_login.py
import pytest
import allure

from common.data import module_data


class TestCase(object):

    @allure.feature('登录')
    @pytest.mark.parametrize('post_request', module_data(module='登录'), indirect=True)
    def test_login(self, post_request):
        response = post_request[0].json()
        no = int(post_request[1])
        # Compare the msg field of the response with the expected value from the use case table
        assert response['msg'] == module_data(module='登录')[no - 1]['assert']['msg']
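module_data is imported from common/data.py, but its body is not shown in the article. Given the data flow described above, it would essentially filter the generated test suite by module name and return that module's steps. A minimal sketch, assuming the helpers from sections 3.1-3.3 live in common/data.py:

# Sketch only: return the list of step dicts for the given module name
def module_data(module):
    for testcase in testsuite:          # testsuite built as in section 3.3
        if testcase['module'] == module:
            return testcase['steps']
    return []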
3.5 Run test cases

File, class, and method matching rules, as well as common command-line parameters, can be configured in pytest.ini. With that in place, you only need to enter D:\py_test>pytest on the command line to start the automated test. Without a pytest.ini configuration you can also run: D:\py_test>pytest -s test_login.py --html=report/report.html, where the -s parameter prints the output of all test cases, and --html=<report path> (available after installing the pytest-html plug-in) specifies where to save the test report.

The pytest.ini file is configured as follows:

[pytest]
# print output and generate/save the report
addopts = -s --html=report/report.html
# file name pattern to collect
python_files = test_*.py
# class name pattern to collect
python_classes = Test*
# function name pattern to collect
python_functions = test_*
3.6 Results display

You can execute the test cases in the IDE or from the command line. After the run, an HTML test report is generated; open it in a browser to view the results of the automated test. pytest not only supports the pytest-html plug-in but can also use allure to generate more polished test reports. The HTML reports generated with pytest-html and with allure are shown below. The pytest-html report records relatively detailed content, including use case run logs, the number of passed/failed/skipped cases, run time, and so on. The allure report is more readable and shows the test results at a glance.
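If allure is used, the usual flow (assuming the allure-pytest plug-in and the allure command-line tool are installed) is to write raw results during the run and then render them, for example:

pytest --alluredir=report/allure-results
allure serve report/allure-results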

Test report generated by pytest-html:

Test report generated by allure:

4 Summary

After completing the whole project, I have a deeper understanding of the pytest testing framework. pytest can also be combined with Jenkins to add the automated tests to continuous integration, with scheduled or condition-triggered builds, which effectively improves testing efficiency and saves labor costs. Of course, this is not the only possible implementation, and the current approach still has many shortcomings that will continue to be improved. If you have good suggestions or methods, feel free to share and discuss them.


