Interface automation testing in practice

I recently took on an interface automation testing project and did some research for it. In the end, I found that the pytest testing framework, executing test cases in a data-driven manner, supports automated testing very well. The biggest advantage of this approach is that subsequent test-case maintenance has little impact on the existing test scripts. pytest also offers the following advantages:

  1. It lets users write compact test suites;
  2. Little boilerplate code is involved, so tests are easy to write and understand;
  3. Fixtures are commonly used to feed a parameter into a test function and return different values. In pytest, one fixture can build on another in a modular way, and multiple fixtures can cover all combinations of parameters without rewriting test cases;
  4. Strong extensibility through many useful plugins. For example: pytest-xdist runs tests in parallel without any other test runner; pytest-rerunfailures reruns failed tests, with a configurable number of reruns and delay between them; allure/pytest-html generate test reports;

Compared with other test frameworks, such as Robot Framework (cumbersome for creating custom HTML reports; at best it generates short xUnit-format reports) and unittest/PyUnit (which requires a lot of boilerplate code), pytest is better suited as the framework for this automated test.

The following is a detailed introduction to the implementation process of this automated testing.

1 Preliminary preparations

1.1 Interface path table

Based on the interface document, record each interface's address, path, and request method in an Excel sheet: key is the interface name, type is the request method, and value is the interface path. The first row, baseurl, holds the base path, with type left blank. It is recommended to keep the interface names consistent with those in the interface document so the two are easy to cross-check. If the same interface supports multiple request methods, add a new row for each, with type set to the corresponding method. Recording the interface paths and request methods this way simplifies subsequent data extraction and processing.
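For illustration, once parsed, the interface path table can be held as a dictionary like the one below. The URL and interface names here are made-up examples, not taken from any real interface document:

```python
# Hypothetical parsed form of the interface path table:
# key -> {'type': request method, 'url': interface path}
apis = {
    'baseurl': {'type': '', 'url': 'http://example.com'},  # first row: base path, type blank
    'login':   {'type': 'POST', 'url': '/api/login'},
    'logout':  {'type': 'GET',  'url': '/api/logout'},
}

# the full request URL is the base path joined with the interface path
full_url = apis['baseurl']['url'] + apis['login']['url']
print(full_url)  # http://example.com/api/login
```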

1.2 Test case table

The test case table mainly records nine columns of data:

  • Test module: the interfaces under test are grouped into modules by function, which helps with locating problems and classifying data;
  • Use case number: records the case ID; it is recommended to name it after the module, e.g. for the login module the numbers are login_001, login_002;
  • Use case title: records what the test covers;
  • Precondition: when the interface under test needs data from another interface, fill in the required data here, e.g. login_001:token (login_001 is a use case number; token is the value of the token field in the response returned after that case runs). The prerequisite case must come before this case;
  • Test steps: the step numbers that drive ordered execution of the module's use cases;
  • Request interface: fill in the key name from the interface path table; e.g. to request the login interface, fill in the login key from the table above;
  • Request header: when the request header needs special parameters, for example an Authorization field whose value comes from the token returned by the login interface, fill in the header as: Content-Type=application/json,Authorization=<token>;
  • Request data: fill in the test case's request data in key=value format. If data returned by another interface is needed, add it in the precondition column first and then reference it here, e.g. username=admin,password=zxcvbnm,token=<token>;
  • Assertion: asserts on the data returned by the interface, mainly verifying that a particular field in the response is correct; also filled in key=value format.
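The key=value cells above (request header, request data, assertion) share one simple format, so they can all be parsed the same way. A small sketch using the example values from the bullets:

```python
# Example request-data cell from the table (values are illustrative)
cell = 'username=admin,password=zxcvbnm,token=<token>'

# split on ',' for pairs, then on the first '=' for key and value
request_data = dict(pair.split('=', 1) for pair in cell.split(','))
print(request_data['username'])  # admin
```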

2 Directory structure and operation process

2.1 File directory structure

(figure: project directory structure)

  • testcase folder: store the test case table;
  • api folder: store the interface path table;
  • common folder: stores shared data-processing scripts, such as data.py and utils.py (their main job is processing the table data, described in detail later) and config.py (the test suite's basic configuration);
  • report folder: used to store the test report generated after the test is completed;
  • conftest.py: pytest's global shared file; common fixtures and methods can be placed in conftest.py;
  • pytest.ini: pytest configuration file;

2.2 The running process of the test

After the automated test is triggered, the test data is first extracted and processed without involving the pytest framework. Once the data has been assembled into a test suite, it is handed to pytest for execution module by module, covering the test modules, HTTP requests, and assertions. After all modules have run, the results are written to the generated report, report.html. Finally, the results of the run can be sent to testers or developers by email or via a DingTalk bot.

3 Implementation process of test cases

The following briefly introduces the role of some scripts in the test case implementation process.

3.1 Read excel table

Use the xlrd library to read the contents of the Excel table. Python has many libraries for working with Excel data, such as openpyxl and xlsxwriter. Loop through every row, save each row as a list, and append it to self.list_data.

# -*- coding: utf-8 -*-
import xlrd


class Excel(object):
    def __init__(self, file_name):
        # open the workbook and collect its sheet names
        self.wb = xlrd.open_workbook(file_name)
        self.sh = self.wb.sheet_names()
        self.list_data = []

    def read(self):
        # append every row of every sheet to list_data
        for sheet_name in self.sh:
            sheet = self.wb.sheet_by_name(sheet_name)
            for i in range(sheet.nrows):
                self.list_data.append(sheet.row_values(i))

3.2 Format data into a test suite

The list saved in the first step is not yet in the format we need and cannot be used directly, so it is reformatted here. The case_header mapping in config.py replaces the Chinese column titles with English names, which are used as dictionary keys. Then, starting from the second row, each row is converted to a dictionary, producing data of the form [{'key1': 'value1', 'key2': 'value2'}, {}, {}, ...], which is saved as list_dict_data and returned.

def data_to_dict(data):
    """Convert the raw row list into a list of dicts keyed by English names.

    :param data: list of rows; the first row holds the column titles
    :return: [{'key1': 'value1', ...}, ...]
    """
    head = []
    list_dict_data = []
    # translate the Chinese column titles via case_header
    for d in data[0]:
        head.append(case_header.get(d, d))
    # turn every remaining row into a dict, stripping string values
    for row in data[1:]:
        dict_data = {}
        for i in range(len(head)):
            if isinstance(row[i], str):
                dict_data[head[i]] = row[i].strip()
            else:
                dict_data[head[i]] = row[i]
        list_dict_data.append(dict_data)
    return list_dict_data


# config.py: maps the Chinese column titles to English keys
case_header = {
    '测试模块': 'module',
    '用例编号': 'id',
    '用例标题': 'title',
    '前置条件': 'condition',
    '测试步骤': 'step',
    '请求接口': 'api',
    '请求方式': 'method',
    '请求头部': 'headers',
    '请求数据': 'data',
    '断言': 'assert',
    '步骤结果': 'score'}
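A condensed, self-contained re-run of the same conversion, using a two-column subset of case_header for illustration, shows the resulting shape:

```python
# Subset of case_header, just for this illustration
case_header = {'测试模块': 'module', '用例编号': 'id'}

def data_to_dict(data):
    # first row -> English keys, remaining rows -> dicts with stripped strings
    head = [case_header.get(t, t) for t in data[0]]
    return [{head[i]: (v.strip() if isinstance(v, str) else v)
             for i, v in enumerate(row)} for row in data[1:]]

rows = [['测试模块', '用例编号'], ['登录', 'login_001 ']]
result = data_to_dict(rows)
print(result)  # [{'module': '登录', 'id': 'login_001'}]
```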

3.3 Generating an executable test suite

The previous step produced data in the form [{'key1': 'value1', 'key2': 'value2'}, {}, {}, ...], but this format does not group the use cases by module: each list element is a separate use case, which is inconvenient for execution, so the returned data is processed again. Because the test cases and interface paths are stored in two separate Excel sheets, the data from the two must be merged. First read the interface path table and shape it into {'key': {'type': 'value', 'url': 'value'}}, then save the test cases into the steps dictionary in the order given by the test steps. For brevity, only the core part of the code is shown below.

# data: the list of dicts from the previous step; apis: the parsed interface path table
testsuite = []   # finished modules
testcase = {}    # the module currently being assembled
for d in data:
    # parse the key=value strings for request data, assertions and headers
    for key in ('data', 'assert', 'headers'):
        if d[key].strip():
            test_data = dict()
            for i in d[key].split(','):
                i = i.split('=')
                test_data[i[0]] = i[1]
            d[key] = test_data
    # a non-empty module cell starts a new module
    if d['module'].strip():
        if testcase:
            testsuite.append(testcase)
            testcase = {}
        testcase['module'] = d['module']
        testcase['steps'] = []
    no = str(d['step']).strip()
    if no:
        step = {'no': str(int(d['step']))}
        for key in ('id', 'title', 'condition', 'api', 'headers', 'data', 'assert'):
            if key == 'api':
                # join the base url with the interface's own path
                step[key] = {'type': apis[d.get(key, '')]['type'],
                             'url': apis['baseurl']['url'] + apis[d.get(key, '')]['url']}
            else:
                step[key] = d.get(key, '')
        testcase['steps'].append(step)
# append the last assembled module
if testcase:
    testsuite.append(testcase)
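As a sanity check, here is a minimal self-contained run of the same grouping logic on a single hand-written row. The apis dictionary and the field values are illustrative, not taken from the source tables:

```python
# Illustrative input: one row from data_to_dict that opens the '登录' module
apis = {'baseurl': {'type': '', 'url': 'http://example.com'},
        'login': {'type': 'POST', 'url': '/api/login'}}
data = [
    {'module': '登录', 'step': '1', 'id': 'login_001', 'title': '', 'condition': '',
     'api': 'login', 'headers': '', 'data': 'username=admin', 'assert': 'msg=ok'},
]

testsuite, testcase = [], {}
for d in data:
    # parse the key=value cells into dicts
    for key in ('data', 'assert', 'headers'):
        if d[key].strip():
            d[key] = dict(p.split('=', 1) for p in d[key].split(','))
    # a non-empty module cell starts a new module
    if d['module'].strip():
        if testcase:
            testsuite.append(testcase)
        testcase = {'module': d['module'], 'steps': []}
    if str(d['step']).strip():
        step = {'no': str(int(d['step']))}
        for key in ('id', 'title', 'condition', 'headers', 'data', 'assert'):
            step[key] = d.get(key, '')
        # merge in the interface path table entry
        step['api'] = {'type': apis[d['api']]['type'],
                       'url': apis['baseurl']['url'] + apis[d['api']]['url']}
        testcase['steps'].append(step)
if testcase:
    testsuite.append(testcase)

print(testsuite[0]['steps'][0]['api']['url'])  # http://example.com/api/login
```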

3.4 pytest executes the test suite

Encapsulate the HTTP request in conftest.py; thanks to pytest's fixture mechanism, the test file test_login.py can use it directly without an import. Only the code for issuing a POST request is shown here; other request types are similar. The pytest.fixture receives its data through the fixed request parameter, and pytest.mark.parametrize makes parametrization and data-driving more flexible: prepare the test data and prerequisite steps inside the fixture, parametrize the test method, and consume the prepared data there. In pytest.mark.parametrize('post_request', data, indirect=True), indirect=True makes pytest route data through the post_request fixture instead of passing it directly to the test, and data is the previously generated test cases of the module, containing every parameter needed to issue the HTTP request.

# conftest.py
import json
import logging

import pytest
import requests

logger = logging.getLogger(__name__)


@pytest.fixture()
def post_request(request):
    # pull the step's parameters out of request.param
    data = request.param['data']
    header = request.param['headers']
    url = request.param['api']['url']
    no = request.param['no']
    logger.info(f'request: {data}')
    response = requests.request('POST', url=url, headers=header, data=json.dumps(data))
    logger.info(f'response: {response.json()}')
    return response, no
# -*- coding: UTF-8 -*-
# test_login.py
import allure
import pytest

from common.data import module_data


class TestCase(object):

    @allure.feature('登录')
    @pytest.mark.parametrize('post_request', module_data(module='登录'), indirect=True)
    def test_login(self, post_request):
        response = post_request[0].json()
        no = int(post_request[1])
        assert response['msg'] == module_data(module='登录')[no - 1]['assert']['msg']
# common/data.py: Excel, file_path, suite_cases and data_to_dict are defined in this module
def module_data(module):
    excel = Excel(file_path.parent / 'testcase/testcase.xlsx')
    excel.read()
    cases = excel.list_data
    test_suit = suite_cases(data_to_dict(cases))
    # return the steps of the requested module
    for suite in test_suit:
        if suite['module'] == module:
            return suite['steps']
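To make the data flow concrete, here is a hypothetical step dict shaped like the entries module_data returns, together with the request body the post_request fixture would derive from it. The names and values are illustrative, not from the source:

```python
import json

# Hypothetical step dict, as produced by the suite-building step
step = {'no': '1',
        'api': {'type': 'POST', 'url': 'http://example.com/api/login'},
        'headers': {'Content-Type': 'application/json'},
        'data': {'username': 'admin', 'password': 'zxcvbnm'},
        'assert': {'msg': 'ok'}}

# the fixture effectively issues:
#   requests.request(step['api']['type'], url=step['api']['url'],
#                    headers=step['headers'], data=json.dumps(step['data']))
body = json.dumps(step['data'])
print(json.loads(body)['username'])  # admin
```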

3.5 Running Test Cases

In pytest.ini you can configure the file, class, and method matching rules as well as common command-line parameters. With it in place, simply run pytest on the command line (e.g. D:\py_test>pytest) to start the automated test. Without a pytest.ini you can instead run D:\py_test>pytest -s test_login.py --html=report/report.html: the -s flag prints the output of all test cases, and once the pytest-html plugin is installed, appending --html=<report save path> to the command generates the report there.

The pytest.ini file is configured as follows:

[pytest] 
# print output and save the generated report 
addopts = -s --html=report/report.html 
# file matching rule 
python_files = test_*.py 
# class matching rule 
python_classes = Test* 
# method matching rule 
python_functions = test_*

3.6 Result display

Test cases can be run from the IDE or the command line. After the run, an HTML test report is generated and can be opened in a browser to view the results. Besides pytest-html, pytest can also use allure to produce more polished reports. Below are the HTML reports generated by pytest-html and allure respectively. The pytest-html report records more detail, including the test log, the number of passed/failed/skipped cases, and the run time of each case; the allure report is more readable and shows the results at a glance.

The test report generated by pytest-html:

Test report generated by allure:

4 Summary

Completing this project deepened my understanding of the pytest testing framework. pytest can also be combined with Jenkins to bring the automated tests into continuous integration, with scheduled or conditionally triggered builds, which effectively improves testing efficiency and saves labor. Of course, this is not the only possible implementation; the current one still has many shortcomings and will continue to be refined and improved.


Origin blog.csdn.net/m0_68405758/article/details/131811428