Ideas for writing automated use cases (using pytest to write a test script)

Table of contents

1. Define the test object

2. Write test cases

   Construct request data

   Encapsulate the test code

   Set assertions

3. Execute the script and obtain the test results

4. Summary


Following on from the previous articles, we will use the pytest framework to write an interface automation test case for a real interface, in order to clarify the overall approach to writing automated interface test cases.

When we search for a weather query on Baidu, a dedicated weather results page is returned, as shown in the figure below:

Next, we take the weather query interface as an example to write an interface test case script.

1. Define the test object

To test a function through its interface, we first need to determine which interface is called to implement that function. The details of the interface (function, protocol, URL, request method, request parameter descriptions, response parameter descriptions, etc.) can be obtained from the interface documentation provided by the developers, or by capturing packets when no documentation is available. Once we have found the corresponding interface, i.e. the test object, we can proceed with purpose.

1. In this example there is no interface document to provide interface-related information, and we do not even know the request URL, so we first capture packets with Fiddler to obtain the interface information.

Through packet capture, we captured the information of this interface as follows:

Request url: https://weathernew.pae.baidu.com/weathernew/pc

Request method: GET

Request parameters: query (the query string, e.g. "浙江杭州天气") and srcid (e.g. 4982)

2. After capturing the above interface information, we first write a simple script to request the interface, as follows:

import requests

url = "https://weathernew.pae.baidu.com/weathernew/pc"
params = {
    "query": "浙江杭州天气",
    "srcid": 4982
}
res = requests.get(url=url, params=params)
print(res.status_code)
print(res.text)

Running the code shows that the interface request succeeds and results are returned, as follows:

3. Clarify requirements and determine use cases.

When automating a test for an interface, we need to be clear about which test points the use case should verify. Some interfaces need both positive and negative (exception) verification, while for others it may be enough to perform only positive verification during automation.

Let's analyze the weather query interface in our example. There are two main test points:

  • Positive case: query an existing city and get that city's weather
  • Negative case: query a city that does not exist and get an error prompt

2. Write test cases

When writing test cases, we need to encapsulate the code, either into test classes/methods or into test functions. pytest has naming requirements for the tests it collects; for details, see my earlier article on pytest test naming rules.

As for whether to encapsulate into a class or a function, there is no hard requirement. Generally, interfaces related to the same scenario or the same test point can be grouped into one class.
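As a quick reminder of those naming rules, pytest only collects tests that follow its default conventions. A minimal sketch (the names here are illustrative, not from the article's code):

```python
# pytest's default collection conventions, illustrated:
# - test files:   test_*.py or *_test.py
# - test classes: named Test* (and having no __init__ method)
# - test methods/functions: named test_*

class TestWeather:
    # Collected: the class name starts with "Test".
    def test_example(self):
        # Collected: the method name starts with "test_".
        assert 1 + 1 == 2

    def helper(self):
        # NOT collected: the name does not start with "test_".
        return "not a test"


def test_standalone():
    # Collected: a module-level function starting with "test_".
    assert "weather" in "weather query"
```

Methods like `helper` are still useful as shared utilities inside a test class; pytest simply does not run them as tests.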

The use case must also set assertions to verify that the returned content matches expectations. A test case without assertions is meaningless.

Construct request data

For the positive case, the request data is as follows:

params = {
    "query": "浙江杭州天气",
    "srcid": 4982
}

For the negative case, the request data is as follows:

params = {
    "query": "微信公众号:测试上分之路",
    "srcid": 4982
}

We already obtained the positive-case result when we debugged the interface above, as shown in the earlier screenshot.

Now let's look at the result of the negative case, to prepare for setting assertions later. The result is as follows:

After sending the negative request, the returned status code is still 200, but the result shows that queries for this city are not yet supported, and the window.tplData content present in the positive response is missing.
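The distinction between the two responses can be summed up in a small helper (classify_weather_response is a hypothetical function for illustration, not part of the article's test code):

```python
def classify_weather_response(status_code: int, body: str) -> str:
    """Classify a weather response by the markers observed above."""
    if status_code != 200:
        return "request failed"
    if "window.tplData" in body:
        # The positive response embeds the weather data in window.tplData.
        return "weather found"
    if "暂未开通此城市查询" in body:
        # The negative response says queries for this city are not yet supported.
        return "city not supported"
    return "unknown"
```

Both cases return HTTP 200, which is why the assertions we set later have to inspect the response body rather than the status code alone.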

Encapsulate the test code

Since these are two different test cases for the same interface, we encapsulate a test class dedicated to this interface. Sample code:

import requests


class TestWeather:
    '''
    Verify the Baidu weather query interface: https://weathernew.pae.baidu.com/weathernew/pc
    '''

    def test_get_weather_normal(self):
        '''Positive case - query the weather of an existing city'''
        url = "https://weathernew.pae.baidu.com/weathernew/pc"
        params = {
            "query": "浙江杭州天气",
            "srcid": 4982
        }
        res = requests.get(url=url, params=params)

    def test_get_weather_error(self):
        '''Negative case - query the weather of a nonexistent city'''
        url = "https://weathernew.pae.baidu.com/weathernew/pc"
        params = {
            "query": "微信公众号:测试上分之路",
            "srcid": 4982
        }
        res = requests.get(url=url, params=params)

Note that this code contains no assertions yet, so it is not a complete use case. Assertions are deliberately left to the next step to illustrate the process; we will add them after analyzing the results.

Set assertions

An assertion verifies whether the result is what we expect. For how assertions work in pytest, see the article pytest-assertion.

When setting assertions, we first need to decide which fields to check. Generally, the response status code should be asserted; status_code == 200 indicates that the request succeeded. Then assert on other key fields to verify that the interface's function is actually implemented.

From the results above, the positive case can use the following assertions:

# Assert that the status code equals 200; if so, the assertion passes
assert res.status_code == 200

# Assert that "window.tplData" appears in the result; the assertion passes if it is present
assert "window.tplData" in res.text

From the results above, the negative case can use the following assertions:

# Assert that the status code equals 200; if so, the assertion passes
assert res.status_code == 200

# Assert on "window.tplData"; note that here the assertion passes when it is ABSENT
assert "window.tplData" not in res.text

# Assert that "暂未开通此城市查询" ("queries for this city are not yet supported") appears; the assertion passes if present
assert "暂未开通此城市查询" in res.text

3. Execute the script and obtain the test results

When using the pytest framework to manage and execute the use cases, you need to install pytest first and import it in the module. If you are unfamiliar with this, see my pytest series of articles; I won't go into detail here.

The complete sample code is as follows:

# @time: 2022-03-20
# @author: 给你一页白纸
# WeChat official account: 测试上分之路

import requests
import pytest


class TestWeather:
    '''
    Verify the Baidu weather query interface: https://weathernew.pae.baidu.com/weathernew/pc
    '''

    def test_get_weather_normal(self):
        '''Positive case - query the weather of an existing city'''
        url = "https://weathernew.pae.baidu.com/weathernew/pc"
        params = {
            "query": "浙江杭州天气",
            "srcid": 4982
        }
        res = requests.get(url=url, params=params)
        # print(res.status_code)
        # print(res.text)
        assert res.status_code == 200
        assert "window.tplData" in res.text

    def test_get_weather_error(self):
        '''Negative case - query the weather of a nonexistent city'''
        url = "https://weathernew.pae.baidu.com/weathernew/pc"
        params = {
            "query": "微信公众号:测试上分之路",
            "srcid": 4982
        }
        res = requests.get(url=url, params=params)
        print(res.status_code)
        print(res.text)
        assert res.status_code == 200
        assert "window.tplData" not in res.text
        assert "暂未开通此城市查询" in res.text


if __name__ == '__main__':
    # Execute the use cases with pytest
    pytest.main()

Of course, since the url is shared by both test methods, it is better to extract it (for example, into a class attribute) instead of defining the same variable in each method.
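One way to do this is sketched below; making url a class attribute is one reasonable choice, not the only one:

```python
import requests


class TestWeather:
    '''Verify the Baidu weather query interface, with the shared url extracted.'''

    # Shared by all test methods instead of being redefined in each one.
    url = "https://weathernew.pae.baidu.com/weathernew/pc"

    def test_get_weather_normal(self):
        '''Positive case - query the weather of an existing city'''
        params = {"query": "浙江杭州天气", "srcid": 4982}
        res = requests.get(url=self.url, params=params)
        assert res.status_code == 200
        assert "window.tplData" in res.text

    def test_get_weather_error(self):
        '''Negative case - query the weather of a nonexistent city'''
        params = {"query": "微信公众号:测试上分之路", "srcid": 4982}
        res = requests.get(url=self.url, params=params)
        assert res.status_code == 200
        assert "window.tplData" not in res.text
        assert "暂未开通此城市查询" in res.text
```

If the url ever changes, it now only needs to be updated in one place.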

The execution results are as follows:

4. Summary

For a single-interface automation test case, we can follow the steps above: identify the test object --> design the test cases --> write the test script --> execute the script and obtain the results. These steps give us a basic approach to writing automated use cases (which is important for developing an automated-testing mindset) and lay the foundation for subsequent learning and practice.

In practice, when automating the tests for a project, there is almost never just one test case. So how do we manage use cases, execute them, and collect the results when there are many? That is exactly the problem a unit testing framework is designed to solve.
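As a small preview of how pytest helps with this, several similar cases can be driven by a single parametrized test. A sketch (the case data simply mirrors the two scenarios above; a real version would send the request and assert on the body as in the article):

```python
import pytest

# Each tuple: (query string, whether weather data is expected in the response)
CASES = [
    ("浙江杭州天气", True),
    ("微信公众号:测试上分之路", False),
]


@pytest.mark.parametrize("query, expect_weather", CASES)
def test_weather_query(query, expect_weather):
    # A real implementation would call requests.get(...) here and assert
    # on the response body; this sketch only shows the parametrization mechanics.
    assert isinstance(query, str) and query
    assert isinstance(expect_weather, bool)
```

pytest generates one independent test per tuple in CASES, so each scenario passes or fails on its own in the report.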


Origin: blog.csdn.net/MXB_1220/article/details/131731482