Comparison of mainstream interface testing frameworks: which one should we use?

The company plans to roll out interface automation testing systematically, so I need to survey the mainstream interface testing frameworks and present the characteristics and usage of each one to the back-end testing colleagues (who mainly test interfaces). Based on the characteristics of their interfaces, they put forward requirements to help decide which framework suits us best.

Requirements:


1. Interface test cases are easy to write.
2. Interfaces are easy to debug.
3. Supports data initialization.
4. Generates test reports.
5. Supports parameterization.


### Robot Framework


Advantages:

  • Keyword-driven, with support for custom user keywords.

  • Supports test log and report generation.

  • Supports developing system keywords, so it is easy to extend.

  • Supports database operations.

Disadvantages:

  • Interface test cases are not concise to write.

  • It requires its own specific syntax.

*** Settings ***
Library    RequestsLibrary
Library    Collections

*** Test Cases ***
test_get_event_list    # query the event list (GET request)
    ${payload}=    Create Dictionary    eid=1
    Create Session    event    http://127.0.0.1:8000/api
    ${r}=    Get Request    event    /get_event_list/    params=${payload}
    Should Be Equal As Strings    ${r.status_code}    200
    Log    ${r.json()}
    ${dict}=    Set Variable    ${r.json()}
    # assert the result
    ${msg}=    Get From Dictionary    ${dict}    message
    Should Be Equal    ${msg}    success
    ${sta}=    Get From Dictionary    ${dict}    status
    ${status}=    Evaluate    int(200)
    Should Be Equal    ${sta}    ${status}

Summary: not worth considering; nobody wants to write interface test cases like this.


### JMeter


Advantages:

  • Supports parameterization.

  • No code needs to be written.

Disadvantages:

  • Creating interface test cases is not efficient.

  • It cannot generate a test report that shows the execution result of each interface.

Summary: not worth considering. Writing interface cases is inconvenient, and most importantly it cannot generate the test report we need. If you want to do interface performance testing, it is worth a look.


### HttpRunner


Advantages:

  • Test cases are written in YAML/JSON format, so you focus on the interface itself.

  • Interface cases are simple to write.

  • Generates test reports.

  • Has an interface recording function.

Disadvantages:

  • There is no editor plugin to check the syntax, so it is easy to make mistakes.

  • The official documentation lacks detailed descriptions.

  • Extending it is inconvenient.

[
  {
    "config": {
      "name": "testcase description",
      "variables": [],
      "request": {
        "base_url": "http://127.0.0.1:5000",
        "headers": {
          "User-Agent": "python-requests/2.18.4"
        }
      }
    }
  },
  {
    "test": {
      "name": "test case name",
      "request": {
        "url": "/api/get-token",
        "headers": {
          "device_sn": "FwgRiO7CNA50DSU",
          "user_agent": "iOS/10.3",
          "os_platform": "ios",
          "app_version": "2.8.6",
          "Content-Type": "application/json"
        },
        "method": "POST",
        "date": {"sign": "958a05393efef0ac7c0fb80a7eac45e24fd40c27"}
      },
      "validate": [
        {"eq": ["status_code", 200]},
        {"eq": ["headers.Content-Type", "application/json"]},
        {"eq": ["content.success", true]},
        {"eq": ["content.token", "baNLX1zhFYP11Seb"]}
      ]
    }
  }]

Summary: worth considering, but interface data initialization may need to be handled separately.
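
Since HttpRunner itself does not handle data initialization here, one option is a small standalone Python script that seeds the test data through the API before the HttpRunner cases run. The sketch below is purely illustrative: the base URL and the /add_event/ endpoint are hypothetical placeholders, not part of HttpRunner.

# seed_data.py - hypothetical data-initialization script, run before the HttpRunner cases
import requests

BASE_URL = "http://127.0.0.1:8000/api"  # assumed test environment


def init_event_data():
    # the endpoint and payload are placeholders for whatever setup API you have
    payload = {"eid": 1, "name": "demo event", "limit": 100, "address": "beijing"}
    r = requests.post(BASE_URL + "/add_event/", data=payload)
    r.raise_for_status()
    print("seed result:", r.json())


if __name__ == "__main__":
    init_event_data()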


### Gauge


A BDD (behavior-driven development) testing framework.

Advantages:

  • Behavior (spec) files are separated from script files, which is essentially data-driven.

  • Powerful and flexible: the interface test cases are essentially written in Python.

  • Automatically generates test reports.

  • VS Code has a supporting plugin.

Disadvantages:

  • The learning threshold is slightly high; you need to understand how BDD is used.

  • You need to know Markdown syntax.

Behavior description file:

## test post request

* post "http://httpbin.org/post" interface     
     |key  | status_code|     
     |------|-----------|     
     |value1|200        |     
     |value2|200        |     
     |value3|200        |

Test script:

import requests
from getgauge.python import step


@step("post <url> interface <table>")
def post_interface(url, table):
    # collect the table columns
    values = []
    status_codes = []
    for word in table.get_column_values_with_name("key"):
        values.append(word)
    for word in table.get_column_values_with_name("status_code"):
        status_codes.append(word)
    # send one POST request per row and assert the status code
    for i in range(len(values)):
        r = requests.post(url, data={"key": values[i]})
        result = r.json()
        assert r.status_code == int(status_codes[i])

Summary: recommended. BDD has a certain learning threshold, so it depends on how quickly the testers can learn and accept it.


### unittest + requests + HTMLTestRunner


Combine existing frameworks and libraries into a custom solution of your own.

Advantages:

  • Flexible and powerful enough: layered test cases, data-driven testing, test reports, CI integration, and so on.

Disadvantages:

  • There is a certain learning cost.

Data file:

{
    "test_case1": {
        "key": "value1",
        "status_code": 200
    },
    "test_case2": {
        "key": "value2",
        "status_code": 200
    },
    "test_case3": {
        "key": "value3",
        "status_code": 200
    },
    "test_case4": {
        "key": "value4",
        "status_code": 200
    }}

Test case:

import requests
import unittest
from ddt import ddt, file_data


@ddt
class InterfaceTest(unittest.TestCase):

    def setUp(self):
        self.url = "http://httpbin.org/post"

    def tearDown(self):
        print(self.result)

    @file_data("./data/test_data_dict.json")
    def test_post_request(self, key, status_code):
        r = requests.post(self.url, data={"key": key})
        self.result = r.json()
        self.assertEqual(r.status_code, status_code)
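
The section title also mentions HTMLTestRunner, which is what actually produces the HTML report; the test case above only defines the tests. A minimal runner sketch, assuming the classic standalone HTMLTestRunner.py module is on the path and the test file matches the discovery pattern below (both are assumptions):

# run_tests.py - runner script that generates the HTML report
import unittest

from HTMLTestRunner import HTMLTestRunner  # classic standalone module, assumed to be on the path

if __name__ == "__main__":
    # discover the interface test cases (the file pattern is an assumption)
    suite = unittest.defaultTestLoader.discover(".", pattern="test_interface*.py")
    # the classic HTMLTestRunner writes encoded bytes, so open the report file in binary mode
    with open("report.html", "wb") as f:
        runner = HTMLTestRunner(stream=f,
                                title="Interface Test Report",
                                description="unittest + requests + ddt")
        runner.run(suite)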

Summary: recommended. The code is relatively simple, and it is flexible enough.


I spent two days sorting through these frameworks; the real focus was getting to know HttpRunner and Gauge.
HttpRunner has no editor plugin, and a test case is itself just a YAML/JSON configuration file, so if you write the configuration wrong but it is still valid YAML/JSON, you cannot tell; you only find out after running it. It is like writing code in Notepad: only when you run it do you know whether the code is wrong.



In addition, extending it is not particularly convenient. You implement some helper functions separately in Python and then reference them in the JSON file, for example

```{"device_sn": "${gen_random_string(15)}"}```

which is how the ```gen_random_string()``` function gets called.
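
For reference, such helpers live in a plain Python file next to the test cases (in HttpRunner this file is conventionally named debugtalk.py). A minimal sketch of what gen_random_string() could look like; the implementation details are illustrative, not taken from the official project:

# debugtalk.py - custom helper functions that HttpRunner can reference via ${...}
import random
import string


def gen_random_string(str_len):
    # return a random alphanumeric string of the requested length
    return "".join(random.choice(string.ascii_letters + string.digits)
                   for _ in range(int(str_len)))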

I have already shared two introductory articles on Gauge. Although the BDD philosophy is not a natural fit for interface testing, it is by no means impossible. The only real drawback is that describing interface behavior in BDD terms is awkward; everything else is solid: it supports parameterization, assertions are simple to write, the test reports look good, and since the functionality is ultimately implemented in Python it is very flexible.

unittest + requests + HTMLTestRunner is the approach I am most familiar with, and it has almost no weak points. I have written many test cases with it before, and adding ddt this time seems to make it even more complete.


Origin blog.csdn.net/m0_68405758/article/details/130047981