In the near future I plan to prioritize interface test coverage, and for that we need a testing framework. After some thought, I decided I wanted to do something different this time.
- Interface testing needs to be efficient. Testers want feedback on the results quickly, yet the number of interfaces is usually large and keeps growing, so execution efficiency has to improve.
- Interface test cases can also double as simple stress tests, and stress testing requires concurrency.
- Interface test cases contain a lot of repetition. Testers should only have to care about test design, so the repetitive work is best automated. Pytest and allure are excellent tools, and the new framework should integrate them.
- Test cases should be as concise as possible, ideally written in yaml, so that the data maps directly onto request data. Writing a case then feels like filling in the blanks, which makes the framework easy to promote to team members with no automation experience.

I have also been very interested in Python coroutines for a while. Having studied them for some time, I kept hoping to apply what I had learned, so I decided to use aiohttp for the HTTP requests. But pytest does not support event loops, so combining the two takes some effort. I kept thinking, and concluded that the whole thing can be split into two parts. The first part reads the yaml test cases, requests the interfaces over HTTP, and collects the test data. The second part dynamically generates test cases that pytest recognizes from that data, then executes them and produces the test report. This way the two combine perfectly, and it fits my vision exactly. With the idea settled, the next step was to implement it.
Part one (the whole process must be asynchronous and non-blocking)
Reading the yaml test cases
I designed a simple use case template like this. Its advantage is that the parameter names correspond one-to-one with aiohttp.ClientSession().request(method, url, **kwargs), so the data can be passed straight to the request method without any conversion: concise, elegant, and expressive.
args:
  - post
  - /xxx/add
kwargs:
  -
    caseName: Add xxx
    data:
      name: ${gen_uid(10)}
validator:
  -
    json:
      successed: True
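To make the mapping concrete, here is a minimal sketch of how such a parsed case could be fed to aiohttp (the demo function and the domain are hypothetical illustrations, not part of the framework; the real framework code follows below):

from aiohttp import ClientSession

async def demo(case):
    # case is the dict parsed from the yaml template above
    method, api = case['args']
    async with ClientSession() as session:
        for step in case['kwargs']:
            step = dict(step)
            step.pop('caseName')  # framework metadata, not a request argument
            # the remaining keys (data, params, json, ...) pass through unchanged
            async with session.request(method, 'http://example.com' + api, **step) as resp:
                print(resp.status)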
Aiofiles, a third-party library, can be used to read files asynchronously. yaml_load is a coroutine, which ensures the main process is not blocked while reading yaml test cases; the test case data is obtained with await yaml_load().
import os
import re

import aiofiles
import yaml

async def yaml_load(dir='', file=''):
    """
    Asynchronously read a yaml file and resolve the special template values in it
    :param dir: directory containing the test case file
    :param file: file name (or full path when dir is empty)
    :return: the test case data as a BXMDict
    """
    if dir:
        file = os.path.join(dir, file)
    async with aiofiles.open(file, 'r', encoding='utf-8', errors='ignore') as f:
        data = await f.read()
    data = yaml.load(data, Loader=yaml.FullLoader)
    # syntax for function-call templates, e.g. ${gen_uid(10)}
    pattern_function = re.compile(r'^\${([A-Za-z_]+\w*\(.*\))}$')
    pattern_function2 = re.compile(r'^\${(.*)}$')
    # syntax for default values, e.g. $(a:b)
    pattern_function3 = re.compile(r'^\$\((.*)\)$')

    def my_iter(data):
        """
        Walk the test case recursively, handle each data type accordingly,
        and convert the template syntax into concrete values
        :param data: any node of the parsed yaml document
        :return: the resolved value
        """
        if isinstance(data, (list, tuple)):
            for index, _data in enumerate(data):
                data[index] = my_iter(_data) or _data
        elif isinstance(data, dict):
            for k, v in data.items():
                data[k] = my_iter(v) or v
        elif isinstance(data, (str, bytes)):
            m = pattern_function.match(data)
            if not m:
                m = pattern_function2.match(data)
            if m:
                # evaluate the template expression, e.g. a call to gen_uid(10)
                return eval(m.group(1))
            if not m:
                m = pattern_function3.match(data)
                if m:
                    K, k = m.group(1).split(':')
                    return bxmat.default_values.get(K).get(k)
        return data

    my_iter(data)
    return BXMDict(data)
As you can see, test cases support a small template syntax, such as ${function} and $(a:b), which greatly expands what testers can express in their use cases.
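A template helper only needs to be resolvable in the scope where eval runs. For illustration, a gen_uid could look like this (a hypothetical sketch; the original implementation is not shown in this post):

import random
import string

def gen_uid(n):
    # return a random n-character string; referenced in yaml as ${gen_uid(10)}
    return ''.join(random.choices(string.ascii_lowercase + string.digits, k=n))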
HTTP requests to test the interfaces
HTTP requests go through aiohttp's session.request(method, url, **kwargs) directly. http is also a coroutine, which ensures network requests do not block; the interface test data is obtained with await http().
async def http(session, domain, *args, **kwargs):
    """
    HTTP request handler
    :param session: the shared aiohttp ClientSession
    :param domain: service address
    :param args: the (method, api) pair from the test case
    :param kwargs: request arguments, passed straight through to session.request
    :return: dict with the parsed response, the url and the request arguments
    """
    method, api = args
    arguments = kwargs.get('data') or kwargs.get('params') or kwargs.get('json') or {}
    # add the token to the request headers
    kwargs.setdefault('headers', {}).update({'token': bxmat.token})
    # join the service address and the api path
    url = ''.join([domain, api])
    async with session.request(method, url, **kwargs) as response:
        res = await response_handler(response)
        return {
            'response': res,
            'url': url,
            'arguments': arguments
        }
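response_handler is not shown in this post; here is a minimal sketch of what it might do, assuming the services respond with json (this is my assumption, not the original implementation):

async def response_handler(response):
    # try to parse the body as json regardless of the Content-Type header,
    # falling back to plain text (assumed behavior)
    try:
        return await response.json(content_type=None)
    except ValueError:
        return await response.text()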
Collect test data
Coroutine concurrency is really fast. To avoid hammering the service into circuit-breaking, asyncio.Semaphore(num) can be introduced to control the concurrency.
async def entrace(test_cases, loop, semaphore=None):
    """
    Entry point of the http execution
    :param test_cases: the yaml test case files to run
    :param loop: the event loop
    :param semaphore: optional asyncio.Semaphore limiting concurrency
    :return:
    """
    res = BXMDict()
    # In CookieJar's update_cookies method, if unsafe=False and the host is an
    # IP address, the client will not update the cookie information, so the
    # session cannot keep the login state correctly. Therefore the cookie_jar
    # parameter is given a manually created CookieJar object with unsafe=True.
    async with ClientSession(loop=loop, cookie_jar=CookieJar(unsafe=True), headers={'token': bxmat.token}) as session:
        await advertise_cms_login(session)
        if semaphore:
            async with semaphore:
                for test_case in test_cases:
                    data = await one(session, case_name=test_case)
                    res.setdefault(data.pop('case_dir'), BXMList()).append(data)
        else:
            for test_case in test_cases:
                data = await one(session, case_name=test_case)
                res.setdefault(data.pop('case_dir'), BXMList()).append(data)
    return res
async def one(session, case_dir='', case_name=''):
    """
    The full run of one test case file: read the .yml test case,
    execute the http request and return the result.
    Every operation is asynchronous and non-blocking.
    :param session: the session object
    :param case_dir: test case directory
    :param case_name: test case name
    :return:
    """
    project_name = case_name.split(os.sep)[1]
    domain = bxmat.url.get(project_name)
    test_data = await yaml_load(dir=case_dir, file=case_name)
    result = BXMDict({
        'case_dir': os.path.dirname(case_name),
        'api': test_data.args[1].replace('/', '_'),
    })
    if isinstance(test_data.kwargs, list):
        for index, each_data in enumerate(test_data.kwargs):
            step_name = each_data.pop('caseName')
            r = await http(session, domain, *test_data.args, **each_data)
            r.update({'case_name': step_name})
            result.setdefault('responses', BXMList()).append({
                'response': r,
                'validator': test_data.validator[index]
            })
    else:
        step_name = test_data.kwargs.pop('caseName')
        r = await http(session, domain, *test_data.args, **test_data.kwargs)
        r.update({'case_name': step_name})
        result.setdefault('responses', BXMList()).append({
            'response': r,
            'validator': test_data.validator
        })
    return result
The event loop is responsible for executing the coroutines and returning the results. When collecting the final results, I group them by test case directory, which lays a good foundation for the subsequent automatic generation of pytest-recognized test cases.
import asyncio

def main(test_cases):
    """
    Main event loop function, responsible for executing all interface requests
    :param test_cases:
    :return:
    """
    loop = asyncio.get_event_loop()
    semaphore = asyncio.Semaphore(bxmat.semaphore)
    # register the coroutine with the event loop and start the loop
    task = loop.create_task(entrace(test_cases, loop, semaphore))
    try:
        loop.run_until_complete(task)
    finally:
        loop.close()
    return task.result()
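Calling it is then a single synchronous entry point; for example (the file path is a hypothetical illustration):

# run a batch of yaml test cases and collect the grouped results
results = main(['cases/project/xxx/add.yml'])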
Part two
Dynamically generating test cases that pytest recognizes
First, a word about how pytest operates. pytest first looks for a conftest.py file in the current directory and, if it finds one, runs it first. It then follows the command-line arguments to search the target directory for .py files whose names start or end with test. In each file it finds, it analyzes the fixtures: session- or module-scoped ones with autouse=True, or those requested via pytest.mark.usefixtures(...), are run first. It then discovers classes and methods by similar naming rules. Roughly, that is the process.
It follows that the key to getting pytest to run anything is that there is at least one testxx.py file recognized by pytest's discovery mechanism, containing a TestxxClass class, which in turn has at least one def testxx(self) method.
At this point there is no test file pytest recognizes, so my idea was to first create a bootstrap test file that is responsible for getting pytest moving; its test method can simply be skipped with pytest.skip(). The goal then becomes: once pytest has started, dynamically generate the use cases, have pytest discover and execute them, and produce the test report, all in one go.
# test_bootstrap.py
import pytest

class TestStarter(object):

    def test_start(self):
        pytest.skip('This is the bootstrap test method; it is not executed')
What I thought of was fixtures, because fixtures provide setup capability. By defining a session-scoped fixture and marking TestStarter with usefixtures, I can preprocess things before TestStarter runs, so I put the case-generation work into that fixture to reach the goal.
# test_bootstrap.py
import pytest

@pytest.mark.usefixtures('te', 'test_cases')
class TestStarter(object):

    def test_start(self):
        pytest.skip('This is the bootstrap test method; it is not executed')
pytest has a --rootdir parameter. The core of this fixture is to take the target directory from --rootdir, find the .yml test files in it, run them to obtain the test data, and then create a testxx.py test file per directory whose content is the content variable below. These files are then handed to pytest.main() to execute the generated cases; in other words, another pytest runs inside pytest! Finally, the generated test files are deleted. Note that the fixture must be defined in conftest.py, because pytest auto-discovers whatever conftest defines, with no extra import required.
# conftest.py
import os
import re

import pytest

# main is the event loop entry point defined in part one

@pytest.fixture(scope='session')
def test_cases(request):
    """
    Test case generation
    :param request:
    :return:
    """
    var = request.config.getoption("--rootdir")
    test_file = request.config.getoption("--tf")
    env = request.config.getoption("--te")
    cases = []
    if test_file:
        cases = [test_file]
    else:
        if os.path.isdir(var):
            for root, dirs, files in os.walk(var):
                if re.match(r'\w+', root):
                    if files:
                        cases.extend([os.path.join(root, file) for file in files if file.endswith('yml')])
    data = main(cases)
    content = """
import allure

from conftest import CaseMetaClass


@allure.feature('{} interface tests ({} project)')
class Test{}API(object, metaclass=CaseMetaClass):

    test_cases_data = {}
"""
    test_cases_files = []
    if os.path.isdir(var):
        for root, dirs, files in os.walk(var):
            if not ('.' in root or '__' in root):
                if files:
                    case_name = os.path.basename(root)
                    project_name = os.path.basename(os.path.dirname(root))
                    test_case_file = os.path.join(root, 'test_{}.py'.format(case_name))
                    with open(test_case_file, 'w', encoding='utf-8') as fw:
                        fw.write(content.format(case_name, project_name, case_name.title(), data.get(root)))
                    test_cases_files.append(test_case_file)
    if test_file:
        temp = os.path.dirname(test_file)
        py_file = os.path.join(temp, 'test_{}.py'.format(os.path.basename(temp)))
    else:
        py_file = var
    pytest.main([
        '-v',
        py_file,
        '--alluredir',
        'report',
        '--te',
        env,
        '--capture',
        'no',
        '--disable-warnings',
    ])
    # clean up the generated files after the inner pytest run
    for file in test_cases_files:
        os.remove(file)
    return test_cases_files
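--tf and --te are custom command-line options (--rootdir is pytest's own in recent versions; older versions would need it registered the same way). They have to be declared in conftest.py, roughly like this (a sketch; the defaults and help texts are my assumptions):

def pytest_addoption(parser):
    # register the custom options read via request.config.getoption above
    parser.addoption("--tf", action="store", default="", help="run a single yml test file")
    parser.addoption("--te", action="store", default="", help="target environment name")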
As you can see, the generated test file contains a TestxxAPI class that has only a test_cases_data attribute and no testxx methods, so by itself it is not a test case pytest recognizes and cannot run at all. How is that solved? The answer is CaseMetaClass.
function_express = """
def {}(self, response, validata):
    with allure.step(response.pop('case_name')):
        validator(response, validata)"""

class CaseMetaClass(type):
    """
    Automatically generate test cases from the interface call results
    """

    def __new__(cls, name, bases, attrs):
        test_cases_data = attrs.pop('test_cases_data')
        for each in test_cases_data:
            api = each.pop('api')
            function_name = 'test' + api
            test_data = [tuple(x.values()) for x in each.get('responses')]
            function = gen_function(function_express.format(function_name),
                                    namespace={'validator': validator, 'allure': allure})
            # integrate allure story-level reporting
            story_function = allure.story('{}'.format(api.replace('_', '/')))(function)
            attrs[function_name] = pytest.mark.parametrize('response,validata', test_data)(story_function)
        return super().__new__(cls, name, bases, attrs)
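The validator helper referenced above is also not shown in the post; here is a minimal sketch, assuming it checks the expected json fields against the actual response (my assumption, not the original implementation):

def validator(response, validata):
    # compare each expected json field against the actual response body (assumed)
    expected = validata.get('json', {})
    actual = response.get('response', {})
    for key, value in expected.items():
        assert actual.get(key) == value, '{}: expected {}, got {}'.format(key, value, actual.get(key))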
CaseMetaClass is a metaclass. It reads the content of the test_cases_data attribute and dynamically generates one method object per interface. After each method object is decorated with allure's fine-grained reporting and pytest's parametrization, it is assigned to the class attribute test+api. In other words, by the time a TestxxAPI class has been created it already owns several testxx methods, so when pytest runs internally it can discover and execute these cases.
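For illustration, a generated test file would look roughly like this (the data shown is a hypothetical example following the structure that one() collects):

# test_xxx.py (generated; hypothetical data)
import allure

from conftest import CaseMetaClass

@allure.feature('xxx interface tests (demo project)')
class TestXxxAPI(object, metaclass=CaseMetaClass):

    test_cases_data = [{
        'api': '_xxx_add',
        'responses': [{
            'response': {
                'case_name': 'Add xxx',
                'response': {'successed': True},
                'url': 'http://example.com/xxx/add',
                'arguments': {'name': 'a1b2c3d4e5'},
            },
            'validator': {'json': {'successed': True}},
        }],
    }]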
import builtins
import types

def gen_function(function_express, namespace={}):
    """
    Dynamically generate a function object. The function scope defaults to
    builtins.__dict__, merged with the variables in namespace.
    :param function_express: function source, e.g. 'def foobar(): return "foobar"'
    :param namespace: extra names the generated function body may reference
    :return: the generated function object
    """
    builtins.__dict__.update(namespace)
    module_code = compile(function_express, '', 'exec')
    # the first code constant of the compiled module is the function's code object
    function_code = [c for c in module_code.co_consts if isinstance(c, types.CodeType)][0]
    return types.FunctionType(function_code, builtins.__dict__)
When generating the function object, pay attention to the namespace problem: it is best to default the function scope to builtins.__dict__ and merge in any custom names through the namespace parameter.
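A quick usage example, using the expression from the docstring:

f = gen_function('def foobar(): return "foobar"')
print(f())  # prints: foobar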
Follow-up (automatically generating the yml test files)
At this point the core functionality of the framework was complete. After several projects, the results exceeded expectations: writing use cases could not be easier, the runs could not be faster, and the test report is clear and good-looking. Yet I was still a little tired. Why?
My interface-testing workflow so far is: if the project integrates swagger, obtain the interface information from swagger and manually create the project's use cases from it. That process is repetitive and tedious, because the use case template is largely fixed and individual cases differ only in a few parameters such as the directory, the case name, and the method. So I figured this process could be fully automated.
Since swagger has a web page, I can extract the key information from it to create the .yml test files automatically, like putting up scaffolding: once the project skeleton is generated, testers only need to design the cases and fill in the parameters.
So I first tried parsing the HTML returned by the swagger homepage, and was disappointed to find it contained no actual data. I guessed ajax was involved, and sure enough, opening the browser console revealed the api-docs request, which returns json. That made the problem simple; no web page parsing is needed.
import re
import os

from requests import Session

template = """
args:
  - {method}
  - {api}
kwargs:
  -
    caseName: {caseName}
    {data_or_params}:
        {data}
validator:
  -
    json:
      successed: True
"""
def auto_gen_cases(swagger_url, project_name):
    """
    Automatically generate yml test case templates from the json data
    returned by swagger
    :param swagger_url:
    :param project_name:
    :return:
    """
    res = Session().request('get', swagger_url).json()
    data = res.get('paths')
    workspace = os.getcwd()
    project_ = os.path.join(workspace, project_name)
    if not os.path.exists(project_):
        os.mkdir(project_)
    for k, v in data.items():
        pa_res = re.split(r'[/]+', k)
        dir, *file = pa_res[1:]
        if file:
            file = ''.join([x.title() for x in file])
        else:
            file = dir
        file += '.yml'
        dirs = os.path.join(project_, dir)
        if not os.path.exists(dirs):
            os.mkdir(dirs)
        os.chdir(dirs)
        # when an api declares several methods, keep only post
        if len(v) > 1:
            v = {'post': v.get('post')}
        for _k, _v in v.items():
            method = _k
            api = k
            caseName = _v.get('description')
            data_or_params = 'params' if method == 'get' else 'data'
            parameters = _v.get('parameters')
            data_s = ''
            try:
                for each in parameters:
                    data_s += each.get('name')
                    data_s += ': \n'
                    data_s += ' ' * 8
            except TypeError:
                # the interface takes no parameters
                data_s += '{}'
            file_ = os.path.join(dirs, file)
            with open(file_, 'w', encoding='utf-8') as fw:
                fw.write(template.format(
                    method=method,
                    api=api,
                    caseName=caseName,
                    data_or_params=data_or_params,
                    data=data_s
                ))
        os.chdir(project_)
Now, when I want to start interface test coverage for a project, as long as the project integrates swagger, the project skeleton can be generated in seconds and testers only need to concentrate on designing the interface test cases. I think this is well worth promoting within a test team, and it is all the more convenient for lazy people like me.
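Usage is a single call; for example (the swagger url and project name are hypothetical):

# generate the yml skeleton for a project from its swagger api-docs endpoint
auto_gen_cases('http://example.com/v2/api-docs', 'example_project')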