Pytest and Allure testing framework: a super-detailed guide with hands-on practice

Framework introduction

This guide uses the latest pytest for unit testing. pytest is the most powerful and currently most popular framework in the industry (number one in Gitee search); it is general-purpose and the vast majority of companies and enterprises can adopt it directly. The in-depth explanations below take you step by step toward the top of interface automation.
Framework stack: Python + requests + pytest + log + allure + yaml + mysql + git + jenkins + DingTalk group report delivery

Pain points solved:

- Single-interface and multi-interface encapsulation are unified; yaml handles dynamic parameter passing across interfaces.
- Request bodies in multiple formats are encapsulated.
- One-click environment switching.
- Login token and session are handled globally.
- Flexible switching between multi-role APIs.
- Token and session timeout handling.
- Data cleanup (via interface or SQL).
- The framework is generic; 95% of companies and enterprises can use it directly.

The most important point: you only need to encapsulate the API once, and the rest of the work can be left to functional testers or people who don't understand code.

1: Introduction, installation, framework structure and execution method of pytest

1. Features

1. Simple, flexible, and easy to use; supports parameterization and skip/xfail handling of test cases.
2. Supports both simple unit tests and complex functional tests; it can also drive UI automation (selenium/appium) and interface automation (pytest + requests).
3. Has many third-party plugins and can be extended with custom ones. Especially useful are allure-pytest (polished HTML test reports) and pytest-xdist (distribution across multiple CPUs).
4. Integrates easily with Jenkins.

2. Installation

Install pytest and the related dependency libraries:

pip install -U pytest            # -U means upgrade
pip install pytest-sugar
pip install pytest-rerunfailures
pip install pytest-xdist
pip install pytest-assume
pip install pytest-html
pip list                         # verify what is installed
pytest -h                        # help

3. pytest framework structure

pytest provides setup/teardown hooks similar to unittest's but more flexible, and it also supports a session scope:

- Module level (setup_module / teardown_module): runs once before and after the whole module; applies to functions defined outside a class.
- Function level (setup_function / teardown_function): runs before and after each function defined outside a class.
- Class level (setup_class / teardown_class): runs only once before and after the class.
- Method level (setup_method / teardown_method): runs before and after every method in the class.

4. Execution method

pytest can be executed as pytest or py.test (from a terminal or command line; PyCharm can also be configured to run tests in pytest mode).

1. pytest -v (verbose: the most detailed output)
2. pytest -v -s filename (-s shows print output)
3. pytest -q (quiet mode)

Collection rules:
1. pytest runs all files of the form test_*.py or *_test.py in the current directory and its subdirectories.
2. Within those files it collects functions starting with test_, classes starting with Test, and their methods starting with test_. (In the classic package layout, every package needs an __init__.py file.)
3. pytest can also execute test cases written with the unittest framework.
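A minimal example of these naming conventions (file and test names are illustrative):

```python
# test_demo.py  -- collected because the file name starts with test_
def test_addition():             # collected: function name starts with test_
    assert 1 + 1 == 2

class TestMath:                  # collected: class name starts with Test
    def test_subtraction(self):  # collected: method name starts with test_
        assert 2 - 1 == 1
```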

2: Pytest - Assert, Skip and Run

1. Pytest - assert, skip and run
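pytest uses plain Python assert statements rather than a family of assertEqual-style methods; a minimal sketch:

```python
# test_assert_demo.py - plain asserts; pytest introspects failures automatically
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5

def test_membership():
    assert "o" in "love", "optional custom failure message"
```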

 

2. skip in mark (skip)
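The screenshots for this part are not reproduced here. As a minimal sketch, the two common forms are the @pytest.mark.skip / @pytest.mark.skipif decorators and calling pytest.skip() inside a test:

```python
import sys
import pytest

@pytest.mark.skip(reason="not implemented yet")
def test_not_ready():
    pass

@pytest.mark.skipif(sys.version_info < (3, 8), reason="requires Python 3.8+")
def test_new_syntax():
    pass

def test_runtime_skip():
    if not sys.platform.startswith("win"):   # e.g. an environment check
        pytest.skip("only runs on Windows")
```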

 

3. xfail in mark (failure)

pytest.xfail()
We have learned that one way to skip a test case is to call the pytest.skip() method inside the test function. pytest.xfail() is somewhat similar to pytest.skip(), except that its meaning is: mark the case as xfail (expected failure); the code after the call is not executed.

As usual, an example: when we call the pytest.xfail() method in a test case, we can optionally pass the reason parameter to state the reason.

# test_Pytest.py
# coding=utf-8

import pytest

class Test_Pytest():

    def test_one(self):
        print("----start------")
        pytest.xfail(reason='this feature is not finished yet')
        print("test_one executed")   # never reached
        assert 1 == 1

    def test_two(self):
        print("test_two executed")
        assert "o" in "love"

    def test_three(self):
        print("test_three executed")
        assert 3 - 2 == 1

if __name__ == "__main__":
    pytest.main(['-s', 'test_Pytest.py'])

The run results show that the code before the pytest.xfail() call executes, while the code after it does not.

This method lets us mark a case as failed directly. When would we do that? When the function is unfinished or has known problems; also, when a case depends on preconditions or prior operations, and those preconditions fail, we can set the case straight to failed: that is xfail.

@pytest.mark.xfail
In addition to pytest.xfail(), there is another way to use xfail: the @pytest.mark.xfail marker. It means the test case is expected to fail, but it does not stop the case from executing. If the case fails, the result is xfail (no extra error details are shown); if it passes, the result is xpass.

Another example: we add the @pytest.mark.xfail marker directly to a test case.

# test_Pytest.py
# coding=utf-8

import pytest

class Test_Pytest():

    @pytest.mark.xfail
    def test_one(self):
        print("test_one executed")
        assert 1 == 2

    def test_two(self):
        print("test_two executed")
        assert "o" in "love"

    def test_three(self):
        print("test_three executed")
        assert 3 - 2 == 1

if __name__ == "__main__":
    pytest.main(['-s', 'test_Pytest.py'])

The run result shows that the marked case did run; because its assertion failed, the result is xfailed, and the failing case details are not displayed the way a normal failure's would be.

If we change the assertion to pass and run again, the case passes but is reported as xpassed rather than passed.

4. Use custom markers to execute only part of the cases

1. mark
In the following cases, mark test_send_http() as webtest:

# content of test_server.py

import pytest

@pytest.mark.webtest
def test_send_http():
    pass # perform some webtest test for your app

def test_something_quick():
    pass

def test_another():
    pass

class TestClass:
    def test_method(self):
        pass

if __name__ == "__main__":
    pytest.main(["-s", "test_server.py", "-m=webtest"])
To run only the tests marked webtest, add the -m option with the value webtest on the command line:

pytest -v -m webtest

If you do not want to execute the cases marked webtest, use "not webtest":

pytest -v -m "not webtest"
import pytest

@pytest.mark.webtest
def test_send_http():
    pass # perform some webtest test for your app
def test_something_quick():
    pass
def test_another():
    pass
class TestClass:
    def test_method(self):
        pass

if __name__ == "__main__":
    pytest.main(["-s", "test_server.py", "-m", "not webtest"])

5. Run selected cases by file name, class name, or method (node IDs)

If you want to run one specific case in a class inside a .py module, for example the test_method case in TestClass: the file name, class name, and the test function/method name together form the case's node ID. Pass the node ID on the command line to run just that case (-v only makes the output verbose).

pytest -v test_server.py::TestClass::test_method

Of course, you can also choose to run the entire class

pytest -v test_server.py::TestClass

You can also select multiple nodes to run, and separate multiple nodes with spaces.

pytest -v test_server.py::TestClass test_server.py::test_send_http

6. Use -k expressions to execute some cases

 

-k matches case names
You can use the -k command-line option to give an expression that matches case names.

pytest -v -k http

You can also run all tests and exclude certain cases based on their names:

pytest -k "not send_http" -v

You can also choose to match "http" and "quick" at the same time

pytest -k "http or quick" -v

3: Pytest - fixture

There are practical details below; fixtures are really powerful.

 

 



Compared with unittest, the biggest leap in pytest is probably the fixture mechanism.

With unittest, setUp and tearDown have to be written in every test class; that is what we call the pre- and post-steps.

Inevitably, the pre- and post-steps of many cases are identical (for example, many cases need to log in first and log out afterwards), so we copy and paste them repeatedly, which inflates the workload and the amount of code and makes everything look cumbersome.

This is where pytest's fixture mechanism makes its debut.

In layman's terms: fixture = setup + teardown.

The convenience: if many cases share the same setup and teardown, I implement it only once, and each case that needs it simply calls it.

1. Mechanism: create a conftest.py file at the same level as the test cases, or in a parent directory of them.
2. Put all setup and teardown code into conftest.py; the test .py files do not need to import the conftest file explicitly.
3. Define a function containing the setup operation + teardown operation.
4. Declare the function as a fixture by adding @pytest.fixture above it (the default scope is function).
5. Fixture return values:
   If there is a return value, write it after yield (here yield plays the role of return).
   In a test case that calls a fixture with a return value, the fixture's function name stands for the return value.
   The function name is passed in as a parameter of the test case.

1. As follows: define a fixture function named open_url with both setup and teardown: the setup opens the link and the teardown quits the browser.

import pytest
from selenium import webdriver

url = "https://www.example.com"   # url is the page address under test

@pytest.fixture(scope="class")    # define the fixture scope
def open_url():
    # setup
    driver = webdriver.Chrome()
    driver.get(url)
    yield driver    # code before yield is the setup; code after it is the teardown
    # teardown
    driver.quit()

In this way we define a fixture called open_url

2. Add @pytest.mark.usefixtures("fixture function name") above the class that needs it.

It directly invokes the setup and teardown defined above.

In the TestLogin class we no longer need to write setup and teardown; we just write the intermediate steps of each case. Very convenient, isn't it?
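The original screenshot is missing; a minimal sketch of what that TestLogin class might look like (the class body is illustrative):

```python
import pytest

@pytest.mark.usefixtures("open_url")
class TestLogin:
    def test_login_page_opened(self):
        # open_url's setup has already run: the browser is open on the page;
        # only the intermediate steps of the case are written here
        assert True
```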

3. Advanced usage: define multiple fixtures in conftest. One fixture can serve as the setup and teardown of another fixture; yield is still what separates the setup from the teardown.

As shown in the picture above, you can see that my class also references a fixture named refresh_page. Go directly to the code:

# refresh the page - the second fixture defined
@pytest.fixture
def refresh_page(open_url):
    yield
    open_url.refresh()

 

open_url is used directly as a dependency of the other fixture, with yield separating setup from teardown. Teardowns run in reverse order of the setups, so after the test case finishes, refresh_page's teardown (the refresh) runs first, and open_url's teardown runs when its class scope ends.
Execution order: code before open_url's yield -> test case code -> code after refresh_page's yield -> code after open_url's yield.
Isn't it elegant? It removes the pain of long chains of interlocking case setup.

4. With multiple fixtures being called like this, many people wonder whether the fixtures conflict with each other.

Of course not: each fixture declares its scope in conftest.py, and pytest determines exactly which scope each fixture applies to.
First, look at how a fixture function is defined in the framework:

 

scope defines the fixture's range of application:
- function: the default scope; the fixture is invoked for every function or method (used when scope is not specified)
- class: invoked once per class
- module: invoked once per .py file (one file can contain several functions and classes)
- session: invoked once across multiple files, i.e. it can span files (each .py file is one module)

Scope breadth: session > module > class > function

Therefore the various fixtures do not conflict with each other when called.
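A minimal sketch showing how often fixtures of different scopes run (fixture names are illustrative):

```python
import pytest

@pytest.fixture(scope="module")
def module_res():
    print("\nmodule setup: runs once per .py file")
    yield
    print("\nmodule teardown")

@pytest.fixture          # scope="function" is the default
def func_res():
    print("\nfunction setup: runs before every test")

def test_a(module_res, func_res):
    pass

def test_b(module_res, func_res):   # module_res is NOT set up again here
    pass
```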

5. Automatic application of fixture autouse

An autouse example. When there are many cases to manage, this approach is convenient and efficient, but use it carefully and always mind the fixture's scope. Note that a fixture applied this way cannot provide a return value. autouse defaults to False; while it is False, a case opts in via the two methods above. When it is set to True, every test automatically invokes the fixture. autouse follows the scope keyword's rules:

- scope="session": runs only once, however it is defined;
- scope="module": runs once per .py file;
- scope="class": runs once per class (but when a file contains both plain functions and classes, it also runs once for each function outside a class);
- scope="function": runs once per function.

Normally, automated cases define some setup fixtures, and a case that needs one simply takes the fixture's function name as a parameter. With many cases, passing that parameter every time becomes tedious.
The fixture's autouse parameter, False by default, can be set to True to apply the fixture automatically, so cases no longer need the parameter.

Set autouse=True: the fixture function start below is invoked automatically. Its scope is module level, so it executes only once in the current .py module. open_home has function scope, so it is invoked automatically before each case.

import pytest

@pytest.fixture(scope="module", autouse=True)
def start(request):
    print("\n---- module starts ------")
    print('module : %s' % request.module.__name__)
    print('------ launch the browser -------')
    yield
    print("------ test finished, end! ----------")

@pytest.fixture(scope="function", autouse=True)
def open_home(request):
    print("function: %s \n-- back to the home page --" % request.function.__name__)

def test_01():
    print('---- case 01 -----')

def test_02():
    print('---- case 02 -----')

if __name__ == '__main__':
    pytest.main(["-s", "autouse.py"])

Execution result:

---- module starts ------
module : autouse
------ launch the browser -------
function: test_01 
-- back to the home page --
.---- case 01 -----
function: test_02 
-- back to the home page --
.---- case 02 -----
------ test finished, end! ----------

4. Parameterization and data-driven framework implementation

Parameterization 1


import pytest


@pytest.mark.parametrize('test_input,expected', [('3+5', 8),
                         ('2-1', 1), ('7*5', 30)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected    # eval turns the string into an expression


test_param.py::test_eval[3+5-8] PASSED [ 33%]
test_param.py::test_eval[2-1-1] PASSED [ 66%]
test_param.py::test_eval[7*5-30] FAILED [100%]
test_param.py:3 (test_eval[7*5-30])
35 != 30

Expected :30
Actual   :35
<Click to see difference>

test_input = '7*5', expected = 30

    @pytest.mark.parametrize('test_input,expected',[('3+5',8),
    ('2-1',1),('7*5',30)])
    def test_eval(test_input,expected):
>       assert eval(test_input)==expected
E       assert 35 == 30          # the IDE hint suggests changing 30 to 35

test_param.py:7: AssertionError

Assertion failed

 Parameterization 2

import pytest

test_user_data = ['linda', 'sai', 'tom']

@pytest.fixture(scope='module')
def login(request):
    user = request.param
    print('open the home page and log in as %s' % user)
    return user


# indirect=True makes pytest execute login as a fixture with each parameter
@pytest.mark.parametrize('login', test_user_data, indirect=True)
def test_cart(login):
    usera = login
    print('user adds to the cart: %s' % usera)
    assert usera != ''

open the home page and log in as linda
PASSED [ 33%]user adds to the cart: linda
open the home page and log in as sai
PASSED [ 66%]user adds to the cart: sai
open the home page and log in as tom
PASSED [100%]user adds to the cart: tom
Process finished with exit code 0

Parameterization 3

import pytest

test_user_data = [
    {'user': 'linda', 'password': '8888'},
    {'user': 'servenruby', 'password': '123456'},
    {'user': 'test01', 'password': ''}
]

@pytest.fixture(scope='module')
def login_r(request):
    # a dict passes a single parameter, but through its keys it works
    # like passing several parameters
    user = request.param['user']
    pwd = request.param['password']
    print('\nopen the home page to log in: user %s, password %s' % (user, pwd))
    if pwd:
        return True
    else:
        return False

# pytest parameterized driving; indirect=True executes login_r as a fixture
@pytest.mark.parametrize('login_r', test_user_data, indirect=True)
def test_cart(login_r):
    # login case
    a = login_r
    print('return value of login_r inside the case: %s' % a)
    assert a, 'failure reason: the password is empty'

open the home page to log in: user linda, password 8888
PASSED [ 33%]return value of login_r inside the case: True

open the home page to log in: user servenruby, password 123456
PASSED [ 66%]return value of login_r inside the case: True

open the home page to log in: user test01, password 
FAILED [100%]return value of login_r inside the case: False

test_mark_param_request2.py:19 (test_cart[login_r2])
login_r = False

    @pytest.mark.parametrize('login_r',test_user_data,indirect=True)
    def test_cart(login_r):
        # login case
        a = login_r
        print('return value of login_r inside the case: %s' % a)
>       assert a, 'failure reason: the password is empty'
E       AssertionError: failure reason: the password is empty
E       assert False

Parameterized 3*3

import pytest

test_user_data1 = [{'user': 'linda', 'password': '888888'},
                   {'user': 'servenruby', 'password': '123456'},
                   {'user': 'test01', 'password': ''}]
test_user_data2 = [{'q': '中国平安', 'count': 3, 'page': 1},
                   {'q': '阿里巴巴', 'count': 2, 'page': 2},
                   {'q': 'pdd', 'count': 3, 'page': 1}]

@pytest.fixture(scope='module')
def login_r(request):
    # receives a single parameter (a dict)
    user = request.param['user']
    pwd = request.param['password']
    print('\nuser: %s, password: %s' % (user, pwd))

@pytest.fixture(scope='module')
def query_param(request):
    q = request.param['q']
    count = request.param['count']
    page = request.param['page']
    print('search keyword: %s' % q)
    return request.param

# pytest data driving; indirect=True executes the fixtures with the data
# stacked parametrize decorators apply bottom-up
# the two data sets are combined, so 3*3 = 9 cases run
# (len(test_user_data1) * len(test_user_data2))
@pytest.mark.parametrize('query_param', test_user_data2, indirect=True)
@pytest.mark.parametrize('login_r', test_user_data1, indirect=True)
def test_login(login_r, query_param):
    # login case
    print(login_r)
    print(query_param)


pytest_mark_request3.py::test_login[login_r1-query_param0] ✓ 44% ████▌ search keyword: pdd
None
{'q': 'pdd', 'count': 3, 'page': 1}

pytest_mark_request3.py::test_login[login_r1-query_param2] ✓ 56% █████▋
user: linda, password: 888888
None
{'q': 'pdd', 'count': 3, 'page': 1}

pytest_mark_request3.py::test_login[login_r0-query_param2] ✓ 67% ██████▋
user: test01, password: 
None
{'q': 'pdd', 'count': 3, 'page': 1}

pytest_mark_request3.py::test_login[login_r2-query_param2] ✓ 78% ███████▊ search keyword: 阿里巴巴
None
{'q': '阿里巴巴', 'count': 2, 'page': 2}

pytest_mark_request3.py::test_login[login_r2-query_param1] ✓ 89% ████████▉ search keyword: 中国平安
None
{'q': '中国平安', 'count': 3, 'page': 1}

pytest_mark_request3.py::test_login[login_r2-query_param0] ✓ 100% ██████████

5. Third-party plug-ins

1. Adjust the execution order of test cases

Scenario: the natural execution order is not what you want, or you need to change it; for example, the case that adds data must run first and the deletion case afterwards. By default pytest collects files in alphabetical order and executes the cases within a file in the order they are defined.
• Solution:
• Install: pip install pytest-ordering
• Add decorators to the test methods:
• @pytest.mark.run(order=n) — runs in the given position
• @pytest.mark.last — runs last

import pytest

@pytest.mark.run(order=1)
def test_01():
    print('test01')

@pytest.mark.run(order=2)
def test_02():
    print('test02')

@pytest.mark.last
def test_06():
    print('test06')

def test_04():
    print('test04')

def test_05():
    print('test05')

@pytest.mark.run(order=3)
def test_03():
    print('test03')

pytest_order.py::test_01 PASSED [ 16%]test01

pytest_order.py::test_02 PASSED [ 33%]test02

pytest_order.py::test_03 PASSED [ 50%]test03

pytest_order.py::test_04 PASSED [ 66%]test04

pytest_order.py::test_05 PASSED [ 83%]test05

pytest_order.py::test_06 PASSED [100%]test06

2. The execution of the use case stops when an error is encountered.

• By default pytest stops only after all cases have run. To stop the run at the first error, use -x; to stop once the number of failing cases reaches n, use --maxfail=n
• Execution:
• pytest -x -v -s filename.py ——— -x stops at the first error
• pytest -v -s filename.py --maxfail=2 ——— stops after two failures

3. Rerun the use case after it fails.

Scenario:
• After a failure the case needs to be rerun n times, with a delay of n seconds between reruns.
• Execution:
• Install: pip install pytest-rerunfailures
• pytest -v --reruns 5 --reruns-delay 1 — retry up to 5 times, waiting 1 second between attempts
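pytest-rerunfailures can also mark individual cases instead of using command-line options; a minimal sketch:

```python
import random
import pytest

@pytest.mark.flaky(reruns=5, reruns_delay=1)
def test_unstable():
    # deliberately flaky: passes roughly half the time
    assert random.random() > 0.5
```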

4. Later assertions still execute after an earlier one fails

pip3 install pytest-assume — execution continues after a failed assertion, but the assertions must be rewritten as follows:

import pytest

@pytest.mark.parametrize(('x', 'y'), [(1, 1), (1, 0), (0, 1)])
def test_assume(x, y):
    pytest.assume(x == y)   # a failing assume does not stop the test
    pytest.assume(3 == 4)
    pytest.assume(5 == 9)

5. Multi-thread parallel and distributed execution

Scenario: there are 1,000 test cases, each takes 1 minute to execute, so one tester needs 1,000 minutes. We usually trade labor cost for time cost; adding a few more people shortens the time: with 10 people executing together it takes only 100 minutes. This is the parallel and distributed testing scenario.
Solution: the pytest distributed execution plugin pytest-xdist, which executes on multiple CPUs or hosts.
Premise: cases are independent, have no ordering, can execute randomly, and can be rerun repeatedly without affecting other cases.
Install: pip3 install pytest-xdist
• Run cases in parallel on multiple CPUs by adding -n, where the number is the parallelism: pytest -n 3. Several terminals can also execute together.

import pytest
import time

@pytest.mark.parametrize('x',list(range(10)))
def test_somethins(x):
    time.sleep(1)

pytest -v -s -n 5 test_xsdist.py  # run 5 at a time

Run the following code, the project structure is as follows

web_conf_py is the project name

│  conftest.py
│  __init__.py
├─baidu
│  │  conftest.py
│  │  test_1_baidu.py
│  │  test_2.py
│  │  __init__.py 
├─blog
│  │  conftest.py
│  │  test_2_blog.py
│  │  __init__.py  

Code reference:

# web_conf_py/conftest.py
import pytest

@pytest.fixture(scope="session")
def start():
    print("\nopen the home page")
    return "yoyo"

# web_conf_py/baidu/conftest.py
import pytest

@pytest.fixture(scope="session")
def open_baidu():
    print("open the baidu page_session")

# web_conf_py/baidu/test_1_baidu.py
import pytest
import time

def test_01(start, open_baidu):
    print("test case test_01")
    time.sleep(1)
    assert start == "yoyo"

def test_02(start, open_baidu):
    print("test case test_02")
    time.sleep(1)
    assert start == "yoyo"

if __name__ == "__main__":
    pytest.main(["-s", "test_1_baidu.py"])


# web_conf_py/baidu/test_2.py
import pytest
import time

def test_06(start, open_baidu):
    print("test case test_06")
    time.sleep(1)
    assert start == "yoyo"

def test_07(start, open_baidu):
    print("test case test_07")
    time.sleep(1)
    assert start == "yoyo"

if __name__ == "__main__":
    pytest.main(["-s", "test_2.py"])


# web_conf_py/blog/conftest.py
import pytest

@pytest.fixture(scope="function")
def open_blog():
    print("open the blog page_function")

# web_conf_py/blog/test_2_blog.py
import pytest
import time

def test_03(start, open_blog):
    print("test case test_03")
    time.sleep(1)
    assert start == "yoyo"

def test_04(start, open_blog):
    print("test case test_04")
    time.sleep(1)
    assert start == "yoyo"

def test_05(start, open_blog):
    '''cross-module call to the conftest under the baidu module'''
    print("test case test_05, cross-module call to baidu")
    time.sleep(1)
    assert start == "yoyo"

if __name__ == "__main__":
    pytest.main(["-s", "test_2_blog.py"])

A normal (serial) run takes 7.12 seconds:

E:\YOYO\web_conf_py>pytest
============================= test session starts =============================
platform win32 -- Python 3.6.0, pytest-3.6.3, py-1.5.4, pluggy-0.6.0
rootdir: E:\YOYO\web_conf_py, inifile:
plugins: xdist-1.23.2, metadata-1.7.0, html-1.19.0, forked-0.2
collected 7 items

baidu\test_1_baidu.py ..                                                 [ 28%]
baidu\test_2.py ..                                                       [ 57%]
blog\test_2_blog.py ...                                                  [100%]

========================== 7 passed in 7.12 seconds ===========================

With the parallelism set to 3, the run takes 3.64 seconds, which greatly shortens the case time.

E:\YOYO\web_conf_py>pytest -n 3
============================= test session starts =============================
platform win32 -- Python 3.6.0, pytest-3.6.3, py-1.5.4, pluggy-0.6.0
rootdir: E:\YOYO\web_conf_py, inifile:
plugins: xdist-1.23.2, metadata-1.7.0, html-1.19.0, forked-0.2
gw0 [7] / gw1 [7] / gw2 [7]
scheduling tests via LoadScheduling
.......                                                                  [100%]
========================== 7 passed in 3.64 seconds ===========================

6. Other interesting plug-ins

I won't go into details here; if they interest you, research them yourself.

7. Use pytest to execute unittest test cases

Executing unittest cases works the same as before. Try not to mix the two styles; pick one and stick with it. I won't go into details here.

8. pytest-html generates reports

pytest-html is a pytest plugin that generates HTML reports of test results. It is compatible with Python 2.7 and 3.6.

pytest-html
1. Source code address on github [ github.com/pytest-dev/…

2.pip installation

$ pip install pytest-html


3.Execution method

$ pytest --html=report.html

html report
1. Open cmd, cd to the directory where you need to execute the pytest use case, and execute the command: pytest --html=report.html


2. After execution, a report file report.html will be generated in the current directory, and the display effect is as follows


Specify the report path
1. Running "pytest --html=report.html" directly generates the report in the same path as the current script. To specify where the report is stored, for example a report folder in the same directory as the script:

pytest --html=./report/report.html


2. To run the cases of one .py file or of a whole folder, add the corresponding argument; for the rules see [pytest documentation 2: case running rules].


Independent display of reports
1. The CSS of the report generated above is a separate file, so the style is lost when the report is shared. To share and display the report by email more faithfully, merge the CSS into the HTML:

$ pytest --html=report.html --self-contained-html

Display Options
By default, all rows in the Results table are expanded except rows with Passed tests.

This behavior can be customized using query parameters: ?collapsed=Passed,XFailed,Skipped.

More functions
1. For more functions, check the official documentation [ github.com/pytest-dev/…

6. Log management and code coverage

1. Application of logging in pytest


2. The meaning of logs and levels

Debug output from automated test cases is very useful: it tells us the current running status, which step is executing, and the corresponding error information. By default pytest does not show everything; for example, a passing test case shows no print output at all. This part introduces how to display all log information in real time in pytest.

1. Use print to output log information
slowTest_print.py

import time

def test_1():
    print('test_1')
    time.sleep(1)
    print('after 1 sec')
    time.sleep(1)
    print('after 2 sec')
    time.sleep(1)
    print('after 3 sec')
    assert 1, 'should pass'

def test_2():
    print('in test_2')
    time.sleep(1)
    print('after 1 sec')
    time.sleep(1)
    print('after 2 sec')
    time.sleep(1)
    print('after 3 sec')
    assert 0, 'failing for demo purposes'

When you run the above, pytest captures all output, holds it until every test case has executed, and then prints only the output of the failing cases; successful cases show no print information. In the results below, if you want to check the running status of test_1(), there is no log to look at: its prints are not displayed.

C:\Users\yatyang\PycharmProjects\pytest_example>pytest -v slowTest_print.py
============================= test session starts =============================
platform win32 -- Python 2.7.13, pytest-3.0.6, py-1.4.32, pluggy-0.4.0 -- C:\Python27\python.exe
cachedir: .cache
metadata: {'Python': '2.7.13', 'Platform': 'Windows-7-6.1.7601-SP1', 'Packages': {'py': '1.4.32', 'pytest': '3.0.6', 'pluggy': '0.4.0'}, 'JAVA_HOME': 'C:\\Program Files (x86)\\Java\\jd
k1.7.0_01', 'Plugins': {'html': '1.14.2', 'metadata': '1.3.0'}}
rootdir: C:\Users\yatyang\PycharmProjects\pytest_example, inifile:
plugins: metadata-1.3.0, html-1.14.2
collected 2 items 

slowTest_print.py::test_1 PASSED
slowTest_print.py::test_2 FAILED

================================== FAILURES ===================================
___________________________________ test_2 ____________________________________

    def test_2():
        print('in test_2')
        time.sleep(1)
        print('after 1 sec')
        time.sleep(1)
        print('after 2 sec')
        time.sleep(1)
        print('after 3 sec')
>       assert 0, 'failing for demo purposes'
E       AssertionError: failing for demo purposes
E       assert 0

slowTest_print.py:22: AssertionError
---------------------------- Captured stdout call -----------------------------
in test_2
after 1 sec
after 2 sec
after 3 sec
===================== 1 failed, 1 passed in 6.45 seconds ======================

C:\Users\yatyang\PycharmProjects\pytest_example>

We can use the '-s' option (or '--capture=no') so that all print output from the tests is shown. However, pytest still waits until all cases have executed before displaying the results. Below, test_1 also displays its print information.


C:\Users\yatyang\PycharmProjects\pytest_example>py.test --capture=no slowTest_print.py
============================= test session starts =============================
platform win32 -- Python 2.7.13, pytest-3.0.6, py-1.4.32, pluggy-0.4.0
metadata: {'Python': '2.7.13', 'Platform': 'Windows-7-6.1.7601-SP1', 'Packages': {'py': '1.4.32', 'pytest': '3.0.6', 'pluggy': '0.4.0'}, 'JAVA_HOME': 'C:\\Program Files (x86)\\Java\\jd
k1.7.0_01', 'Plugins': {'html': '1.14.2', 'metadata': '1.3.0'}}
rootdir: C:\Users\yatyang\PycharmProjects\pytest_example, inifile:
plugins: metadata-1.3.0, html-1.14.2
collected 2 items 

slowTest_print.py test_1
after 1 sec
after 2 sec
after 3 sec
.in test_2
after 1 sec
after 2 sec
after 3 sec
F

================================== FAILURES ===================================
___________________________________ test_2 ____________________________________

    def test_2():
        print('in test_2')
        time.sleep(1)
        print('after 1 sec')
        time.sleep(1)
        print('after 2 sec')
        time.sleep(1)
        print('after 3 sec')
>       assert 0, 'failing for demo purposes'
E       AssertionError: failing for demo purposes
E       assert 0

slowTest_print.py:22: AssertionError
===================== 1 failed, 1 passed in 6.17 seconds ======================

2. Python logging usage
Generally, while debugging a program, especially a large one, we have it output information so we can understand its running status from that information. Python provides the logging module, which lets us save all the information we want into a log file for easy viewing.

import logging

logging.debug('This is debug message')
logging.info('This is info message')
logging.warning('This is warning message')

Printed on the screen:
WARNING:root:This is warning message
By default, logging prints the log to the screen and the log level is WARNING; the level
order is: CRITICAL > ERROR > WARNING > INFO > DEBUG > NOTSET, and you can also define your own log levels.

3. Use logging instead of print in pytest
Now let's look at the difference when the pytest test cases use logging output instead of print.
slowTest_logging.py

import time
import logging

logging.basicConfig(level=logging.DEBUG)

def test_1():
    log = logging.getLogger('test_1')
    time.sleep(1)
    log.debug('after 1 sec')
    time.sleep(1)
    log.debug('after 2 sec')
    time.sleep(1)
    log.debug('after 3 sec')
    assert 1, 'should pass'


def test_2():
    log = logging.getLogger('test_2')
    time.sleep(1)
    log.debug('after 1 sec')
    time.sleep(1)
    log.debug('after 2 sec')
    time.sleep(1)
    log.debug('after 3 sec')
    assert 0, 'failing for demo purposes'

The results are below; the log display is more readable. However, pytest still waits until everything has finished running before outputting it to the screen, so there is no way to watch the real-time status. For example, when testing a new build of unknown quality with a large number of cases, the tester would have to wait, even though execution could perhaps be stopped early if the first tests fail. So how do we get real-time display? See method 4.

C:\Users\yatyang\PycharmProjects\pytest_example>pytest slowTest_logging.py
============================= test session starts =============================
platform win32 -- Python 2.7.13, pytest-3.0.6, py-1.4.32, pluggy-0.4.0
metadata: {'Python': '2.7.13', 'Platform': 'Windows-7-6.1.7601-SP1', 'Packages': {'py': '1.4.32', 'pytest': '3.0.6', 'pluggy': '0.4.0'}, 'JAVA_HOME': 'C:\\Program Files (x86)\\Java\\jd
k1.7.0_01', 'Plugins': {'html': '1.14.2', 'metadata': '1.3.0'}}
rootdir: C:\Users\yatyang\PycharmProjects\pytest_example, inifile:
plugins: metadata-1.3.0, html-1.14.2
collected 2 items 

slowTest_logging.py .F

================================== FAILURES ===================================
___________________________________ test_2 ____________________________________

    def test_2():
        log = logging.getLogger('test_2')
        time.sleep(1)
        log.debug('after 1 sec')
        time.sleep(1)
        log.debug('after 2 sec')
        time.sleep(1)
        log.debug('after 3 sec')
>       assert 0, 'failing for demo purposes'
E       AssertionError: failing for demo purposes
E       assert 0

slowTest_logging.py:25: AssertionError
---------------------------- Captured stderr call -----------------------------
DEBUG:test_2:after 1 sec
DEBUG:test_2:after 2 sec
DEBUG:test_2:after 3 sec
===================== 1 failed, 1 passed in 6.37 seconds ======================
C:\Users\yatyang\PycharmProjects\pytest_example>pytest -s slowTest_logging.py
============================= test session starts =============================
platform win32 -- Python 2.7.13, pytest-3.0.6, py-1.4.32, pluggy-0.4.0
metadata: {'Python': '2.7.13', 'Platform': 'Windows-7-6.1.7601-SP1', 'Packages': {'py': '1.4.32', 'pytest': '3.0.6', 'pluggy': '0.4.0'}, 'JAVA_HOME': 'C:\\Program Files (x86)\\Java\\jd
k1.7.0_01', 'Plugins': {'html': '1.14.2', 'metadata': '1.3.0'}}
rootdir: C:\Users\yatyang\PycharmProjects\pytest_example, inifile:
plugins: metadata-1.3.0, html-1.14.2
collected 2 items 

slowTest_logging.py DEBUG:test_1:after 1 sec
DEBUG:test_1:after 2 sec
DEBUG:test_1:after 3 sec
.DEBUG:test_2:after 1 sec
DEBUG:test_2:after 2 sec
DEBUG:test_2:after 3 sec
F

================================== FAILURES ===================================
___________________________________ test_2 ____________________________________

    def test_2():
        log = logging.getLogger('test_2')
        time.sleep(1)
        log.debug('after 1 sec')
        time.sleep(1)
        log.debug('after 2 sec')
        time.sleep(1)
        log.debug('after 3 sec')
>       assert 0, 'failing for demo purposes'
E       AssertionError: failing for demo purposes
E       assert 0

slowTest_logging.py:25: AssertionError
===================== 1 failed, 1 passed in 6.18 seconds ======================

4. pytest uses logging plus --capture=no to output log information in real time
Run the following yourself; you will see the program output the execution status of the current test case in real time.

C:\Users\yatyang\PycharmProjects\pytest_example>pytest -s slowTest_logging.py
============================= test session starts =============================
platform win32 -- Python 2.7.13, pytest-3.0.6, py-1.4.32, pluggy-0.4.0
metadata: {'Python': '2.7.13', 'Platform': 'Windows-7-6.1.7601-SP1', 'Packages': {'py': '1.4.32', 'pytest': '3.0.6', 'pluggy': '0.4.0'}, 'JAVA_HOME': 'C:\\Program Files (x86)\\Java\\jd
k1.7.0_01', 'Plugins': {'html': '1.14.2', 'metadata': '1.3.0'}}
rootdir: C:\Users\yatyang\PycharmProjects\pytest_example, inifile:
plugins: metadata-1.3.0, html-1.14.2
collected 2 items 

slowTest_logging.py DEBUG:test_1:after 1 sec
DEBUG:test_1:after 2 sec
DEBUG:test_1:after 3 sec
.DEBUG:test_2:after 1 sec
DEBUG:test_2:after 2 sec
DEBUG:test_2:after 3 sec
F

================================== FAILURES ===================================
___________________________________ test_2 ____________________________________

    def test_2():
        log = logging.getLogger('test_2')
        time.sleep(1)
        log.debug('after 1 sec')
        time.sleep(1)
        log.debug('after 2 sec')
        time.sleep(1)
        log.debug('after 3 sec')
>       assert 0, 'failing for demo purposes'
E       AssertionError: failing for demo purposes
E       assert 0

slowTest_logging.py:25: AssertionError
===================== 1 failed, 1 passed in 6.20 seconds ======================

5. Summary
When writing automated test cases it is well worth adding useful log information. During initial debugging, accurate debug information pinpoints a problem as soon as one appears; later, during stable operation, it makes the suite easy for other testers to run. So do pay attention to the debug output of your test cases.
Through this article you should know how to use pytest, logging, and --capture=no to output all log information in real time while running test cases.

3. Code coverage-mostly used in unit testing

1. Part one (pytest-cov):
pytest-cov is a pytest plugin; in essence it also uses the Python coverage library to count code coverage. The following is for understanding only: real projects usually exercise services through API calls, so real-project usage is more complex and will be explained another time.

Also note: this coverage is statement coverage; it cannot interpret your logic. Its real meaning has to be judged together with the project itself, so the bare coverage number is not very convincing; don't chase it blindly.
Generally speaking:
path coverage > decision coverage > statement coverage
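A small illustration of why statement coverage is the weakest of the three (function and test are illustrative):

```python
def withdraw(balance, amount):
    if amount <= balance:
        balance -= amount
    return balance

def test_withdraw():
    # this single test executes every statement (100% statement coverage),
    # yet the False branch of the if (amount > balance) is never exercised
    assert withdraw(100, 30) == 70
```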

Install

pip install pytest-cov

After installation, running py.test -h shows the extra section below, which indicates the install succeeded:

coverage reporting with distributed testing support

Example:
Create three new files: cau.py and test_conver.py in the same directory, code; the run.py file is in the parent directory, pp.
The code relationship is as follows.

1. Create a new function file cau.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
def cau(type, n1, n2):
    if type == 1:
        a = n1 + n2
    elif type == 2:
        a = n1 - n2
    else:
        a = n1 * n2
    return a

2. Create a new test_conver.py test file:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
from code.cau import cau

class Test_cover:
    def test_add(self):
        a = cau(1, 2, 3)   # type 1 adds: 2 + 3 = 5
        assert a == 5

3. Create a new execution script run.py

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import pytest

if __name__ == '__main__':
    # run the cases under the given directory with coverage
    pytest.main(["--cov=./code/", "--cov-report=html", "--cov-config=./code/.coveragerc"])

Note: the --cov parameter takes the directory under test (in testing, a specific file cannot be specified); the program code and the test script must be under the same tree. --cov-report=html generates the report; just run python run.py.

.coveragerc means skipping the coverage measurement of certain scripts; here it skips the test_cover.py file and the __init__ files.

The content is as follows:

[run]
omit =
    tests/*
    */__init__.py
    */test_cover.py

After the result is generated, you can open index.html directly.

You can see the execution status below: green means executed, red means not executed. Check against the code logic yourself and you can conclude that the result is correct.

 

2. Part two (coverage.py API)
pytest-cov cannot count the coverage of test scripts that call a service through an API, yet most projects exercise services exactly that way. So we additionally use the coverage.py API for the statistics.
When you install pytest-cov, the coverage library is installed along with it.

Service startup.
To have the code scanned, the coverage configuration must be inserted where the service starts.
I start Flask here, so I add it to the Flask startup code as follows:

from coverage import Coverage

if __name__ == '__main__':
    cov = Coverage()
    cov.start()          # start measuring the code
    print("starting")
    app.run(debug=True, host='0.0.0.0', port=9098)  # originally this was the only line
    cov.stop()           # stop recording (reached after the server shuts down)
    print("stopped")
    cov.save()           # saved into .coverage
    print("saved")
    cov.html_report()    # generate the HTML report

 

Originally we started the service with python xx.py; that is no longer enough.
It has to be changed to the following, where source is the directory and xx.py the executable file:

coverage run --source='/xxx/' xx.py

Start it like that, and if the automation runs normally you can see the incoming requests, which shows your script and service are fine.

After Ctrl-C stops the script, "save" is printed at the end. If instead "Coverage.py warning: No data was collected. (no-data-collected)" appears, the service was started the wrong way and coverage never reached your code.

Report generation
Enter the following command:

coverage report

Then, as the last step, generate the HTML:

coverage html

This saves the HTML files.

Open the export in a browser, click a specific file, and look at what ran: the green lines executed. The catch: you may find code that should have executed but did not, so it is hard to say how accurate the coverage data is.

4. allure test reporting framework

Pytest + allure also combines with Jenkins. It is very simple and I believe everyone can do it; if not, see my other blog post about continuous integration.
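For local runs, the usual flow (assuming the allure command-line tool is installed) is to write raw results with --alluredir and then render them:

```
pytest -s -q --alluredir ./report/xml                    # write raw allure results
allure generate ./report/xml -o ./report/html --clean    # build the HTML report
allure serve ./report/xml                                # or open it directly in a browser
```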

5. Customized reports


Customized report decorators:
- Feature: marks the main functional module
- Story: marks a branch function under a Feature
- Severity: marks the importance level of a test case
- Step: marks the key steps of a test case
- Issue and TestCase: mark the related issue and test case, and can carry a URL

1. Detailed explanation of Features customization

# -*- coding: utf-8 -*-
# @Time    : 2018/8/17 上午10:10
# @Author  : WangJuan
# @File    : test_case.py
import allure
import pytest


@allure.feature('test_module_01')
def test_case_01():
    """
    Case description: Test case 01
    """
    assert 0
    
@allure.feature('test_module_02')
def test_case_02():
    """
    Case description: Test case 02
    """
    assert 0 == 0
    
    
if __name__ == '__main__':
    pytest.main(['-s', '-q', '--alluredir', './report/xml'])

Add feature, and the Report display is shown in the figure below.

2. Detailed explanation of Story customization

# -*- coding: utf-8 -*-
# @Time    : 2018/8/17 上午10:10
# @Author  : WangJuan
# @File    : test_case.py
import allure
import pytest


@allure.feature('test_module_01')
@allure.story('test_story_01')
def test_case_01():
    """
    Case description: Test case 01
    """
    assert 0

@allure.feature('test_module_01')
@allure.story('test_story_02')
def test_case_02():
    """
    Case description: Test case 02
    """
    assert 0 == 0


if __name__ == '__main__':
    pytest.main(['-s', '-q', '--alluredir', './report/xml'])

 

Add story and Report display as shown below.


3. Detailed explanation of use case title and use case description customization

# -*- coding: utf-8 -*-
# @Time    : 2018/8/17 上午10:10
# @Author  : WangJuan
# @File    : test_case.py
import allure
import pytest

@allure.feature('test_module_01')
@allure.story('test_story_01')
# test_case_01 is the case title
def test_case_01():
    """
    Case description: this is the case description for Test case 01
    """
    # the docstring serves as the case description
    assert 0

if __name__ == '__main__':
    pytest.main(['-s', '-q', '--alluredir', './report/xml'])

Add the use case title and use case description. The Report display is shown in the figure below.


4. Detailed explanation of Severity customization
Allure defines the severity levels as:
1. Blocker: a blocking defect (the client does not respond and cannot proceed to the next step)
2. Critical: a critical defect (a function point is missing)
3. Normal: an ordinary defect (e.g. a numeric calculation error)
4. Minor: a minor defect (an interface error that does not meet UI requirements)
5. Trivial: a trivial defect (a required field lacks a prompt, or the prompt is irregular)

# -*- coding: utf-8 -*-
# @Time    : 2018/8/17 上午10:10
# @Author  : WangJuan
# @File    : test_case.py
import allure
import pytest


@allure.feature('test_module_01')
@allure.story('test_story_01')
@allure.severity('blocker')
def test_case_01():
    """
    Case description: Test case 01
    """
    assert 0

@allure.feature('test_module_01')
@allure.story('test_story_01')
@allure.severity('critical')
def test_case_02():
    """
    Case description: Test case 02
    """
    assert 0 == 0

@allure.feature('test_module_01')
@allure.story('test_story_02')
@allure.severity('normal')
def test_case_03():
    """
    Case description: Test case 03
    """
    assert 0

@allure.feature('test_module_01')
@allure.story('test_story_02')
@allure.severity('minor')
def test_case_04():
    """
    Case description: Test case 04
    """
    assert 0 == 0

    
if __name__ == '__main__':
    pytest.main(['-s', '-q', '--alluredir', './report/xml'])

Add Severity, and the Report display is shown in the figure below.


5. Detailed explanation of Step customization

# -*- coding: utf-8 -*-
# @Time    : 2018/8/17 上午10:10
# @Author  : WangJuan
# @File    : test_case.py
import allure
import pytest

@allure.step("字符串相加:{0},{1}")     
# 测试步骤,可通过format机制自动获取函数参数
def str_add(str1, str2):
    if not isinstance(str1, str):
        return "%s is not a string" % str1
    if not isinstance(str2, str):
        return "%s is not a string" % str2
    return str1 + str2

@allure.feature('test_module_01')
@allure.story('test_story_01')
@allure.severity('blocker')
def test_case():
    str1 = 'hello'
    str2 = 'world'
    assert str_add(str1, str2) == 'helloworld'


if __name__ == '__main__':
    pytest.main(['-s', '-q', '--alluredir', './report/xml'])

Add Step and Report display as shown below.
 



6. Detailed explanation of Issue and TestCase customization

# -*- coding: utf-8 -*-
# @Time    : 2018/8/17 上午10:10
# @Author  : WangJuan
# @File    : test_case.py
import allure
import pytest


@allure.step("字符串相加:{0},{1}")     # 测试步骤,可通过format机制自动获取函数参数
def str_add(str1, str2):
    print('hello')
    if not isinstance(str1, str):
        return "%s is not a string" % str1
    if not isinstance(str2, str):
        return "%s is not a string" % str2
    return str1 + str2

@allure.feature('test_module_01')
@allure.story('test_story_01')
@allure.severity('blocker')
@allure.issue("http://www.baidu.com")
@allure.testcase("http://www.testlink.com")
def test_case():
    str1 = 'hello'
    str2 = 'world'
    assert str_add(str1, str2) == 'helloworld'


if __name__ == '__main__':
    pytest.main(['-s', '-q', '--alluredir', './report/xml'])

Add Issue and TestCase, and the Report display is shown in the figure below.
 



8. Detailed explanation of attach customization

 file = open('../test.png', 'rb').read()
 allure.attach('test_img', file, allure.attach_type.PNG)

Add attachments to the report with allure.attach('arg1', 'arg2', 'arg3'):
arg1: the attachment name shown in the report
arg2: the content of the attachment
arg3: the attachment type (supported: HTML, JPG, PNG, JSON, OTHER, TEXT, XML)
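The snippet above uses the older allure-pytest 1.x signature; current allure-pytest versions take the arguments as allure.attach(body, name, attachment_type). A sketch of the same attachment with the newer API:

```python
import allure

with open('../test.png', 'rb') as f:
    allure.attach(f.read(), 'test_img', allure.attachment_type.PNG)
```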

Add the attach parameter and the Report display is shown in the figure below.


6. pytest runs the specified use case

As software functions grow, there are more and more modules, and therefore more and more cases. To save execution time and obtain test reports and results quickly, you can run only specified cases at work.

Example directory
 


spec_sub1_modul_test.py

#coding: UTF-8
import pytest

def test_004_spec():
    assert 1==1
def test_005_spec():
    assert True==False
    
class Test_Class():
    def test_006_spec(self):
        assert 'G' in "Goods"

spec_sub2_modul_test.py

#coding: UTF-8
import pytest

def test_007_spec():
    assert 1==1
def test_008_spec():
    assert True==False
    
class Test_Class():
    def test_009_spec(self):
        assert 'G' in "Goods"

spec_001_modul_test.py

#coding: UTF-8
import pytest

def test_001_spec():
    assert 1==1
def test_002_spec():
    assert True==False
    
class Test_Class():
    def test_003_spec(self):
        assert 'H' in "Hell,Jerry"

Run the specified module

if __name__ == '__main__':
    pytest.main(["-v", "-s", "spec_001_modul_test.py"])

Run batch folders (run all use cases in the current folder including subfolders)

#coding: UTF-8
import pytest
if __name__ == '__main__':
    pytest.main(["-v", "-s", "./"])

Run the specified folder (all use cases under the subpath1 directory)

#coding: UTF-8
import pytest
if __name__ == '__main__':
    pytest.main(["-v", "-s", "subpath1/"])

Run the specified use case in the module (test_001_spec use case in the running module)

if __name__ == '__main__':
    pytest.main(["-v", "-s", "spec_001_modul_test.py::test_001_spec"])

Run the use case specified in class (run the Test_Class class test_003_spec method in the module)

if __name__ == '__main__':
    pytest.main(["-v", "-s", "spec_001_modul_test.py::Test_Class::test_003_spec"])

Fuzzy matching: run the cases whose names match an expression (within the directories below)

if __name__ == '__main__':
    # run the cases in spec_001_modul_test whose names contain "spec"
    pytest.main(["-v", "-s", "-k", "spec", "spec_001_modul_test.py"])
    # run the cases in the current folder matching Test_Class, i.e. the cases under that class
    pytest.main(["-s", "-v", "-k", "Test_Class"])

7. Conduct a certain range of tests according to the level of importance.

This marker identifies the level of a test case or test class; there are five levels: blocker, critical, normal, minor, and trivial. Let's mark the test cases by level and view the test report.
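allure-pytest can then run only the cases in a given severity range from the command line; a sketch (the option comes from the allure-pytest plugin):

```
pytest --alluredir ./report/xml --allure-severities normal,critical
```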


8. Add detailed descriptions for tests: @allure.description, @allure.title

1. Title (case title)

The case title can be customized; it defaults to the function name.

@allure.title

# -*- coding: utf-8 -*-
# @Time    : 2019/3/12 11:46
# @Author  : zzt

import allure
import pytest


@allure.title("用例标题0")
def test_0():
    pass

@allure.title("用例标题1")
def test_1():
    pass


def test_2():
    pass

Execution effect:


2. Description

A detailed description can be added to a test to give the report reader as much context as needed.

Two ways: @allure.description is a decorator that takes the description string;
@allure.description_html supplies HTML for the description part of the test case (to be studied).

# -*- coding: utf-8 -*-
# @Time    : 2019/3/12 11:46
# @Author  : zzt

import allure
import pytest

@allure.title("用例标题0")
@allure.description("这里是对test_0用例的一些详细说明")
def test_0():
    pass

@allure.title("用例标题1")
def test_1():
    pass

@allure.title("用例标题2")
def test_2():
    pass


9. Links: @allure.link, @allure.issue, @allure.testcase



@allure.link @allure.issue @allure.testcase

# -*- coding: utf-8 -*-
# @Time    : 2019/3/12 11:46
# @Author  : zzt

import allure
import pytest

@allure.feature('This is a first-level label')
class TestAllure():

    @allure.title("Case title 0")
    @allure.story("This is the first second-level label")
    @pytest.mark.parametrize('param', ['bronze', 'silver', 'gold'])
    def test_0(self, param):
        allure.attach('The attachment content is: ' + param, 'attachment name', allure.attachment_type.TEXT)

    @allure.title("Case title 1")
    @allure.story("This is the second second-level label")
    def test_1(self):
        allure.attach.file(r'E:\Myproject\pytest-allure\test\test_1.jpg', 'name of the attached screenshot', attachment_type=allure.attachment_type.JPG)

    @allure.title("Case title 2")
    @allure.story("This is the third second-level label")
    @allure.issue('http://baidu.com', name='click me to open Baidu')
    @allure.testcase('http://bug.com/user-login-Lw==.html', name='click me to open ZenTao')
    def test_2(self):
        pass

The execution results are as follows:


7. Applying pytest and allure to automated test execution

1. Unit test report display


2. Write the driver in conftest with session scope, and use addfinalizer to close the browser after the tests; a sketch follows.
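The original screenshot is missing; a minimal sketch of such a conftest.py (selenium assumed):

```python
# conftest.py - session-scoped driver, closed via addfinalizer
import pytest
from selenium import webdriver

@pytest.fixture(scope="session")
def driver(request):
    drv = webdriver.Chrome()

    def fin():
        drv.quit()        # close the browser once the whole session ends

    request.addfinalizer(fin)
    return drv
```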


3. Front-end automated testing - Baidu search function practical demonstration
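The demonstration itself was a screenshot; as a hedged sketch of the Baidu search case (kw and su are Baidu's search-box and button element IDs), reusing the driver fixture above:

```python
# test_baidu_search.py - illustrative search case
import time
import allure
from selenium.webdriver.common.by import By

@allure.feature('baidu search')
def test_baidu_search(driver):
    driver.get("https://www.baidu.com")
    driver.find_element(By.ID, "kw").send_keys("pytest")
    driver.find_element(By.ID, "su").click()
    time.sleep(1)                      # crude wait for the results page
    assert "pytest" in driver.title
    allure.attach(driver.get_screenshot_as_png(), "search result",
                  allure.attachment_type.PNG)
```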



Reports can display many different types of attachments to supplement tests, steps, etc.

allure.attach(body, name, attachment_type, extension)

body - the original content of the file to be written.

name - a string containing the file name

attachment_type - one of allure.attachment_type values

extension - the provided extension that will be used to create the file

or allure.attach.file(source, name, attachment_type, extension)

source - a string containing the file path.

# -*- coding: utf-8 -*-
# @Time    : 2019/3/12 11:46
# @Author  : zzt

import allure
import pytest

@allure.feature('This is a first-level label')
class TestAllure():

    @allure.title("Case title 0")
    @allure.story("This is the first second-level label")
    @pytest.mark.parametrize('param', ['bronze', 'silver', 'gold'])
    def test_0(self, param):
        allure.attach('The attachment content is: ' + param, 'attachment name', allure.attachment_type.TEXT)

    @allure.title("Case title 1")
    @allure.story("This is the second second-level label")
    def test_1(self):
        allure.attach.file(r'E:\Myproject\pytest-allure\test\test_1.jpg', 'name of the attached screenshot', attachment_type=allure.attachment_type.JPG)

    @allure.title("Case title 2")
    @allure.story("This is the third second-level label")
    @allure.severity(allure.severity_level.NORMAL)
    def test_2(self):
        pass

The execution results are as follows:
 


4. Source code: Github: github.com/linda883/py…

5. CI/CD uses jenkins for continuous integration

I believe everyone knows how to integrate with Jenkins, so I won't cover it here; you can also read my continuous-integration blog post.
 

