The star of the unit-testing world, Pytest Framework (3): Use Case Marking and Test Execution

pytest use case marking and test execution

In the previous article, we introduced pytest's setup/teardown methods and its fixture mechanism. This article covers pytest's marking mechanism and the ways test cases can be executed. With marks, pytest can pass data into test functions and filter which use cases run. Let's get straight to the topic.

1. Built-in marks in pytest

Marks in pytest are applied with the @pytest.mark.<mark name> decorator. pytest has many built-in marks to cover a variety of test scenarios.

1.1. pytest.mark.parametrize: parameterizing use cases

parametrize separates the test data from the test logic and automatically generates one test case per data item.

Demo:

import pytest

@pytest.mark.parametrize('item', [11, 22, 33, 44, 55, 66])
def test_demo(item):
    assert item > 50
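parametrize can also bind several arguments per case; each tuple in the list then becomes one generated test. A minimal sketch (the test name and data below are illustrative, not from the original article):

```python
import pytest

# Each (a, b, expected) tuple generates one test case,
# reported as test_add[1-2-3], test_add[2-3-5], ...
@pytest.mark.parametrize('a,b,expected', [(1, 2, 3), (2, 3, 5)])
def test_add(a, b, expected):
    assert a + b == expected
```

The decorated function remains an ordinary callable; pytest only reads the attached mark at collection time.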

1.2. pytest.mark.skip: Skip use case execution

Use cases decorated with skip will be unconditionally skipped during execution.

Parameter reason: the reason for skipping the test function.

Demo

import pytest

# Without a skip reason
@pytest.mark.skip
def test_demo():
    assert 10 > 50

# With a skip reason
@pytest.mark.skip(reason='no need to run')
def test_demo():
    assert 10 > 50

1.3. pytest.mark.skipif: skip use cases based on conditions

skipif decides whether to skip the test case based on a condition: if the condition evaluates to True, execution of the test function is skipped.

Parameter condition: the condition under which to skip.

Parameter reason: the reason for skipping.

Demo

import pytest

a = 10

# a > 20 is False here, so the case is NOT skipped and will run
@pytest.mark.skipif(a > 20, reason='a is greater than 20, skip')
def test_demo():
    assert a > 50

1.4. pytest.mark.xfail: mark use cases that are expected to fail

xfail can mark a test function as a test case that is expected to fail.

Parameter condition: the condition (True/False) for marking the test function as xfail.

Parameter reason: the reason the test function is marked as xfail.

Parameter raises: the exception type(s) expected to cause the failure.

Parameter run: whether the test function is actually executed; if False, the function is always reported as xfail and is not run.

Parameter strict: strict mode (True/False).

Demo

import pytest

a = 10

# a > 20 is False, so the xfail mark does not apply and the
# failing assertion is reported as a normal failure
@pytest.mark.xfail(a > 20, reason='expected to fail', raises=AssertionError)
def test_demo():
    assert a > 50
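The behaviour of skip, skipif and xfail can be observed together by running pytest programmatically on a small throwaway module. The sketch below (file name and test bodies are made up for illustration) writes one to a temporary directory and checks the overall exit code:

```python
import pathlib
import tempfile
import textwrap

import pytest

# A throwaway test module exercising skip, skipif and xfail
code = textwrap.dedent("""
    import pytest

    @pytest.mark.skip(reason='no need to run')
    def test_skipped():
        assert False  # never executed

    @pytest.mark.skipif(10 > 20, reason='condition is False, so this runs')
    def test_runs():
        assert True

    @pytest.mark.xfail(raises=AssertionError, reason='known to fail')
    def test_expected_failure():
        assert 10 > 50
""")

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / 'test_marks_demo.py'
    path.write_text(code)
    # Run pytest in-process on just this file
    exit_code = pytest.main(['-q', str(path)])

# Skipped and xfailed cases do not fail the run, so the exit code is 0
print(int(exit_code))
```

The run reports one passed, one skipped and one xfailed case, and none of them counts as a failure.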

1.5. pytest.mark.usefixtures: Set test fixtures for test classes or modules

The usefixtures mark is typically used to apply a test fixture to all test methods in a test class at once.

Demo

import pytest

@pytest.fixture
def my_fixture():
    print('----fixture: my_fixture------')

# Every test case in the TestDome class runs with the my_fixture fixture
@pytest.mark.usefixtures('my_fixture')
class TestDome:

    def test_01(self):
        print('----test case: test_01------')

    def test_02(self):
        print('----test case: test_02------')

2. Custom marks

pytest supports registering custom marks through the pytest.ini file. The registered marks can then be used to filter which use cases are executed.

2.1. Registering marks

The syntax for registering marks in the pytest.ini file is as follows:

[pytest]
markers =
    mark1
    mark2
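For example, the two marks used later in this article could be registered like this; the description after the colon is optional (the wording here is illustrative):

```ini
[pytest]
markers =
    musen: cases maintained by musen
    yuze: cases maintained by yuze
```

Adding addopts = --strict-markers to the same file makes pytest treat any unregistered mark as an error instead of a warning.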

2.2. Marking a function

Demo:

import pytest

# Apply a mark to a case: @pytest.mark.<mark name>
@pytest.mark.main
def test_demo():
    pass

2.3. Marking a class

Demo:

import pytest

# Option 1: apply the mark directly on the class
@pytest.mark.main
class TestClass(object):
    def test_demo1(self):
        assert 10 > 20

# Option 2: the class attribute pytestmark can attach several marks at once
# (the second mark name, demo, is only an example)
class TestClass(object):
    pytestmark = [pytest.mark.main, pytest.mark.demo]

    def test_demo1(self):
        assert 10 > 20
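Either way, the marks end up stored on the object's pytestmark attribute, which is what pytest reads when filtering with -m. A quick way to inspect this (a sketch, not part of the original article):

```python
import pytest

# Marks applied with @pytest.mark.<name> are recorded in the object's
# pytestmark attribute as Mark objects
@pytest.mark.main
class TestClass(object):
    def test_demo1(self):
        assert 10 < 20

print([m.name for m in TestClass.pytestmark])  # ['main']
```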

3. Filtering use case execution by marks

Demo: The existing use cases are as follows:

import pytest

@pytest.mark.yuze
@pytest.mark.musen
def test_01():
    print("用例一")

def test_02():
    print("用例二")

@pytest.mark.musen
def test_03():
    print("用例三")

@pytest.mark.musen
def test_04():
    print("用例四")

@pytest.mark.yuze
def test_05():
    print("用例五")

@pytest.mark.yuze
def test_06():
    print("用例六")

The demo above contains 6 test cases, marked with pytest.mark.yuze and pytest.mark.musen respectively. Next, let's look at how to select which test cases run by filtering on marks.

3.1. Filter by single mark

Syntax: pytest -m 'mark name'

Demo: pytest -m musen

The execution results are as follows:

========================== test session starts ==========================
platform win32 -- Python 3.7.3, pytest-5.4.2, py-1.8.0, pluggy-0.13.0
rootdir: C:\project\, inifile: pytest.ini
plugins: allure-pytest-2.8.15, Faker-8.11.0, metadata-1.9.0, parallel-0.0.8, repeat-0.8.0, rerunfailures-9.0, testreport-1.1.2
collected 6 items / 3 deselected / 3 selected                                                                                                               
test_mode.py ...      [100%]
========================== 3 passed, 3 deselected in 0.29s ========================== 

You can see from the output that 3 use cases were executed and 3 were deselected.

3.2. Select multiple markers at the same time

Syntax: pytest -m "mark1 or mark2"

Command: pytest -m "musen or yuze"

Execute use cases marked by musen or yuze. The execution results are as follows:

========================== test session starts ==========================
platform win32 -- Python 3.7.3, pytest-5.4.2, py-1.8.0, pluggy-0.13.0
rootdir: C:\project\, inifile: pytest.ini
plugins: allure-pytest-2.8.15, Faker-8.11.0, metadata-1.9.0, parallel-0.0.8, repeat-0.8.0, rerunfailures-9.0, testreport-1.1.2
collected 6 items / 1 deselected / 5 selected                                                                                                               
test_mode.py .....      [100%]
========================== 5 passed, 1 deselected in 0.29s ========================== 

As the results show, every case carrying either the musen or the yuze mark was selected; only the unmarked test_02 was deselected.

Syntax: pytest -m "mark1 and mark2"

Command: pytest -m "musen and yuze"

Execute the use cases that carry both the musen and yuze marks at the same time. The execution results are as follows:

========================== test session starts ==========================
platform win32 -- Python 3.7.3, pytest-5.4.2, py-1.8.0, pluggy-0.13.0
rootdir: C:\project\, inifile: pytest.ini
plugins: allure-pytest-2.8.15, Faker-8.11.0, metadata-1.9.0, parallel-0.0.8, repeat-0.8.0, rerunfailures-9.0, testreport-1.1.2
collected 6 items / 5 deselected / 1 selected                                                                                                               
test_mode.py .      [100%]
========================== 1 passed, 5 deselected in 0.29s ========================== 
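The selection counts above can be modelled with simple set operations over each case's marks (a toy illustration of the behaviour, not pytest's real expression engine):

```python
# Marks carried by each of the six demo cases
cases = {
    'test_01': {'yuze', 'musen'},
    'test_02': set(),
    'test_03': {'musen'},
    'test_04': {'musen'},
    'test_05': {'yuze'},
    'test_06': {'yuze'},
}

def count_selected(marks, mode='or'):
    """Count the cases a -m expression would select in this toy model."""
    wanted = set(marks)
    if mode == 'and':
        # every requested mark must be present on the case
        return sum(1 for m in cases.values() if wanted <= m)
    # a single mark, or marks joined by 'or': any overlap selects the case
    return sum(1 for m in cases.values() if wanted & m)

print(count_selected(['musen']))                 # 3
print(count_selected(['musen', 'yuze']))         # 5
print(count_selected(['musen', 'yuze'], 'and'))  # 1
```

The three counts match the "3 selected", "5 selected" and "1 selected" lines in the session output above.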

The next article will explain the test report to you.

Origin blog.csdn.net/a448335587/article/details/132816227