Pytest automated testing - some plug-ins you must know

Pytest has a rich plug-in architecture, with more than 800 external plug-ins and an active community; plug-ins can be identified on PyPI by the "pytest-*" name prefix.

This article demonstrates several plug-ins that each have more than 200 GitHub stars.

Plug-in index: http://plugincompat.herokuapp.com/


1. pytest-html: used to generate HTML reports

A complete test needs a report, but pytest's built-in console output is minimal; pytest-html generates a clear HTML report for you.

Install:
pip install -U pytest-html
Example:

# test_sample.py
import pytest
# import time

# function under test
def add(x, y):
    # time.sleep(1)
    return x + y

# test class
class TestLearning:
    data = [
        [3, 4, 7],
        [-3, 4, 1],
        [3, -4, -1],
        [-3, -4, 7],  # deliberately wrong: add(-3, -4) is -7, to demonstrate a failure in the report
    ]
    @pytest.mark.parametrize("data", data)
    def test_add(self, data):
        assert add(data[0], data[1]) == data[2]

Run:

E:\workspace-py\Pytest>pytest test_sample.py --html=report/index.html
========================================================================== test session starts ==========================================================================
platform win32 -- Python 3.7.3, pytest-6.0.2, py-1.9.0, pluggy-0.13.0
rootdir: E:\workspace-py\Pytest
plugins: allure-pytest-2.8.18, cov-2.10.1, html-3.0.0, rerunfailures-9.1.1, xdist-2.1.0
collected 4 items                                                                                                                                                        

test_sample.py ...F                                                                                                                                                [100%]

=============================================================================== FAILURES ================================================================================
_____________________________________________________________________ TestLearning.test_add[data3] ______________________________________________________________________

self = <test_sample.TestLearning object at 0x00000000036B6AC8>, data = [-3, -4, 7]

    @pytest.mark.parametrize("data", data)
    def test_add(self, data):
>       assert add(data[0], data[1]) == data[2]
E       assert -7 == 7
E        +  where -7 = add(-3, -4)

test_sample.py:20: AssertionError
------------------------------------------------- generated html file: file://E:\workspace-py\Pytest\report\index.html --------------------------------------------------
======================================================================== short test summary info ========================================================================
FAILED test_sample.py::TestLearning::test_add[data3] - assert -7 == 7
====================================================================== 1 failed, 3 passed in 0.14s ======================================================================

After the run, an HTML file and an assets folder containing the CSS styles are generated. Open the HTML file in a browser to view the results clearly.
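
By default the styles are written to that separate assets folder. If you need a single shareable file instead, pytest-html also provides the --self-contained-html option, which inlines the CSS into the report (a documented option of the plug-in; shown here as a usage sketch):

E:\workspace-py\Pytest>pytest test_sample.py --html=report/index.html --self-contained-html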


In a later article I will cover a clearer and better-looking report plug-in: allure-python.


2. pytest-cov: used to generate coverage reports

In unit testing, code coverage is a common metric for judging how thorough the tests are; it is sometimes even used to assess whether a testing task is complete.

Install:
pip install -U pytest-cov
Run:

E:\workspace-py\Pytest>pytest --cov=.
========================================================================== test session starts ==========================================================================
platform win32 -- Python 3.7.3, pytest-6.0.2, py-1.9.0, pluggy-0.13.0
rootdir: E:\workspace-py\Pytest
plugins: allure-pytest-2.8.18, cov-2.10.1, html-3.0.0, rerunfailures-9.1.1, xdist-2.1.0
collected 4 items                                                                                                                                                        

test_sample.py ....                                                                                                                                                [100%]

----------- coverage: platform win32, python 3.7.3-final-0 -----------
Name             Stmts   Miss  Cover
------------------------------------
conftest.py          5      3    40%
test_sample.py       7      0   100%
------------------------------------
TOTAL               12      3    75%


=========================================================================== 4 passed in 0.06s ===========================================================================
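
The terminal table is only the default output. pytest-cov also takes a --cov-report option to choose other formats; for example (standard pytest-cov options, sketched here for reference):

pytest --cov=. --cov-report=term-missing
pytest --cov=. --cov-report=html

The first variant also lists the line numbers that were never executed; the second writes a browsable HTML report (to the htmlcov folder by default).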


3. pytest-xdist: run tests in parallel across multiple CPUs or machines

Speed up a run by distributing tests across multiple CPUs. Use -n NUMCPUS to specify the number of CPUs, or -n auto to detect the available CPUs and use them all.

Install:
pip install -U pytest-xdist
Example:

# test_sample.py
import pytest
import time

# function under test
def add(x, y):
    time.sleep(3)
    return x + y

# test class
class TestAdd:
    def test_first(self):
        assert add(3, 4) == 7

    def test_second(self):
        assert add(-3, 4) == 1

    def test_three(self):
        assert add(3, -4) == -1

    def test_four(self):
        assert add(-3, -4) == 7

Run:
E:\workspace-py\Pytest>pytest test_sample.py
========================================================================== test session starts ==========================================================================
platform win32 -- Python 3.7.3, pytest-6.0.2, py-1.9.0, pluggy-0.13.0
rootdir: E:\workspace-py\Pytest
plugins: allure-pytest-2.8.18, cov-2.10.1, html-3.0.0, rerunfailures-9.1.1, xdist-2.1.0
collected 4 items                                                                                                                                                        

test_sample.py ....                                                                                                                                                [100%]

========================================================================== 4 passed in 12.05s ===========================================================================

E:\workspace-py\Pytest>pytest test_sample.py -n auto
========================================================================== test session starts ==========================================================================
platform win32 -- Python 3.7.3, pytest-6.0.2, py-1.9.0, pluggy-0.13.0
rootdir: E:\workspace-py\Pytest
plugins: allure-pytest-2.8.18, assume-2.3.3, cov-2.10.1, forked-1.3.0, html-3.0.0, rerunfailures-9.1.1, xdist-2.1.0
gw0 [4] / gw1 [4] / gw2 [4] / gw3 [4]
....                                                                                                                                                               [100%]
=========================================================================== 4 passed in 5.35s ===========================================================================

E:\workspace-py\Pytest>pytest test_sample.py -n 2
========================================================================== test session starts ==========================================================================
platform win32 -- Python 3.7.3, pytest-6.0.2, py-1.9.0, pluggy-0.13.0
rootdir: E:\workspace-py\Pytest
plugins: allure-pytest-2.8.18, assume-2.3.3, cov-2.10.1, forked-1.3.0, html-3.0.0, rerunfailures-9.1.1, xdist-2.1.0
gw0 [4] / gw1 [4]
....                                                                                                                                                               [100%]
=========================================================================== 4 passed in 7.65s ===========================================================================


The three runs above are: no parallelism, all 4 CPUs enabled (-n auto), and 2 CPUs enabled. Judging from the elapsed times (12.05s vs. 5.35s vs. 7.65s), parallel execution clearly cuts the overall running time of your test cases.
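
Because pytest-xdist runs tests in separate worker processes, tests should not depend on each other's state or ordering. If related tests must stay together, xdist's documented --dist option can group them; for example (shown as a sketch):

pytest test_sample.py -n 4 --dist loadfile
pytest test_sample.py -n 4 --dist loadscope

With loadfile, all tests in one file run on the same worker; with loadscope, tests are grouped by class or module.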


4. pytest-rerunfailures: automatically re-run failed test cases

Tests can fail for transient reasons: network jitter in an interface test, an element that is slow to refresh in a web test, and so on. Re-running the failed cases helps rule out such intermittent faults.

Install:
pip install -U pytest-rerunfailures
Run:
E:\workspace-py\Pytest>pytest test_sample.py --reruns 3
========================================================================== test session starts ==========================================================================
platform win32 -- Python 3.7.3, pytest-6.0.2, py-1.9.0, pluggy-0.13.0
rootdir: E:\workspace-py\Pytest
plugins: allure-pytest-2.8.18, cov-2.10.1, html-3.0.0, rerunfailures-9.1.1, xdist-2.1.0
collected 4 items                                                                                                                                                        

test_sample.py ...R [100%]R [100%]R [100%]F [100%]

=============================================================================== FAILURES ================================================================================
___________________________________________________________________________ TestAdd.test_four ___________________________________________________________________________

self = <test_sample.TestAdd object at 0x00000000045FBF98>

    def test_four(self):
>       assert add(-3, -4) == 7
E       assert -7 == 7
E        +  where -7 = add(-3, -4)

test_sample.py:22: AssertionError
======================================================================== short test summary info ========================================================================
FAILED test_sample.py::TestAdd::test_four - assert -7 == 7
================================================================= 1 failed, 3 passed, 3 rerun in 0.20s ==================================================================


If you want to set a retry interval, use the --reruns-delay parameter to specify the delay length (in seconds).

If you only want to rerun on specific errors, use the --only-rerun parameter with a regular expression to match against; pass it multiple times to match multiple error types.

pytest --reruns 5 --reruns-delay 1 --only-rerun AssertionError --only-rerun ValueError

If you want to mark a single test for automatic rerun on failure, apply @pytest.mark.flaky() and specify the number of retries and the delay between them.

import random
import pytest

@pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_example():
    # Passes only about half the time; the marker retries it up to 5 times,
    # waiting 2 seconds between attempts.
    assert random.choice([True, False])
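
Reruns can also be combined with the pytest-xdist parallel execution shown earlier (both plug-ins were active in the sessions above); for example:

pytest test_sample.py -n auto --reruns 3 --reruns-delay 1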


5. pytest-randomly: run tests in random order

Randomizing the order of your tests makes it easier to uncover hidden dependencies and flaws in the tests themselves, and exercises your system more broadly.

Install:
pip install -U pytest-randomly
Run:

E:\workspace-py\Pytest>pytest test_sample.py
========================================================================== test session starts ==========================================================================
platform win32 -- Python 3.7.3, pytest-6.0.2, py-1.9.0, pluggy-0.13.0
Using --randomly-seed=3687888105
rootdir: E:\workspace-py\Pytest
plugins: allure-pytest-2.8.18, cov-2.10.1, html-3.0.0, randomly-3.5.0, rerunfailures-9.1.1, xdist-2.1.0
collected 4 items                                                                                                                                                        

test_sample.py F...                                                                                                                                                [100%]

=============================================================================== FAILURES ================================================================================
___________________________________________________________________________ TestAdd.test_four ___________________________________________________________________________

self = <test_sample.TestAdd object at 0x000000000567AD68>

    def test_four(self):
>       assert add(-3, -4) == 7
E       assert -7 == 7
E        +  where -7 = add(-3, -4)

test_sample.py:22: AssertionError
======================================================================== short test summary info ========================================================================
FAILED test_sample.py::TestAdd::test_four - assert -7 == 7
====================================================================== 1 failed, 3 passed in 0.13s ======================================================================

E:\workspace-py\Pytest>pytest test_sample.py
========================================================================== test session starts ==========================================================================
platform win32 -- Python 3.7.3, pytest-6.0.2, py-1.9.0, pluggy-0.13.0
Using --randomly-seed=3064422675
rootdir: E:\workspace-py\Pytest
plugins: allure-pytest-2.8.18, assume-2.3.3, cov-2.10.1, forked-1.3.0, html-3.0.0, randomly-3.5.0, rerunfailures-9.1.1, xdist-2.1.0
collected 4 items                                                                                                                                                        

test_sample.py ...F                                                                                                                                                [100%]

=============================================================================== FAILURES ================================================================================
___________________________________________________________________________ TestAdd.test_four ___________________________________________________________________________

self = <test_sample.TestAdd object at 0x00000000145EA940>

    def test_four(self):
>       assert add(-3, -4) == 7
E       assert -7 == 7
E        +  where -7 = add(-3, -4)

test_sample.py:22: AssertionError
======================================================================== short test summary info ========================================================================
FAILED test_sample.py::TestAdd::test_four - assert -7 == 7
====================================================================== 1 failed, 3 passed in 0.12s ======================================================================

This behavior is enabled by default once the plug-in is installed, but it can be disabled with a flag (if you don't need random ordering, the simpler option is to not install the module):

pytest -p no:randomly

If you want to pin the random order, specify a seed with the --randomly-seed parameter, or pass the value last to reuse the seed from the previous run.

pytest --randomly-seed=4321
pytest --randomly-seed=last
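
A related convenience: pytest-randomly also reseeds Python's random module before each test based on the session seed, so "random" test data becomes reproducible for a given --randomly-seed. A minimal sketch of what that means in practice:

# test_random_data.py
import random

def test_generates_reproducible_data():
    # With pytest-randomly installed, random was reseeded before this test,
    # so the same --randomly-seed produces the same value here on every run.
    value = random.randint(0, 1000)
    assert 0 <= value <= 1000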


6. Other active plug-ins

There are also other active plug-ins, some custom-built for particular frameworks and others providing compatibility with other testing tools. I won't demonstrate them here for the time being; a brief list:

pytest-django : For testing Django applications (Python web framework).

pytest-flask : For testing Flask applications (Python web framework).

pytest-splinter : Compatible with Splinter web automated testing tool.

pytest-selenium : Compatible with Selenium web automation testing tool.

pytest-testinfra : Test the actual status of servers configured by management tools such as Salt, Ansible, Puppet, Chef, etc.

pytest-mock : Provides a mocker fixture for creating mock objects that replace individual dependencies in a test.

pytest-factoryboy : Used in conjunction with the factoryboy tool to generate a variety of data.

pytest-qt : Supports writing tests for PyQt5 and PySide2 applications.

pytest-asyncio : For testing asynchronous code using pytest.

pytest-bdd : Implements a subset of the Gherkin language to enable automated project requirements testing and facilitate behavior-driven development.

pytest-watch : A CLI tool that watches your files and automatically re-runs pytest when they change.

pytest-testmon : Automatically selects and re-executes only the tests affected by recent code changes.

pytest-assume : Allows multiple failed assertions to be reported per test.

pytest-ordering : Ordering functionality for test cases.

pytest-sugar : Shows failures and errors immediately with a progress bar.

pytest-repeat : Executes one or more tests repeatedly, a specified number of times.



Origin: blog.csdn.net/ada4656/article/details/135116552