Pytest automated testing framework: test case execution status and failed-test rerun (with worked examples)


Foreword

Test case execution status

After execution, every test case ends in a status. The common statuses are:
passed: the test passed
failed: an assertion in the test body failed
error: the test itself is broken, typically because code outside the test body raises (for example, the fixture does not exist, or the fixture itself raises an error)
xfail: an expected failure, marked with @pytest.mark.xfail()

error example 1: fixture does not exist

def pwd():
    print("get username")
    a = "yygirl"
    assert a == "yygirl123"


def test_1(pwd):
    assert pwd == "yygirl"

Why is it an error?
pwd is not registered with @pytest.fixture(), so pytest cannot find a fixture named pwd and reports the case as error.
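A sketch of the fix: register pwd with @pytest.fixture() (and drop the failing assertion) so the test_1 parameter resolves correctly:

```python
import pytest


@pytest.fixture()
def pwd():
    # now registered as a fixture, so pytest can inject it into test_1
    print("get username")
    return "yygirl"


def test_1(pwd):
    assert pwd == "yygirl"
```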

error example 2: the fixture itself raises

import pytest


@pytest.fixture()
def user():
    print("get username")
    a = "yygirl"
    assert a == "yygirl123"
    return a


def test_1(user):
    assert user == "yygirl"

Why is it an error?
The assertion inside the fixture fails, so the fixture raises during setup;
because test_1 depends on a broken fixture, the case is reported as error, which points to a problem in the test code itself.

failed example 1

import pytest


@pytest.fixture()
def pwd():
    print("get password")
    a = "yygirl"
    return a


def test_2(pwd):
    assert pwd == "yygirl123"

Why is it failed?
The assertion on the value returned by the fixture fails inside the test body.

failed example 2

import pytest


@pytest.fixture()
def pwd():
    print("get password")
    a = "polo"
    return a


def test_2(pwd):
    raise NameError
    assert pwd == "polo"

Why is it failed?
An exception (NameError) is raised inside the test body during execution, before the assertion is even reached.

If the test body raises an exception, whether thrown deliberately or caused by a bug, the case is failed;
if the fixture the test depends on raises, or the parameters passed in are broken, the case is error;
in a test report, the more error cases there are, the worse the quality of the test code itself.

xfail example

Assertion decorator:

import pytest


@pytest.mark.xfail(raises=ZeroDivisionError)
def test_f():
    1 / 0

Why is it xfail?
The code raises an exception that matches the class given in raises=, so the case is reported as xfail (a kind of pass, meaning the caught exception matches expectations), not failed.

If the raised exception does not match the class given in raises=, the case is reported as failed.

Failed-test rerun plugin: pytest-rerunfailures

Environment
The following prerequisites are required to use pytest-rerunfailures:
Python 3.5 up to 3.8, or PyPy3
pytest 5.0 or later

Install the plugin

pip3 install pytest-rerunfailures -i http://pypi.douban.com/simple/ --trusted-host pypi.douban.com

Plugin essentials:
Command-line options: --reruns n (number of reruns), --reruns-delay m (seconds to wait between reruns)
Decorator parameters: reruns=n (number of reruns), reruns_delay=m (seconds to wait between reruns)

Rerun all failed tests
To rerun all failed tests, use the --reruns command-line option and specify the maximum number of reruns:

pytest --reruns 5 -s 

Fixtures or setup_class methods that fail will also be re-executed

Adding a reruns delay
To add a delay between retries, use the --reruns-delay command-line option and specify the number of seconds to wait before the next retry:

pytest --reruns 5 --reruns-delay 10 -s

Rerun specified test cases
Add the flaky decorator @pytest.mark.flaky(reruns=5) to a single test case to rerun it automatically when it fails; the maximum number of reruns must be specified.

example:

import pytest


@pytest.mark.flaky(reruns=5)
def test_example():
    import random
    assert random.choice([True, False, False])

Execution result:

collecting ... collected 1 item

11_reruns.py::test_example RERUN                                         [100%]
11_reruns.py::test_example PASSED                                        [100%]

========================= 1 passed, 1 rerun in 0.05s ==========================

Similarly, the rerun delay can also be specified:

@pytest.mark.flaky(reruns=5, reruns_delay=2)
def test_example():
    import random
    assert random.choice([True, False, False])

Note:
If the number of reruns is specified on a test case via the decorator, the command-line --reruns option will not take effect for that case.
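A minimal sketch of this precedence rule (hypothetical test names; assumes pytest-rerunfailures is installed): when run with pytest --reruns 2, test_plain is retried up to 2 times, while test_marked keeps its own limit of 5 because the decorator takes precedence.

```python
import pytest


def test_plain():
    # no decorator: the command-line --reruns 2 applies
    assert True


@pytest.mark.flaky(reruns=5)
def test_marked():
    # decorated: reruns=5 wins; the command-line --reruns is ignored here
    assert True
```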

Compatibility issues
The flaky decorator cannot be applied to fixtures (@pytest.fixture())
This plugin is not compatible with pytest-xdist's --looponfail flag
This plugin is not compatible with the core --pdb flag



Origin blog.csdn.net/m0_70102063/article/details/131516800