pytest Chapter 1: Installation and Running Test Cases

Study notes on the pytest framework

Summary: study notes on the pytest framework, a record of pytest-related knowledge points, covering installing pytest and running test cases.

Basic pytest usage

Prerequisites:

  • Familiarity with Python syntax
  • Some understanding of software testing
  • Python 3 and PyCharm installed

Installing pytest

After creating the project in PyCharm, create a new file whose name starts with **test_**. First, run the following command in the terminal to install pytest:

pip3 install pytest

Checking the pytest version

Method 1: run pip3 show pytest

(venv) zydeMacBook-Air:learnpytest zy$ pip3 show pytest
Name: pytest
Version: 6.2.5
Summary: pytest: simple powerful testing with Python
Home-page: https://docs.pytest.org/en/latest/
Author: Holger Krekel, Bruno Oliveira, Ronny Pfannschmidt, Floris Bruynooghe, Brianna Laugher, Florian Bruhin and others
Author-email: 
License: MIT
Location: /Users/zy/PycharmProjects/learnpytest/venv/lib/python3.7/site-packages
Requires: attrs, importlib-metadata, iniconfig, packaging, pluggy, py, toml
Required-by: 
(venv) zydeMacBook-Air:learnpytest zy$ 

Method 2: run pytest --version

(venv) zydeMacBook-Air:learnpytest zy$ pytest --version
pytest 6.2.5

Viewing pytest's command-line options

You can use pytest -h or pytest --help to view pytest's command-line options.

(venv) zydeMacBook-Air:learnpytest zy$ pytest -h
usage: pytest [options] [file_or_dir] [file_or_dir] [...]

positional arguments:
  file_or_dir

general:
  -k EXPRESSION         only run tests which match the given substring expression. An expression is a python evaluatable expression where
                        all names are substring-matched against test names and their parent classes. Example: -k 'test_method or test_other'
                        matches all test functions and classes whose name contains 'test_method' or 'test_other', while -k 'not test_method'
                        matches those that don't contain 'test_method' in their names. -k 'not test_method and not test_other' will
                        eliminate the matches. Additionally keywords are matched to classes and functions containing extra names in their
                        'extra_keyword_matches' set, as well as functions which have names assigned directly to them. The matching is case-
                        insensitive.
  -m MARKEXPR           only run tests matching given mark expression.
                        For example: -m 'mark1 and not mark2'.
  --markers             show markers (builtin, plugin and per-project ones).
  -x, --exitfirst       exit instantly on first error or failed test.
  --fixtures, --funcargs
                        show available fixtures, sorted by plugin appearance (fixtures with leading '_' are only shown with '-v')
  --fixtures-per-test   show fixtures per test
  --pdb                 start the interactive Python debugger on errors or KeyboardInterrupt.
  --pdbcls=modulename:classname
                        start a custom interactive Python debugger on errors. For example: --pdbcls=IPython.terminal.debugger:TerminalPdb
  --trace               Immediately break when running each test.
  --capture=method      per-test capturing method: one of fd|sys|no|tee-sys.
  -s                    shortcut for --capture=no.
  --runxfail            report the results of xfail tests as if they were not marked
  --lf, --last-failed   rerun only the tests that failed at the last run (or all if none failed)
  --ff, --failed-first  run all tests, but run the last failures first.
                        This may re-order tests and thus lead to repeated fixture setup/teardown.
  --nf, --new-first     run tests from new files first, then the rest of the tests sorted by file mtime
  --cache-show=[CACHESHOW]
                        show cache contents, don't perform collection or tests. Optional argument: glob (default: '*').
  --cache-clear         remove all cache contents at start of test run.
  --lfnf={all,none}, --last-failed-no-failures={all,none}
                        which tests to run with no previously (known) failures.
  --sw, --stepwise      exit on test failure and continue from last failing test next time
  --sw-skip, --stepwise-skip
                        ignore the first failing test but stop on the next failing test

reporting:
  --durations=N         show N slowest setup/test durations (N=0 for all).
  --durations-min=N     Minimal duration in seconds for inclusion in slowest list. Default 0.005
  -v, --verbose         increase verbosity.
  --no-header           disable header
  --no-summary          disable summary
  -q, --quiet           decrease verbosity.
  --verbosity=VERBOSE   set verbosity. Default is 0.
  -r chars              show extra test summary info as specified by chars: (f)ailed, (E)rror, (s)kipped, (x)failed, (X)passed, (p)assed,
                        (P)assed with output, (a)ll except passed (p/P), or (A)ll. (w)arnings are enabled by default (see --disable-
                        warnings), 'N' can be used to reset the list. (default: 'fE').
  --disable-warnings, --disable-pytest-warnings
                        disable warnings summary
  -l, --showlocals      show locals in tracebacks (disabled by default).
  --tb=style            traceback print mode (auto/long/short/line/native/no).
  --show-capture={no,stdout,stderr,log,all}
                        Controls how captured stdout/stderr/log is shown on failed tests. Default is 'all'.
  --full-trace          don't cut any tracebacks (default is to cut).
  --color=color         color terminal output (yes/no/auto).
  --code-highlight={yes,no}
                        Whether code should be highlighted (only if --color is also enabled)
  --pastebin=mode       send failed|all info to bpaste.net pastebin service.
  --junit-xml=path      create junit-xml style report file at given path.
  --junit-prefix=str    prepend prefix to classnames in junit-xml output

pytest-warnings:
  -W PYTHONWARNINGS, --pythonwarnings=PYTHONWARNINGS
                        set which warnings to report, see -W option of python itself.
  --maxfail=num         exit after first num failures or errors.
  --strict-config       any warnings encountered while parsing the `pytest` section of the configuration file raise errors.
  --strict-markers      markers not registered in the `markers` section of the configuration file raise errors.
  --strict              (deprecated) alias to --strict-markers.
  -c file               load configuration from `file` instead of trying to locate one of the implicit configuration files.
  --continue-on-collection-errors
                        Force test execution even if collection errors occur.
  --rootdir=ROOTDIR     Define root directory for tests. Can be relative path: 'root_dir', './root_dir', 'root_dir/another_dir/'; absolute
                        path: '/home/user/root_dir'; path with variables: '$HOME/root_dir'.

collection:
  --collect-only, --co  only collect tests, don't execute them.
  --pyargs              try to interpret all arguments as python packages.
  --ignore=path         ignore path during collection (multi-allowed).
  --ignore-glob=path    ignore path pattern during collection (multi-allowed).
  --deselect=nodeid_prefix
                        deselect item (via node id prefix) during collection (multi-allowed).
  --confcutdir=dir      only load conftest.py's relative to specified dir.
  --noconftest          Don't load any conftest.py files.
  --keep-duplicates     Keep duplicate tests.
  --collect-in-virtualenv
                        Don't ignore tests in a local virtualenv directory
  --import-mode={prepend,append,importlib}
                        prepend/append to sys.path when importing test modules and conftest files, default is to prepend.
  --doctest-modules     run doctests in all .py modules
  --doctest-report={none,cdiff,ndiff,udiff,only_first_failure}
                        choose another output format for diffs on doctest failure
  --doctest-glob=pat    doctests file matching pattern, default: test*.txt
  --doctest-ignore-import-errors
                        ignore doctest ImportErrors
  --doctest-continue-on-failure
                        for a given doctest, continue to run after the first failure

test session debugging and configuration:
  --basetemp=dir        base temporary directory for this test run.(warning: this directory is removed if it exists)
  -V, --version         display pytest version and information about plugins.When given twice, also display information about plugins.
  -h, --help            show help message and configuration info
  -p name               early-load given plugin module name or entry point (multi-allowed).
                        To avoid loading of plugins, use the `no:` prefix, e.g. `no:doctest`.
  --trace-config        trace considerations of conftest.py files.
  --debug               store internal tracing debug information in 'pytestdebug.log'.
  -o OVERRIDE_INI, --override-ini=OVERRIDE_INI
                        override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.
  --assert=MODE         Control assertion debugging tools.
                        'plain' performs no assertion debugging.
                        'rewrite' (the default) rewrites assert statements in test modules on import to provide assert expression
                        information.
  --setup-only          only setup fixtures, do not execute tests.
  --setup-show          show setup of fixtures while executing tests.
  --setup-plan          show what fixtures and tests would be executed but don't execute anything.

logging:
  --log-level=LEVEL     level of messages to catch/display.
                        Not set by default, so it depends on the root/parent log handler's effective level, where it is "WARNING" by
                        default.
  --log-format=LOG_FORMAT
                        log format as used by the logging module.
  --log-date-format=LOG_DATE_FORMAT
                        log date format as used by the logging module.
  --log-cli-level=LOG_CLI_LEVEL
                        cli logging level.
  --log-cli-format=LOG_CLI_FORMAT
                        log format as used by the logging module.
  --log-cli-date-format=LOG_CLI_DATE_FORMAT
                        log date format as used by the logging module.
  --log-file=LOG_FILE   path to a file when logging will be written to.
  --log-file-level=LOG_FILE_LEVEL
                        log file logging level.
  --log-file-format=LOG_FILE_FORMAT
                        log format as used by the logging module.
  --log-file-date-format=LOG_FILE_DATE_FORMAT
                        log date format as used by the logging module.
  --log-auto-indent=LOG_AUTO_INDENT
                        Auto-indent multiline messages passed to the logging module. Accepts true|on, false|off or an integer.

[pytest] ini-options in the first pytest.ini|tox.ini|setup.cfg file found:

  markers (linelist):   markers for test functions
  empty_parameter_set_mark (string):
                        default marker for empty parametersets
  norecursedirs (args): directory patterns to avoid for recursion
  testpaths (args):     directories to search for tests when no files or directories are given in the command line.
  filterwarnings (linelist):
                        Each line specifies a pattern for warnings.filterwarnings. Processed after -W/--pythonwarnings.
  usefixtures (args):   list of default fixtures to be used with this project
  python_files (args):  glob-style file patterns for Python test module discovery
  python_classes (args):
                        prefixes or glob names for Python test class discovery
  python_functions (args):
                        prefixes or glob names for Python test function and method discovery
  disable_test_id_escaping_and_forfeit_all_rights_to_community_support (bool):
                        disable string escape non-ascii characters, might cause unwanted side effects(use at your own risk)
  console_output_style (string):
                        console output: "classic", or with additional progress information ("progress" (percentage) | "count").
  xfail_strict (bool):  default for the strict parameter of xfail markers when not given explicitly (default: False)
  enable_assertion_pass_hook (bool):
                        Enables the pytest_assertion_pass hook.Make sure to delete any previously generated pyc cache files.
  junit_suite_name (string):
                        Test suite name for JUnit report
  junit_logging (string):
                        Write captured log messages to JUnit report: one of no|log|system-out|system-err|out-err|all
  junit_log_passing_tests (bool):
                        Capture log information for passing tests to JUnit report:
  junit_duration_report (string):
                        Duration time to report: one of total|call
  junit_family (string):
                        Emit XML for schema: one of legacy|xunit1|xunit2
  doctest_optionflags (args):
                        option flags for doctests
  doctest_encoding (string):
                        encoding used for doctest files
  cache_dir (string):   cache directory path.
  log_level (string):   default value for --log-level
  log_format (string):  default value for --log-format
  log_date_format (string):
                        default value for --log-date-format
  log_cli (bool):       enable log display during test run (also known as "live logging").
  log_cli_level (string):
                        default value for --log-cli-level
  log_cli_format (string):
                        default value for --log-cli-format
  log_cli_date_format (string):
                        default value for --log-cli-date-format
  log_file (string):    default value for --log-file
  log_file_level (string):
                        default value for --log-file-level
  log_file_format (string):
                        default value for --log-file-format
  log_file_date_format (string):
                        default value for --log-file-date-format
  log_auto_indent (string):
                        default value for --log-auto-indent
  faulthandler_timeout (string):
                        Dump the traceback of all threads if a test takes more than TIMEOUT seconds to finish.
  addopts (args):       extra command line options
  minversion (string):  minimally required pytest version
  required_plugins (args):
                        plugins that must be present for pytest to run

environment variables:
  PYTEST_ADDOPTS           extra command line options
  PYTEST_PLUGINS           comma-separated plugins to load during startup
  PYTEST_DISABLE_PLUGIN_AUTOLOAD set to disable plugin auto-loading
  PYTEST_DEBUG             set to enable debug tracing of pytest's internals


to see available markers type: pytest --markers
to see available fixtures type: pytest --fixtures
(shown according to specified file_or_dir or current dir if not specified; fixtures with leading '_' are only shown with the '-v' option
(venv) zydeMacBook-Air:learnpytest zy$ 

pytest test case rules

  • File names follow the pattern test_*.py or *_test.py

  • Test functions start with **test_**

  • Test classes start with **Test**, their methods start with **test_**, and the classes must not define an **__init__** method

  • Every package must contain an **__init__.py** file

  • Use the plain assert statement for assertions (a minimal example follows below)
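
For reference, here is a minimal test file that follows the rules above. It is only a sketch; the file, class, and test names are illustrative and do not come from the original post.

# test_sample.py - illustrative sketch following the naming rules above

class TestSample:                 # class name starts with "Test", no __init__ defined
    def test_addition(self):      # method name starts with "test_"
        assert 1 + 1 == 2         # plain assert statement

def test_subtraction():           # module-level function starts with "test_"
    assert 2 - 1 == 1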

Running pytest test cases from the terminal

There are three ways to run pytest test cases from the command line:

  • pytest
  • py.test
  • python -m pytest

If you run any of the commands above in a folder without arguments, pytest runs every qualifying test case in that folder. For the qualifying conditions, see the pytest test case rules above.

Example:

The test file test_aaa.py already exists in the learnpytest project folder. We run it from the command line.

  • pytest
(venv) zydeMacBook-Air:learnpytest zy$ pytest
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 2 items                                                                                                                          

test_aaa.py ..                                                                                                                       [100%]

============================================================ 2 passed in 0.03s =============================================================
  • py.test
(venv) zydeMacBook-Air:learnpytest zy$ py.test
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 2 items                                                                                                                          

test_aaa.py ..                                                                                                                       [100%]

============================================================ 2 passed in 0.01s =============================================================
  • python -m pytest
(venv) zydeMacBook-Air:learnpytest zy$ python -m pytest
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 2 items                                                                                                                          

test_aaa.py ..                                                                                                                       [100%]

============================================================ 2 passed in 0.02s =============================================================

Rules for running test cases

There are two test files, test_aaa.py and test_bbb.py, in the learnpytest project directory. The test structure looks like this:

-learnpytest
  -test_aaa.py
    -class TestCase
      -def test001
      -def test002
    -class TestCall
      -def test001
  -test_bbb.py
    -class TestLogin
      -def test001
    -def test002
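
The contents of test_aaa.py appear later in this chapter; test_bbb.py is never shown, but based on the structure above and the run output later in the chapter it presumably looks roughly like this (a reconstruction, not the original file):

# test_bbb.py - a sketch reconstructed from the structure above and the
# run output shown later in this chapter; not the original file.

class TestLogin:
    def test001(self):
        print("执行test001>>>>>>>>>")
        assert True

def test002():
    print("执行test002>>>>>>>>>")
    assert True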

Running all test cases in a given directory

If you want to run all the test cases directly from inside this directory, you can simply enter pytest or one of the other commands (see Running pytest test cases from the terminal above). But what if you want to run them from outside the target directory?

Use pytest followed by the path to the target directory:

zydeMacBook-Air:~ zy$ pytest /Users/zy/PycharmProjects/learnpytest
============================= test session starts ==============================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy
collected 5 items                                                              

PycharmProjects/learnpytest/test_aaa.py ...                              [ 60%]
PycharmProjects/learnpytest/test_bbb.py ..                               [100%]

============================== 5 passed in 0.03s ===============================

Running all test cases in a given .py file

To run all the test cases in test_aaa.py, use pytest followed by the path to the target file:

zydeMacBook-Air:~ zy$ pytest /Users/zy/PycharmProjects/learnpytest/test_aaa.py
============================= test session starts ==============================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy
collected 3 items                                                              

PycharmProjects/learnpytest/test_aaa.py ...                              [100%]

============================== 3 passed in 0.02s ===============================

Matching and running test cases by keyword

The -k option runs the tests whose names match the given string expression. The expression may use Python operators (and, or, not), and file names, class names, and function names are the substrings being matched.

Run the cases in test_bbb.py except those containing 002. (We add the -s option so that print output is shown, making it clear which case actually ran.)

(venv) zydeMacBook-Air:learnpytest zy$ pytest -s -k "bbb and not 002"
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 5 items / 4 deselected / 1 selected                                                                                              

test_bbb.py 执行test001>>>>>>>>>
.

===================================================== 1 passed, 4 deselected in 0.02s ======================================================
(venv) zydeMacBook-Air:learnpytest zy$ 

Run the cases in class TestCase in test_aaa.py except those containing 001.

(venv) zydeMacBook-Air:learnpytest zy$ pytest -s -k "aaa and Case and not 001"
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 5 items / 4 deselected / 1 selected                                                                                              

test_aaa.py 执行test002>>>>>>>>>
.

===================================================== 1 passed, 4 deselected in 0.02s ======================================================

Running by node ID

Each collected test is assigned a unique node ID, consisting of the module file name followed by specifiers such as the class name, the function name, and any parametrization arguments, separated by :: characters. The following examples add the -s option for clearer output.

Run the test method test001 of class TestCase in module test_aaa.py: pytest test_aaa.py::TestCase::test001

(venv) zydeMacBook-Air:learnpytest zy$ pytest -s test_aaa.py::TestCase::test001
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 1 item                                                                                                                           

test_aaa.py 执行test001>>>>>>>>>
.

============================================================ 1 passed in 0.01s =============================================================
(venv) zydeMacBook-Air:learnpytest zy$ 

Run the test function test002 in module test_bbb.py: pytest test_bbb.py::test002

(venv) zydeMacBook-Air:learnpytest zy$ pytest -s test_bbb.py::test002
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 1 item                                                                                                                           

test_bbb.py 执行test002>>>>>>>>>
.

============================================================ 1 passed in 0.01s =============================================================

Running test cases by mark expression

To make this example easier to follow, we first add the @pytest.mark.kk mark decorator to the test002 method of TestCase and to the test001 method of TestCall in test_aaa.py.

import pytest
class TestCase:
    def test001(self):
        print("执行test001>>>>>>>>>")
        assert True

    @pytest.mark.kk
    def test002(self):
        print("执行test002>>>>>>>>>")
        assert True

class TestCall:
    @pytest.mark.kk
    def test001(self):
        print("执行test001>>>>>>>>>")
        assert True

Next, we use a mark expression to run only the tests marked kk: pytest -m kk

(venv) zydeMacBook-Air:learnpytest zy$ pytest -s -m kk
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 5 items / 3 deselected / 2 selected                                                                                              

test_aaa.py 执行test002>>>>>>>>>
.执行test001>>>>>>>>>
.

============================================================= warnings summary =============================================================
test_aaa.py:7
  /Users/zy/PycharmProjects/learnpytest/test_aaa.py:7: PytestUnknownMarkWarning: Unknown pytest.mark.kk - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.kk

test_aaa.py:13
  /Users/zy/PycharmProjects/learnpytest/test_aaa.py:13: PytestUnknownMarkWarning: Unknown pytest.mark.kk - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.kk

-- Docs: https://docs.pytest.org/en/stable/warnings.html
=============================================== 2 passed, 3 deselected, 2 warnings in 0.03s ================================================
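
Incidentally, the PytestUnknownMarkWarning entries in the output appear because the kk mark is not registered. As the warning itself suggests, custom marks can be registered in the configuration file (see the markers ini option in the help output above). A minimal sketch, not part of the original post:

# pytest.ini (sketch) - registering the custom "kk" mark silences the warning
[pytest]
markers =
    kk: demo mark used in this chapter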

Running from a package

For this test, we create a new package named jjj in the learnpytest project directory and move test_aaa.py into jjj.

(venv) zydeMacBook-Air:learnpytest zy$ pytest --pyargs jjj.test_aaa
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 0 items                                                                                                                          

========================================================== no tests ran in 0.00s ===========================================================
ERROR: module or package not found: jjj.test_aaa (missing __init__.py?)

The run failed: pytest could not import jjj.test_aaa, and the error message points out that the jjj directory is missing an __init__.py file, which violates the package rule listed above.
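
Based on the error message and the package rule above, a presumable fix (not verified in the original post) is to add an empty __init__.py to the jjj directory so that it becomes an importable package:

-learnpytest
  -jjj
    -__init__.py      (empty file; makes jjj importable as a package)
    -test_aaa.py

After that, pytest --pyargs jjj.test_aaa should be able to locate the module.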

Stopping the run at the first failure

We use the -x option to make the run stop as soon as a failure is encountered.

File test_aaa.py:

import pytest
class TestCase:
    def test001(self):
        print("执行test001>>>>>>>>>")
        assert True

    @pytest.mark.kk
    def test002(self):
        print("执行test002>>>>>>>>>")
        assert False

class TestCall:
    @pytest.mark.kk
    def test001(self):
        print("执行test001>>>>>>>>>")
        assert True

In the file above, test002 is set to fail (assert False). We run with the -x option so that execution stops at the first failure. The results are shown below: after test002 fails, test001 in TestCall is never executed.

(venv) zydeMacBook-Air:learnpytest zy$ pytest -x test_aaa.py 
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 3 items                                                                                                                          

test_aaa.py .F

================================================================= FAILURES =================================================================
_____________________________________________________________ TestCase.test002 _____________________________________________________________

self = <test_aaa.TestCase object at 0x107373150>

    @pytest.mark.kk
    def test002(self):
        print("执行test002>>>>>>>>>")
>       assert False
E       assert False

test_aaa.py:10: AssertionError
----------------------------------------------------------- Captured stdout call -----------------------------------------------------------
执行test002>>>>>>>>>
============================================================= warnings summary =============================================================
test_aaa.py:7
  /Users/zy/PycharmProjects/learnpytest/test_aaa.py:7: PytestUnknownMarkWarning: Unknown pytest.mark.kk - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.kk

test_aaa.py:13
  /Users/zy/PycharmProjects/learnpytest/test_aaa.py:13: PytestUnknownMarkWarning: Unknown pytest.mark.kk - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.kk

-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================= short test summary info ==========================================================
FAILED test_aaa.py::TestCase::test002 - assert False
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================================= 1 failed, 1 passed, 2 warnings in 0.09s ==================================================

Extension: let's explore whether execution also stops when an error occurs even though the test's assertion would succeed.

File test_aaa.py:

import pytest
class TestCase:
    def test001(self):
        print("执行test001>>>>>>>>>")
        assert True

    @pytest.mark.kk
    def test002(self):
        print("执行test002>>>>>>>>>")
        a=[1,2]
        print(a[3])
        assert True

class TestCall:
    @pytest.mark.kk
    def test001(self):
        print("执行test001>>>>>>>>>")
        assert True

(venv) zydeMacBook-Air:learnpytest zy$ pytest -s -x test_aaa.py 
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 3 items                                                                                                                          

test_aaa.py 执行test001>>>>>>>>>
.执行test002>>>>>>>>>
F

================================================================= FAILURES =================================================================
_____________________________________________________________ TestCase.test002 _____________________________________________________________

self = <test_aaa.TestCase object at 0x10d8c3110>

    @pytest.mark.kk
    def test002(self):
        print("执行test002>>>>>>>>>")
        a=[1,2]
>       print(a[3])
E       IndexError: list index out of range

test_aaa.py:11: IndexError
============================================================= warnings summary =============================================================
test_aaa.py:7
  /Users/zy/PycharmProjects/learnpytest/test_aaa.py:7: PytestUnknownMarkWarning: Unknown pytest.mark.kk - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.kk

test_aaa.py:15
  /Users/zy/PycharmProjects/learnpytest/test_aaa.py:15: PytestUnknownMarkWarning: Unknown pytest.mark.kk - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.kk

-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================= short test summary info ==========================================================
FAILED test_aaa.py::TestCase::test002 - IndexError: list index out of range
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
================================================= 1 failed, 1 passed, 2 warnings in 0.08s ==================================================

As you can see, even though the assert is True, an error raised during execution counts as a failed test, so the stop-on-failure behavior still applies.

Stopping the run after a specified number of failures

Use the **--maxfail** option to stop the run once the number of failed cases reaches the specified number.

import pytest
class TestCase:
    def test001(self):
        print("执行test001>>>>>>>>>")
        assert False

    @pytest.mark.kk
    def test002(self):
        print("执行test002>>>>>>>>>")
        assert False

class TestCall:
    @pytest.mark.kk
    def test001(self):
        print("执行test001>>>>>>>>>")
        assert True

Above is the file test_aaa.py. The assertions in test001 and test002 fail.

We configure the run to stop when the number of failures reaches 2.

(venv) zydeMacBook-Air:learnpytest zy$ pytest --maxfail=2 test_aaa.py 
=========================================================== test session starts ============================================================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 3 items                                                                                                                          

test_aaa.py FF

================================================================= FAILURES =================================================================
_____________________________________________________________ TestCase.test001 _____________________________________________________________

self = <test_aaa.TestCase object at 0x108dbd4d0>

    def test001(self):
        print("执行test001>>>>>>>>>")
>       assert False
E       assert False

test_aaa.py:5: AssertionError
----------------------------------------------------------- Captured stdout call -----------------------------------------------------------
执行test001>>>>>>>>>
_____________________________________________________________ TestCase.test002 _____________________________________________________________

self = <test_aaa.TestCase object at 0x108dc4790>

    @pytest.mark.kk
    def test002(self):
        print("执行test002>>>>>>>>>")
>       assert False
E       assert False

test_aaa.py:10: AssertionError
----------------------------------------------------------- Captured stdout call -----------------------------------------------------------
执行test002>>>>>>>>>
============================================================= warnings summary =============================================================
test_aaa.py:7
  /Users/zy/PycharmProjects/learnpytest/test_aaa.py:7: PytestUnknownMarkWarning: Unknown pytest.mark.kk - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.kk

test_aaa.py:13
  /Users/zy/PycharmProjects/learnpytest/test_aaa.py:13: PytestUnknownMarkWarning: Unknown pytest.mark.kk - is this a typo?  You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/stable/mark.html
    @pytest.mark.kk

-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================= short test summary info ==========================================================
FAILED test_aaa.py::TestCase::test001 - assert False
FAILED test_aaa.py::TestCase::test002 - assert False
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 2 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
====================================================== 2 failed, 2 warnings in 0.09s =======================================================

How to run pytest in PyCharm

Method 1: change the project's default test runner

Go to Preferences -> Tools -> Python Integrated Tools and set the default test runner to pytest, as shown below.
[Screenshot: setting the default test runner to pytest in PyCharm preferences]

After saving, right-click the test module's name and choose Run 'pytest in xxx.py', as shown below.
[Screenshot: the Run 'pytest in xxx.py' context-menu entry]

Run result:

Testing started at 下午1:58 ...
/Users/zy/PycharmProjects/learnpytest/venv/bin/python "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pycharm/_jb_pytest_runner.py" --path /Users/zy/PycharmProjects/learnpytest/test_bbb.py
Launching pytest with arguments /Users/zy/PycharmProjects/learnpytest/test_bbb.py in /Users/zy/PycharmProjects/learnpytest

============================= test session starts ==============================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Users/zy/PycharmProjects/learnpytest/venv/bin/python
cachedir: .pytest_cache
rootdir: /Users/zy/PycharmProjects/learnpytest
collecting ... collected 2 items

test_bbb.py::TestLogin::test001 PASSED                                   [ 50%]执行test001>>>>>>>>>

test_bbb.py::test002 PASSED                                              [100%]执行test002>>>>>>>>>


============================== 2 passed in 0.02s ===============================

Process finished with exit code 0

Method 2: write pytest launch code in PyCharm

Import the pytest module and call pytest.main():

import pytest
if __name__ == '__main__':
    pytest.main(['test_bbb.py'])

Run result:

/Users/zy/PycharmProjects/learnpytest/venv/bin/python /Users/zy/PycharmProjects/learnpytest/test_bbb.py
============================= test session starts ==============================
platform darwin -- Python 3.7.4, pytest-6.2.5, py-1.11.0, pluggy-1.0.0
rootdir: /Users/zy/PycharmProjects/learnpytest
collected 2 items

test_bbb.py ..                                                           [100%]

============================== 2 passed in 0.01s ===============================

Process finished with exit code 0
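
pytest.main() accepts a list of command-line arguments, so the options used in the terminal examples above can also be passed here. A small sketch (using the same illustrative file name as above):

import pytest

if __name__ == '__main__':
    # Equivalent to running "pytest -s -v test_bbb.py" in the terminal:
    # -s shows print output, -v increases verbosity.
    pytest.main(['-s', '-v', 'test_bbb.py'])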

Source: blog.csdn.net/u011090984/article/details/122105976