unittest --- unit testing framework

Source code: Lib/unittest/__init__.py


(If you are already familiar with the basic concepts of testing, you might want to skip directly to the section Assertion methods.)

The unittest unit testing framework was originally inspired by JUnit and has a similar flavor to major unit testing frameworks in other languages. It supports test automation, sharing of setup and shutdown code for tests, aggregation of tests into collections, and independence of the tests from the reporting framework.

To achieve this, unittest supports some important concepts in an object-oriented way:

test fixture

A test fixture represents the preparation needed to perform one or more tests, and any associated cleanup actions. This may involve, for example, creating temporary or proxy databases, directories, or starting a server process.

test case

A test case is the individual unit of testing. It checks for a specific response to a particular set of inputs. unittest provides a base class, TestCase, which may be used to create new test cases.

test suite

A test suite is a collection of test cases, test suites, or both. It is used to aggregate tests that should be executed together.

test runner

A test runner is a component which orchestrates the execution of tests and provides the outcome to the user. The runner may use a graphical interface, a textual interface, or return a special value to indicate the results of executing the tests.

See also

doctest --- Document test module

Another testing module with a completely different style.

Simple Smalltalk Testing: With Patterns

Kent Beck's original paper on testing frameworks using the pattern shared by unittest.

pytest

A third-party unit testing framework that provides lightweight syntax for writing tests, for example: assert func(10) == 42.

Python testing tool classification

A comprehensive list of Python testing tools, including testing frameworks and mock object libraries.

Testing in Python mailing list

A special interest group discussing testing and testing tools in Python.

The script Tools/unittestgui/unittestgui.py in the Python source distribution is a GUI tool for discovering and executing tests. It is mainly intended for ease of use by those new to unit testing. For production environments, it is recommended that tests be driven by a continuous integration system such as Buildbot, Jenkins, GitHub Actions, or AppVeyor.

Basic example

The unittest module provides a rich set of tools for constructing and running tests. This section demonstrates a small subset of those tools, which is sufficient to meet the needs of most users.

Here is a short piece of code to test three string methods:

import unittest

class TestStringMethods(unittest.TestCase):

    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()

A test case is created by subclassing unittest.TestCase. The three individual tests above are defined as methods whose names start with test. This naming convention informs the test runner which methods represent tests.

The crux of each test is a call to assertEqual() to check for an expected result, assertTrue() or assertFalse() to verify a condition, or assertRaises() to verify that a specific exception gets raised. These methods are used instead of the assert statement so that the test runner can accumulate all test results and produce a report.

The setUp() and tearDown() methods allow you to define instructions that will be executed before and after each test method. They are covered in more detail in the section Organizing your test code.

The final block shows a simple way to run the tests. unittest.main() provides a command-line interface to the test script. When run from the command line, the above script produces output that looks like this:

...
----------------------------------------------------------------------
Ran 3 tests in 0.000s

OK

Passing the -v option to your test script instructs unittest.main() to enable a higher level of verbosity, producing the following output:

test_isupper (__main__.TestStringMethods.test_isupper) ... ok
test_split (__main__.TestStringMethods.test_split) ... ok
test_upper (__main__.TestStringMethods.test_upper) ... ok

----------------------------------------------------------------------
Ran 3 tests in 0.001s

OK

The above examples demonstrate the most commonly used unittest features, which are sufficient to meet many everyday testing needs. The remainder of the documentation explores the full feature set of the framework.

Changed in version 3.11: Returning a value from a test method (instead of returning the default None value) is now deprecated.

Command-line interface

The unittest module can run tests for modules, classes, and individual test methods via the command line:

python -m unittest test_module1 test_module2
python -m unittest test_module.TestClass
python -m unittest test_module.TestClass.test_method

You can pass in a module name, a class or method name, or any combination of these.

Likewise, test modules can be specified via file paths:

python -m unittest tests/test_something.py

This allows you to use the shell's filename completion to specify the test module. The file specified must still be importable as a module. The path is converted to a module name by removing the '.py' suffix and converting path separators into '.'. If you want to execute a test file that isn't importable as a module, you should execute the file directly instead.
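The path-to-module conversion described above can be sketched like this (an illustrative helper, not unittest's actual implementation):

```python
import os

def path_to_module_name(path):
    # Drop the '.py' suffix and turn path separators into dots,
    # mirroring how unittest derives a module name from a file path.
    base, _ = os.path.splitext(path)
    return base.replace(os.sep, '.').replace('/', '.')

print(path_to_module_name('tests/test_something.py'))  # tests.test_something
```

The resulting dotted name is what unittest actually imports.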

You can run tests with more detail (higher verbosity) by passing in the -v flag:

python -m unittest -v test_module

When executed without arguments, test discovery is started:

python -m unittest

For a list of all the command-line options:

python -m unittest -h

Changed in version 3.2: In earlier versions it was only possible to run individual test methods, not modules or classes.

Command line options

unittest supports these command-line options:

-b, --buffer

The standard output and standard error streams are buffered during the test run. Output during a passing test is discarded. Output is echoed normally on test failure or error, and is added to the failure messages.

-c, --catch

Control-C during the test run waits for the current test to end and then reports all the results so far. A second Control-C raises the normal KeyboardInterrupt exception.

See Signal Handling for the functions that provide this functionality.

-f, --failfast

Stop running the test on the first error or failure.

-k

Only run test methods and classes that match the pattern or substring. This option may be used multiple times, in which case all test cases that match any of the given patterns are included.

Patterns that contain a wildcard character (*) are matched against the test name using fnmatch.fnmatchcase(); otherwise simple case-sensitive substring matching is used.

Patterns are matched against the fully qualified test method name as imported by the test loader.

For example, -k foo matches foo_tests.SomeTest.test_something and bar_tests.SomeTest.test_foo, but not bar_tests.FooTest.test_something.
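The matching rule can be approximated with fnmatch.fnmatchcase() (a sketch of the behaviour described above, not unittest's internal code):

```python
from fnmatch import fnmatchcase

def name_matches(pattern, test_name):
    # Wildcard patterns use fnmatch-style, case-sensitive matching;
    # plain patterns fall back to case-sensitive substring matching.
    if '*' in pattern:
        return fnmatchcase(test_name, pattern)
    return pattern in test_name

print(name_matches('foo', 'bar_tests.SomeTest.test_foo'))       # True (substring)
print(name_matches('foo', 'bar_tests.FooTest.test_something'))  # False (case-sensitive)
```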

--locals

Show local variables in traceback.

--durations N

Display the N slowest test cases (N=0 means all).

New in version 3.2: The command-line options -b, -c and -f were added.

New in version 3.5: The command-line option --locals.

New in version 3.7: The command-line option -k.

New in version 3.12: The command-line option --durations.

The command line can also be used for test discovery, for running all of a project's tests or only a subset of them.

Test discovery

New in version 3.2.

unittest supports simple test discovery. In order to be compatible with test discovery, all of the test files must be modules or packages importable from the top-level directory of the project (this means that their filenames must be valid identifiers).

Test discovery is implemented in TestLoader.discover(), but can also be used from the command line. The basic command-line usage is:

cd project_directory
python -m unittest discover

Note

As a shortcut, python -m unittest is the equivalent of python -m unittest discover. If you want to pass arguments to test discovery, the discover subcommand must be used explicitly.

The discover subcommand has the following options:

-v, --verbose

Output the results in more detail.

-s, --start-directory directory

The directory to start searching from (the default value is the current directory . ).

-p, --pattern pattern

Pattern used to match test files (default is test*.py ).

-t, --top-level-directory directory

The top-level directory of the project (defaults to the start directory).

The -s, -p, and -t options can be passed in as positional arguments in that order. The following two command lines are equivalent:

python -m unittest discover -s project_directory -p "*_test.py"
python -m unittest discover project_directory "*_test.py"

As well as being a path, it is possible to pass a package name, for example myproject.subpackage.test, as the start directory. The package name you supply will then be imported, and its location on the filesystem will be used as the start directory.

Caution

Test discovery loads tests by importing them. Once test discovery has found all the test files from the start directory you specify, it turns the paths into package names to import. For example, foo/bar/baz.py will be imported as foo.bar.baz.

If you have a package installed globally and attempt test discovery on a different copy of the package, the import could happen from the wrong place. If this happens, test discovery will warn you and exit.

If you supply the start directory as a package name rather than a path to a directory, then discover assumes that whichever location it imports from is the location you intended, so you will not get the warning.

Test modules and packages can customize test loading and discovery through the load_tests protocol.

Changed in version 3.4: Test discovery supports namespace packages for the start directory. Note that you need to specify the top-level directory too (for example: python -m unittest discover -s root/namespace -t root).

Changed in version 3.11: Python 3.11 dropped namespace package support. It has been broken since Python 3.7. The start directory and its subdirectories containing tests must be regular packages that have __init__.py files.

Directories containing the start directory can still be a namespace package. In this case, you need to specify the start directory and the target directory explicitly as dotted package names. For example:

# proj/  <-- current directory
#   namespace/
#     mypkg/
#       __init__.py
#       test_mypkg.py

python -m unittest discover -s namespace.mypkg -t .

Organizing your test code

The basic building blocks of unit testing are test cases: single scenarios that must be set up and checked for correctness. In unittest, test cases are represented by instances of unittest.TestCase. To make your own test cases you must write subclasses of TestCase or use FunctionTestCase.

The testing code of a TestCase instance should be entirely self-contained, such that it can be run either in isolation or in arbitrary combination with any number of other test cases.

The simplest TestCase subclass will simply implement a test method (i.e. a method whose name starts with test) in order to perform specific testing code:

import unittest

class DefaultWidgetSizeTestCase(unittest.TestCase):
    def test_default_widget_size(self):
        widget = Widget('The widget')
        self.assertEqual(widget.size(), (50, 50))

Note that in order to test something, we use one of the assert* methods provided by the TestCase base class. If the test fails, an exception will be raised with an explanatory message, and unittest will identify the test case as a failure. Any other exceptions will be treated as errors.

Multiple tests may require the same setup to run. We can factor this setup out of the test code by implementing a setUp() method, which the testing framework will automatically call before every single test we run:

import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def test_default_widget_size(self):
        self.assertEqual(self.widget.size(), (50,50),
                         'incorrect default size')

    def test_widget_resize(self):
        self.widget.resize(100,150)
        self.assertEqual(self.widget.size(), (100,150),
                         'wrong size after resize')

Note

The order in which the various tests will be run is determined by sorting the test method names with respect to the built-in ordering for strings.
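For example, sorting the method names shows the order in which they will run (a sketch of the default ordering; the names are taken from the WidgetTestCase example below):

```python
# Test methods run in the order obtained by sorting their names as strings.
names = ['test_widget_resize', 'test_default_widget_size']
print(sorted(names))  # ['test_default_widget_size', 'test_widget_resize']
```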

If the setUp() method raises an exception while the test is running, the framework will consider the test to have suffered an error, and the test method will not be executed.

Similarly, we can provide a tearDown() method that tidies up after the test method has been run:

import unittest

class WidgetTestCase(unittest.TestCase):
    def setUp(self):
        self.widget = Widget('The widget')

    def tearDown(self):
        self.widget.dispose()

If setUp() succeeded, tearDown() will be run whether the test method succeeded or not.

Such a working environment for the testing code is called a test fixture. A new TestCase instance is created as a unique test fixture used to execute each individual test method. Thus setUp(), tearDown(), and __init__() will be called once per test.

It is recommended that you use TestCase implementations to group tests together according to the features they test. unittest provides a mechanism for this: the test suite, represented by unittest's TestSuite class. In most cases, calling unittest.main() will do the right thing and collect all the module's test cases for you and execute them.

However, if you need to customize your test suite, you can organize your tests as follows:

def suite():
    suite = unittest.TestSuite()
    suite.addTest(WidgetTestCase('test_default_widget_size'))
    suite.addTest(WidgetTestCase('test_widget_resize'))
    return suite

if __name__ == '__main__':
    runner = unittest.TextTestRunner()
    runner.run(suite())

You can place the definitions of test cases and test suites in the same modules as the code they are to test (such as widget.py), but there are several advantages to placing the test code in a separate module, such as test_widget.py:

  • The test module can be run standalone from the command line.

  • The test code can more easily be separated from shipped code.

  • There is less temptation to change test code to fit the code it tests without a good reason.

  • Test code should be modified much less frequently than the code it tests.

  • Tested code can be refactored more easily.

  • Tests for modules written in C must be in separate modules anyway, so why not be consistent?

  • If the testing strategy changes, there is no need to change the source code.

Reuse existing test code

Some users will find that they have existing test code that they would like to run from unittest, without converting every old test function to a TestCase subclass.

For this reason, unittest provides the FunctionTestCase class. This subclass of TestCase can be used to wrap an existing test function, and supports optional setup and tear-down functions as well.

Suppose there is a test function:

def testSomething():
    something = makeSomething()
    assert something.name is not None
    # ...

one can create an equivalent test case instance as follows, with optional setup and tear-down methods:

testcase = unittest.FunctionTestCase(testSomething,
                                     setUp=makeSomethingDB,
                                     tearDown=deleteSomethingDB)

Note

Even though FunctionTestCase can be used to quickly convert an existing test base over to a unittest-based system, this approach is not recommended. Taking the time to set up proper TestCase subclasses will make future test refactorings infinitely easier.

In some cases, the existing tests may have been written using the doctest module. If so, doctest provides a DocTestSuite class that can automatically build unittest.TestSuite instances from the existing doctest-based tests.
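A sketch of wrapping doctests this way, using a throwaway in-memory module (the sample module here is hypothetical, standing in for a real module whose docstrings contain doctests):

```python
import doctest
import types
import unittest

# A stand-in for a real module whose docstrings contain doctests.
mod = types.ModuleType('sample')
mod.__doc__ = """
>>> 1 + 1
2
"""

# DocTestSuite collects the doctests into regular unittest test cases.
suite = unittest.TestSuite()
suite.addTests(doctest.DocTestSuite(mod))
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```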

Skipping tests and expected failures

New in version 3.1.

unittest supports skipping individual test methods and even whole classes of tests. In addition, it supports marking a test as an "expected failure": a test that is broken and will fail, but shouldn't be counted as a failure in a TestResult.

Skipping a test is simply a matter of using the skip() decorator or one of its conditional variants, calling TestCase.skipTest() within a setUp() or test method, or raising SkipTest directly.

The basic usage of skip testing is as follows:

class MyTestCase(unittest.TestCase):

    @unittest.skip("demonstrating skipping")
    def test_nothing(self):
        self.fail("shouldn't happen")

    @unittest.skipIf(mylib.__version__ < (1, 3),
                     "not supported in this library version")
    def test_format(self):
        # Tests that work for only a certain version of the library.
        pass

    @unittest.skipUnless(sys.platform.startswith("win"), "requires Windows")
    def test_windows_support(self):
        # windows specific testing code
        pass

    def test_maybe_skipped(self):
        if not external_resource_available():
            self.skipTest("external resource not available")
        # test code that depends on the external resource
        pass

When running the above test example in verbose mode, the program output is as follows:

test_format (__main__.MyTestCase.test_format) ... skipped 'not supported in this library version'
test_nothing (__main__.MyTestCase.test_nothing) ... skipped 'demonstrating skipping'
test_maybe_skipped (__main__.MyTestCase.test_maybe_skipped) ... skipped 'external resource not available'
test_windows_support (__main__.MyTestCase.test_windows_support) ... skipped 'requires Windows'

----------------------------------------------------------------------
Ran 4 tests in 0.005s

OK (skipped=4)

Classes can be skipped just like methods:

@unittest.skip("showing class skipping")
class MySkippedTestCase(unittest.TestCase):
    def test_not_run(self):
        pass

Tests can also be skipped from within TestCase.setUp(). This is useful when a resource that needs to be set up is not available.

Expected failures use the expectedFailure() decorator:

class ExpectedFailureTestCase(unittest.TestCase):
    @unittest.expectedFailure
    def test_fail(self):
        self.assertEqual(1, 0, "broken")

It's easy to roll your own skipping decorators by making a decorator that calls skip() on the test when it wants it to be skipped. The following decorator skips a test unless the passed object has a certain attribute:

def skipUnlessHasattr(obj, attr):
    if hasattr(obj, attr):
        return lambda func: func
    return unittest.skip("{!r} doesn't have {!r}".format(obj, attr))
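Applying the decorator then works like the built-in skipping decorators. Below is a self-contained sketch (the Duck class is hypothetical, and the decorator is repeated so the snippet runs on its own):

```python
import unittest

class Duck:
    def quack(self):
        return 'quack'

def skipUnlessHasattr(obj, attr):
    if hasattr(obj, attr):
        return lambda func: func
    return unittest.skip("{!r} doesn't have {!r}".format(obj, attr))

class DuckTests(unittest.TestCase):
    @skipUnlessHasattr(Duck(), 'quack')
    def test_quack(self):
        self.assertEqual(Duck().quack(), 'quack')

    @skipUnlessHasattr(Duck(), 'bark')
    def test_bark(self):
        self.fail('never runs; Duck has no bark attribute')

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.TestLoader().loadTestsFromTestCase(DuckTests))
print(len(result.skipped))  # 1
```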

The following decorators and exceptions implement the functions of skipping tests and expecting failures:

@unittest.skip(reason)

Unconditionally skip the decorated test. reason should describe why the test is being skipped.

@unittest.skipIf(condition, reason)

Skip the decorated test if condition is true.

@unittest.skipUnless(condition, reason)

Skip the decorated test unless condition is true.

@unittest.expectedFailure

Mark the test as an expected failure or error. If the test fails or errors in the test function itself (rather than in one of the test fixture methods) then it will be considered a success. If the test passes, it will be considered a failure.

exception unittest.SkipTest(reason)

Raise this exception to skip a test.

Usually you can use TestCase.skipTest() or one of the skipping decorators instead of raising this exception directly.

Skipped tests will not have setUp() or tearDown() run around them. Skipped classes will not have setUpClass() or tearDownClass() run. Skipped modules will not have setUpModule() or tearDownModule() run.

Use subtests to differentiate test iterations

New in version 3.4.

When there are very small differences among your tests, for instance some parameters, unittest allows you to distinguish them inside the body of a test method using the subTest() context manager.

For example, the following test:

class NumbersTest(unittest.TestCase):

    def test_even(self):
        """
        Test that numbers between 0 and 5 are all even.
        """
        for i in range(0, 6):
            with self.subTest(i=i):
                self.assertEqual(i % 2, 0)

The following output can be obtained:

======================================================================
FAIL: test_even (__main__.NumbersTest.test_even) (i=1)
Test that numbers between 0 and 5 are all even.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtests.py", line 11, in test_even
    self.assertEqual(i % 2, 0)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1 != 0

======================================================================
FAIL: test_even (__main__.NumbersTest.test_even) (i=3)
Test that numbers between 0 and 5 are all even.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtests.py", line 11, in test_even
    self.assertEqual(i % 2, 0)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1 != 0

======================================================================
FAIL: test_even (__main__.NumbersTest.test_even) (i=5)
Test that numbers between 0 and 5 are all even.
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtests.py", line 11, in test_even
    self.assertEqual(i % 2, 0)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: 1 != 0

Without using a subtest, execution would stop after the first failure, and the error would be harder to diagnose because the value of i wouldn't be displayed:

======================================================================
FAIL: test_even (__main__.NumbersTest.test_even)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "subtests.py", line 32, in test_even
    self.assertEqual(i % 2, 0)
AssertionError: 1 != 0

Classes and functions

This section provides an in-depth introduction to the API of unittest .

test case

class unittest.TestCase(methodName='runTest')

Instances of the TestCase class represent the logical test units in the unittest universe. This class is intended to be used as a base class, with specific tests being implemented by concrete subclasses. This class implements the interface needed by the test runner to allow it to drive the tests, and methods that the test code can use to check for and report various kinds of failure.

Each instance of TestCase will run a single base method: the method named methodName. In most uses of TestCase, you will neither change the methodName nor reimplement the default runTest() method.

Changed in version 3.2: TestCase can be instantiated successfully without providing a methodName. This makes it easier to experiment with TestCase from the interactive interpreter.

TestCase instances provide three groups of methods: one group used to run the test, another used by the test implementation to check conditions and report failures, and some inquiry methods allowing information about the test itself to be gathered.

The first set of methods (for running tests) are:

setUp()

Method called to prepare the test fixture. This is called immediately before calling the test method; other than AssertionError or SkipTest, any exception raised by this method will be considered an error rather than a test failure. The default implementation does nothing.

tearDown()

Method called immediately after the test method has been called and the result recorded. This is called even if the test method raised an exception, so the implementation in subclasses may need to be particularly careful about checking internal state. Any exception, other than AssertionError or SkipTest, raised by this method will be considered an additional error rather than a test failure (thus increasing the total number of reported errors). This method will only be called if setUp() succeeds, regardless of the outcome of the test method. The default implementation does nothing.

setUpClass()

A class method called before tests in an individual class are run. setUpClass is called with the class as the only argument and must be decorated as a classmethod():

@classmethod
def setUpClass(cls):
    ...

See Class and Module Fixtures for more detailed instructions.

New in version 3.2.

tearDownClass()

A class method called after tests in an individual class have run. tearDownClass is called with the class as the only argument and must be decorated as a classmethod():

@classmethod
def tearDownClass(cls):
    ...

See Class and Module Fixtures for more detailed instructions.

New in version 3.2.

run(result=None)

Run the test, collecting the result into the TestResult object passed as result. If result is omitted or None, a temporary result object is created (by calling the defaultTestResult() method) and used. The result object is returned to run()'s caller.

The same effect can be achieved by simply calling the TestCase instance.

Changed in version 3.3: Previous versions of run did not return the result. Neither did calling an instance.
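A minimal sketch of calling run() directly with an explicit result object (the Demo class is made up for illustration):

```python
import unittest

class Demo(unittest.TestCase):
    def test_ok(self):
        self.assertEqual(2 + 2, 4)

result = unittest.TestResult()
# run() records the outcome in the passed result and returns it.
returned = Demo('test_ok').run(result)
print(returned is result, result.testsRun, result.wasSuccessful())  # True 1 True
```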

skipTest(reason)

Calling this method during the execution of a test method or setUp() will skip the current test. See Skipping tests and expected failures for details.

New in version 3.1.

subTest(msg=None, **params)

Return a context manager which executes the enclosed code block as a subtest. msg and params are optional, arbitrary values which are displayed whenever a subtest fails, allowing you to identify them clearly.

A test case can contain any number of subtest statements, and they can be nested arbitrarily.

See Using subtests to differentiate test iterations for more detailed information.

New in version 3.4.

debug()

Run tests without collecting results. This allows exceptions raised by the test to be passed to the caller and can be used to support running the test in a debugger.

The TestCase class provides some assertion methods for checking and reporting failures. The following table lists the most commonly used methods (see the other tables below for more assertion methods):

Method                       Checks that             New in

assertEqual(a, b)            a == b
assertNotEqual(a, b)         a != b
assertTrue(x)                bool(x) is True
assertFalse(x)               bool(x) is False
assertIs(a, b)               a is b                  3.1
assertIsNot(a, b)            a is not b              3.1
assertIsNone(x)              x is None               3.1
assertIsNotNone(x)           x is not None           3.1
assertIn(a, b)               a in b                  3.1
assertNotIn(a, b)            a not in b              3.1
assertIsInstance(a, b)       isinstance(a, b)        3.2
assertNotIsInstance(a, b)    not isinstance(a, b)    3.2

All the assert methods accept a msg argument that, if specified, is used as the error message on failure (see also longMessage). Note that the msg keyword argument can be passed to assertRaises(), assertRaisesRegex(), assertWarns() and assertWarnsRegex() only when they are used as a context manager.

assertEqual(first, second, msg=None)

Test that first and second are equal. If the values do not compare equal, the test will fail.

In addition, if first and second are the exact same type and one of list, tuple, dict, set, frozenset or str, or any type that a subclass registers with addTypeEqualityFunc(), the type-specific equality function will be called in order to generate a more useful default error message (see also the list of type-specific methods).

Changed in version 3.1: Added the automatic calling of type-specific equality functions.

Changed in version 3.2: Added assertMultiLineEqual() as the default type equality test function for comparing strings.
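A sketch of registering a type-specific equality function via addTypeEqualityFunc() (the Point class and the assertPointEqual helper are hypothetical):

```python
import unittest

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointTest(unittest.TestCase):
    def setUp(self):
        # Called whenever both sides of assertEqual are Point instances.
        self.addTypeEqualityFunc(Point, self.assertPointEqual)

    def assertPointEqual(self, first, second, msg=None):
        if (first.x, first.y) != (second.x, second.y):
            raise self.failureException(msg or 'points differ')

    def test_equal_points(self):
        self.assertEqual(Point(1, 2), Point(1, 2))

result = unittest.TestResult()
PointTest('test_equal_points').run(result)
print(result.wasSuccessful())  # True
```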

assertNotEqual(first, second, msg=None)

Test that first and second are not equal. If the values do compare equal, the test will fail.

assertTrue(expr, msg=None)

assertFalse(expr, msg=None)

Test that expr is true (or false).

Note that this is equivalent to bool(expr) is True and not to expr is True (use assertIs(expr, True) for the latter). This method should also be avoided when more specific methods are available (e.g. assertEqual(a, b) instead of assertTrue(a == b)), because they provide a better error message in case of failure.

assertIs(first, second, msg=None)

assertIsNot(first, second, msg=None)

Test that first and second are (or are not) the same object.

New in version 3.1.

assertIsNone(expr, msg=None)

assertIsNotNone(expr, msg=None)

Test that expr is (or is not) None.

New in version 3.1.

assertIn(member, container, msg=None)

assertNotIn(member, container, msg=None)

Test that member is (or is not) in container.

New in version 3.1.

assertIsInstance(obj, cls, msg=None)

assertNotIsInstance(obj, cls, msg=None)

Test that obj is (or is not) an instance of cls (which can be a class or a tuple of classes, as supported by isinstance()). To check for the exact type, use assertIs(type(obj), cls).

New in version 3.2.
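For instance, a tuple of classes is accepted just as with isinstance() (a minimal sketch; the InstanceChecks class is made up):

```python
import unittest

class InstanceChecks(unittest.TestCase):
    def test_types(self):
        self.assertIsInstance(3, (int, float))       # a tuple of classes is accepted
        self.assertNotIsInstance('3', (int, float))
        self.assertIs(type(True), bool)              # exact type check

result = unittest.TestResult()
InstanceChecks('test_types').run(result)
print(result.wasSuccessful())  # True
```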

The following methods can also be used to check the production of exceptions, warnings, and log messages:

Method                                          Checks that                                      New in

assertRaises(exc, fun, *args, **kwds)           fun(*args, **kwds) raises exc
assertRaisesRegex(exc, r, fun, *args, **kwds)   fun(*args, **kwds) raises exc and the message    3.1
                                                matches regex r
assertWarns(warn, fun, *args, **kwds)           fun(*args, **kwds) raises warn                   3.2
assertWarnsRegex(warn, r, fun, *args, **kwds)   fun(*args, **kwds) raises warn and the message   3.2
                                                matches regex r
assertLogs(logger, level)                       The with block logs on logger with minimum       3.4
                                                level
assertNoLogs(logger, level)                     The with block does not log on logger with       3.10
                                                minimum level

assertRaises(exception, callable, *args, **kwds)

assertRaises(exception, *, msg=None)

Test that an exception is raised when callable is called with any positional or keyword arguments that are also passed to assertRaises(). The test passes if exception is raised, is an error if another exception is raised, or fails if no exception is raised. To catch any of a group of exceptions, a tuple containing the exception classes may be passed as exception.

If only the exception and possibly the msg arguments are given, return a context manager so that the code under test can be written inline rather than as a function:

with self.assertRaises(SomeException):
    do_something()

When used as a context manager, assertRaises() accepts the additional keyword argument msg.

The context manager will store the caught exception object in its exception attribute. This is useful when additional checks need to be performed on the exception being thrown:

with self.assertRaises(SomeException) as cm:
    do_something()

the_exception = cm.exception
self.assertEqual(the_exception.error_code, 3)

Changed in version 3.1: Added the ability to use assertRaises() as a context manager.

Changed in version 3.2: Added exception attribute.

Changed in version 3.3: Added msg keyword argument for use as a context manager.

assertRaisesRegex(exception, regex, callable, *args, **kwds)

assertRaisesRegex(exception, regex, *, msg=None)

Like assertRaises() but also tests that regex matches the string representation of the raised exception. regex may be a regular expression object or a string containing a regular expression suitable for use by re.search(). For example:

self.assertRaisesRegex(ValueError, "invalid literal for.*XYZ'$",
                       int, 'XYZ')

or:

with self.assertRaisesRegex(ValueError, 'literal'):
   int('XYZ')

New in version 3.1: Added under the name assertRaisesRegexp.

Changed in version 3.2: Renamed to assertRaisesRegex().

Changed in version 3.3: Added msg keyword argument for use as a context manager.

assertWarns(warning, callable, *args, **kwds)

assertWarns(warning, *, msg=None)

Test that a warning is triggered when callable is called with any positional or keyword arguments that are also passed to assertWarns(). The test passes if warning is triggered and fails if it isn't. Any exception is an error. To catch any of a group of warnings, a tuple containing the warning classes may be passed as warning.

If only the warning and possibly the msg arguments are given, return a context manager so that the code under test can be written inline rather than as a function:

with self.assertWarns(SomeWarning):
    do_something()

When used as a context manager, assertWarns() accepts the additional keyword argument msg.

The context manager will store the caught warning object in its warning attribute, and the source line which triggered the warnings in the filename and lineno attributes. This can be useful if the intention is to perform additional checks on the caught warning:

with self.assertWarns(SomeWarning) as cm:
    do_something()

self.assertIn('myfile.py', cm.filename)
self.assertEqual(320, cm.lineno)

This method works regardless of whether a warning filter is in place when called.

New in version 3.2.

Changed in version 3.3: Added msg keyword argument for use as a context manager.

assertWarnsRegex(warning, regex, callable, *args, **kwds)

assertWarnsRegex(warning, regex, *, msg=None)

Like assertWarns() but also tests that regex matches the message text of the triggered warning. regex may be a regular expression object or a string containing a regular expression suitable for use by re.search(). For example:

self.assertWarnsRegex(DeprecationWarning,
                      r'legacy_function\(\) is deprecated',
                      legacy_function, 'XYZ')

or:

with self.assertWarnsRegex(RuntimeWarning, 'unsafe frobnicating'):
    frobnicate('/etc/passwd')

New in version 3.2.

Changed in version 3.3: Added msg keyword argument for use as a context manager.

assertLogs(logger=None, level=None)

A context manager to test that at least one message is logged on logger or one of its children, with at least the given level.

If given, logger should be a logging.Logger object or a str giving the name of a logger. The default is the root logger, which will catch all messages that were not blocked by a non-propagating descendent logger.

If given, level should be either a numeric logging level or its string equivalent (for example either "ERROR" or logging.ERROR). The default is logging.INFO.

The test passes if the with block emits at least one message matching the logger and level conditions, otherwise it fails.

The object returned by the context manager is a recording helper which keeps track of the matching log messages. It has two attributes:

records

A list of logging.LogRecord objects of the matching log messages.

output

A list of str objects with the formatted output of matching messages.

Example:

with self.assertLogs('foo', level='INFO') as cm:
    logging.getLogger('foo').info('first message')
    logging.getLogger('foo.bar').error('second message')
self.assertEqual(cm.output, ['INFO:foo:first message',
                             'ERROR:foo.bar:second message'])

New in version 3.4.

assertNoLogs(logger=None, level=None)

A context manager to test that no messages are logged on logger or one of its children, with at least the given level.

If given, logger should be a logging.Logger object or a str giving the name of a logger. The default is the root logger, which will catch all messages.

If given, level should be either a numeric logging level or its string equivalent (for example either "ERROR" or logging.ERROR). The default is logging.INFO.

Unlike assertLogs(), nothing will be returned by the context manager.

New in version 3.10.

There are other methods available to perform more specialized checks, such as:

Method

Checks that

New in

assertAlmostEqual(a, b)

round(a-b, 7) == 0

assertNotAlmostEqual(a, b)

round(a-b, 7) != 0

assertGreater(a, b)

a > b

3.1

assertGreaterEqual(a, b)

a >= b

3.1

assertLess(a, b)

a < b

3.1

assertLessEqual(a, b)

a <= b

3.1

assertRegex(s, r)

r.search(s)

3.1

assertNotRegex(s, r)

not r.search(s)

3.2

assertCountEqual(a, b)

a and b have the same number of the same elements, regardless of their order.

3.2

assertAlmostEqual(first, second, places=7, msg=None, delta=None)

assertNotAlmostEqual(first, second, places=7, msg=None, delta=None)

Test that first and second are approximately (or not approximately) equal by computing the difference, rounding to the given number of decimal places (default 7), and comparing to zero. Note that these methods round the values to the given number of decimal places (i.e. like the round() function) and not significant digits.

If delta is supplied instead of places then the difference between first and second must be less or equal to (or greater than) delta.

Supplying both delta and places raises a TypeError.

Changed in version 3.2: assertAlmostEqual() automatically considers almost equal objects that compare equal. assertNotAlmostEqual() automatically fails if the objects compare equal. Added the delta keyword argument.

assertGreater(first, second, msg=None)

assertGreaterEqual(first, second, msg=None)

assertLess(first, second, msg=None)

assertLessEqual(first, second, msg=None)

Test that first is respectively >, >=, < or <= second depending on the method name. If not, the test will fail:

>>>

>>> self.assertGreaterEqual(3, 4)
AssertionError: "3" unexpectedly not greater than or equal to "4"

New in version 3.1.

assertRegex(text, regex, msg=None)

assertNotRegex(text, regex, msg=None)

Test that a regex search matches (or does not match) text. In case of failure, the error message will include the pattern and the text (or the pattern and the part of text that unexpectedly matched). regex may be a regular expression object or a string containing a regular expression suitable for use by re.search().

New in version 3.1: Added under the name assertRegexpMatches.

Changed in version 3.2: The method assertRegexpMatches() has been renamed to assertRegex().

New in version 3.2: assertNotRegex().
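For instance (a minimal sketch using a hypothetical test class), both methods accept either a pattern string or a compiled pattern object:

```python
import re
import unittest

class RegexDemo(unittest.TestCase):
    def test_regex_assertions(self):
        # a plain pattern string works ...
        self.assertRegex("unit testing", r"test\w+")
        # ... as does a compiled regular expression object
        self.assertRegex("unit testing", re.compile(r"test\w+"))
        # assertNotRegex passes when the search finds no match
        self.assertNotRegex("unit testing", r"^\d+$")

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(RegexDemo).run(result)
```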

assertCountEqual(first, second, msg=None)

Test that the sequences first and second contain the same elements, regardless of their order. When they don't, an error message listing the differences between the sequences will be generated.

Duplicate elements are not ignored when comparing first and second. It verifies whether each element has the same count in both sequences. Equivalent to: assertEqual(Counter(list(first)), Counter(list(second))) but works with sequences of unhashable objects as well.

New in version 3.2.
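A small sketch (hypothetical test class) showing that ordering is ignored while duplicate counts are not, and that unhashable elements are supported:

```python
import unittest

class CountEqualDemo(unittest.TestCase):
    def test_counts(self):
        # same elements, same counts, different order: passes
        self.assertCountEqual([1, 2, 2, 3], [3, 2, 1, 2])

    def test_unhashable(self):
        # lists are unhashable, yet the comparison still works
        self.assertCountEqual([[1], [2]], [[2], [1]])

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(CountEqualDemo).run(result)
```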

The assertEqual() method dispatches the equality check for objects of the same type to different type-specific methods. These methods are already implemented for most of the built-in types, but it's also possible to register new methods using addTypeEqualityFunc():

addTypeEqualityFunc(typeobj, function)

Registers a type-specific method called by assertEqual() to check if two objects of exactly the same typeobj (not subclasses) compare equal. function must take two positional arguments and a third msg=None keyword argument, just as assertEqual() does. It must raise self.failureException(msg) when inequality between the first two parameters is detected, possibly providing useful information and explaining the inequalities in detail in the error message.

New in version 3.1.
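A sketch of how this could be used, with a hypothetical Point type (the class and method names here are invented for illustration):

```python
import unittest

class Point:
    """A hypothetical 2D point used only for this example."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointTests(unittest.TestCase):
    def setUp(self):
        # used by assertEqual() only when both operands are exactly Point
        self.addTypeEqualityFunc(Point, self.assertPointsEqual)

    def assertPointsEqual(self, first, second, msg=None):
        # must raise self.failureException on inequality
        if (first.x, first.y) != (second.x, second.y):
            raise self.failureException(msg or "points differ")

    def test_points(self):
        self.assertEqual(Point(1, 2), Point(1, 2))

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(PointTests).run(result)
```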

The list of type-specific methods automatically used by assertEqual() is summarized in the following table. Note that it's usually not necessary to invoke these methods directly in tests.

Method

Used to compare

New in

assertMultiLineEqual(a, b)

string

3.1

assertSequenceEqual(a, b)

sequence

3.1

assertListEqual(a, b)

list

3.1

assertTupleEqual(a, b)

tuple

3.1

assertSetEqual(a, b)

sets

3.1

assertDictEqual(a, b)

dictionary

3.1

assertMultiLineEqual(first, second, msg=None)

Test that the multiline string first is equal to the string second. When not equal a diff of the two strings highlighting the differences will be included in the error message. This method is used by default when comparing strings with assertEqual().

New in version 3.1.

assertSequenceEqual(first, second, msg=None, seq_type=None)

Tests that two sequences are equal. If a seq_type is supplied, both first and second must be instances of seq_type or a failure will be raised. If the sequences are different an error message is constructed that shows the difference between the two.

This method is not called directly by assertEqual(), but it's used to implement assertListEqual() and assertTupleEqual().

New in version 3.1.

assertListEqual(first, second, msg=None)

assertTupleEqual(first, second, msg=None)

Tests that two lists or tuples are equal. If not, an error message is constructed that shows only the differences between the two. An error is also raised if either of the parameters are of the wrong type. These methods are used by default when comparing lists or tuples with assertEqual().

New in version 3.1.

assertSetEqual(first, second, msg=None)

Tests that two sets are equal. If not, an error message is constructed that lists the differences between the sets. This method is used by default when comparing sets or frozensets with assertEqual().

Fails if either of first or second does not have a set.difference() method.

New in version 3.1.

assertDictEqual(first, second, msg=None)

Test that two dictionaries are equal. If not, an error message is constructed that shows the differences in the dictionaries. This method will be used by default to compare dictionaries in calls to assertEqual().

New in version 3.1.

Finally the TestCase provides the following methods and attributes:

fail(msg=None)

Signals a test failure unconditionally, with msg or None for the error message.

failureException

This class attribute gives the exception raised by the test method. If a test framework needs to use a specialized exception, possibly to carry additional information, it must subclass this exception in order to "play fair" with the framework. The initial value of this attribute is AssertionError.

longMessage

This class attribute determines what happens when a custom failure message is passed as the msg argument to an assertXYY call that fails. True is the default value. In this case, the custom message is appended to the end of the standard failure message. When set to False, the custom message replaces the standard message.

The class setting can be overridden in individual test methods by assigning an instance attribute, self.longMessage, to True or False before calling the assert methods.

The class setting gets reset before each test call.

New in version 3.1.

maxDiff

This attribute controls the maximum length of diffs output by assert methods that report diffs on failure. It defaults to 80*8 characters. Assert methods affected by this attribute are assertSequenceEqual() (including all the sequence comparison methods that delegate to it), assertDictEqual() and assertMultiLineEqual().

Setting maxDiff to None means that there is no maximum length of diffs.

New in version 3.2.

Testing frameworks can use the following methods to collect information on the test:

countTestCases()

Return the number of tests represented by this test object. For TestCase instances, this will always be 1.

defaultTestResult()

Return an instance of the test result class that should be used for this test case class (if no other result instance is provided to the run() method).

For TestCase instances, this will always be an instance of TestResult; subclasses of TestCase should override this as necessary.

id()

Return a string identifying the specific test case. This is usually the full name of the test method, including the module and class name.

shortDescription()

Returns a description of the test, or None if no description has been provided. The default implementation of this method returns the first line of the test method's docstring, if available, or None.

Changed in version 3.1: In 3.1 this was changed to add the test name to the short description even in the presence of a docstring. This caused compatibility issues with unittest extensions and adding the test name was moved to the TextTestResult in Python 3.2.

addCleanup(function, /, *args, **kwargs)

Add a function to be called after tearDown() to cleanup resources used during the test. Functions will be called in reverse order to the order they are added (LIFO). They are called with any arguments and keyword arguments passed into addCleanup() when they are added.

If setUp() fails, meaning that tearDown() is not called, then any cleanup functions added will still be called.

New in version 3.1.

enterContext(cm)

Enter the supplied context manager. If successful, also add its __exit__() method as a cleanup function by addCleanup() and return the result of the __enter__() method.

New in version 3.11.

doCleanups()

This method is called after tearDown(), or after setUp() if setUp() raises an exception.

It is responsible for calling all the cleanup functions added by addCleanup(). If you need cleanup functions to be called prior to tearDown() then you can call doCleanups() yourself.

doCleanups() pops methods off the stack of cleanup functions one at a time, so it can be called at any time.

New in version 3.1.

classmethod addClassCleanup(function, /, *args, **kwargs)

Add a function to be called after tearDownClass() to cleanup resources used during the test class. Functions will be called in reverse order to the order they are added (LIFO). They are called with any arguments and keyword arguments passed into addClassCleanup() when they are added.

If setUpClass() fails, meaning that tearDownClass() is not called, then any cleanup functions added will still be called.

New in version 3.8.

classmethod enterClassContext(cm)

Enter the supplied context manager. If successful, also add its __exit__() method as a cleanup function by addClassCleanup() and return the result of the __enter__() method.

New in version 3.11.

classmethod doClassCleanups()

This method is called unconditionally after tearDownClass(), or after setUpClass() if setUpClass() raises an exception.

It is responsible for calling all the cleanup functions added by addClassCleanup(). If you need cleanup functions to be called prior to tearDownClass() then you can call doClassCleanups() yourself.

doClassCleanups() pops methods off the stack of cleanup functions one at a time, so it can be called at any time.

New in version 3.8.

class unittest.IsolatedAsyncioTestCase(methodName='runTest')

This class provides an API similar to TestCase and also accepts coroutines as test functions.

New in version 3.8.

coroutine asyncSetUp()

Method called to prepare the test fixture. This is called after setUp(). This is called immediately before calling the test method; other than AssertionError or SkipTest, any exception raised by this method will be considered an error rather than a test failure. The default implementation does nothing.

coroutine asyncTearDown()

Method called immediately after the test method has been called and the result recorded. This is called before tearDown(). This is called even if the test method raised an exception, so the implementation in subclasses may need to be particularly careful about checking internal state. Any exception, other than AssertionError or SkipTest, raised by this method will be considered an additional error rather than a test failure (thus increasing the total number of reported errors). This method will only be called if the asyncSetUp() succeeds, regardless of the outcome of the test method. The default implementation does nothing.

addAsyncCleanup(function, /, *args, **kwargs)

This method accepts a coroutine that can be used as a cleanup function.

coroutine enterAsyncContext(cm)

Enter the supplied asynchronous context manager. If successful, also add its __aexit__() method as a cleanup function by addAsyncCleanup() and return the result of the __aenter__() method.

New in version 3.11.

run(result=None)

Sets up a new event loop to run the test, collecting the result into the TestResult object passed as result. If result is omitted or None, a temporary result object is created (by calling the defaultTestResult() method) and used. The result object is returned to run()'s caller. At the end of the test all the tasks in the event loop are cancelled.

An example illustrating the order:

from unittest import IsolatedAsyncioTestCase

events = []


class Test(IsolatedAsyncioTestCase):


    def setUp(self):
        events.append("setUp")

    async def asyncSetUp(self):
        self._async_connection = await AsyncConnection()
        events.append("asyncSetUp")

    async def test_response(self):
        events.append("test_response")
        response = await self._async_connection.get("https://example.com")
        self.assertEqual(response.status_code, 200)
        self.addAsyncCleanup(self.on_cleanup)

    def tearDown(self):
        events.append("tearDown")

    async def asyncTearDown(self):
        await self._async_connection.close()
        events.append("asyncTearDown")

    async def on_cleanup(self):
        events.append("cleanup")

if __name__ == "__main__":
    unittest.main()

After running the test, events would contain ["setUp", "asyncSetUp", "test_response", "asyncTearDown", "tearDown", "cleanup"].

class unittest.FunctionTestCase(testFunc, setUp=None, tearDown=None, description=None)

This class implements the portion of the TestCase interface which allows the test runner to drive the test, but does not provide the methods which test code can use to check and report errors. This is used to create test cases using legacy test code, allowing it to be integrated into a unittest-based test framework.

Grouping tests

class unittest.TestSuite(tests=())

This class represents an aggregation of individual test cases and test suites. The class presents the interface needed by the test runner to allow it to be run as any other test case. Running a TestSuite instance is the same as iterating over the suite, running each test individually.

If tests is given, it must be an iterable of individual test cases or other test suites that will be used to build the suite initially. Additional methods are provided to add test cases and suites to the collection later on.

TestSuite objects behave much like TestCase objects, except they do not actually implement a test. Instead, they are used to aggregate tests into groups of tests that should be run together. Some additional methods are available to add tests to TestSuite instances:

addTest(test)

Add a TestCase or TestSuite to the suite.

addTests(tests)

Add all the tests from an iterable of TestCase and TestSuite instances to this test suite.

This is equivalent to iterating over tests, calling addTest() for each element.
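A short sketch (with made-up test classes) of building a suite by hand with addTest() and addTests():

```python
import unittest

class TestA(unittest.TestCase):
    def test_a(self):
        pass

class TestB(unittest.TestCase):
    def test_b(self):
        pass

suite = unittest.TestSuite()
# addTest() takes a single case (or suite) ...
suite.addTest(TestA("test_a"))
# ... while addTests() takes any iterable of cases/suites
suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestB))
```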

TestSuite shares the following methods with TestCase:

run(result)

Run the tests associated with this suite, collecting the result into the test result object passed as result. Note that unlike TestCase.run(), TestSuite.run() requires the result object to be passed in.

debug()

Run the tests associated with this suite without collecting the result. This allows exceptions raised by the test to be propagated to the caller, and can be used to support running tests under a debugger.

countTestCases()

Return the number of tests represented by this test object, including all individual tests and sub-suites.

__iter__()

Tests grouped by a TestSuite are always accessed by iteration. Subclasses can lazily provide tests by overriding __iter__(). Note that this method may be called several times on a single suite (for example when counting tests, or comparing for equality) so the tests returned by repeated iterations before TestSuite.run() must be the same for each call iteration. After TestSuite.run(), callers should not rely on the tests returned by this method unless the caller uses a subclass that overrides TestSuite._removeTestAtIndex() to preserve test references.

Changed in version 3.2: In earlier versions the TestSuite accessed tests directly rather than through iteration, so overriding __iter__() wasn't sufficient for providing tests.

Changed in version 3.4: In earlier versions the TestSuite held references to each TestCase after TestSuite.run(). Subclasses can restore that behavior by overriding TestSuite._removeTestAtIndex().

In the typical usage of a TestSuite object, the run() method is invoked by a TestRunner rather than by the end-user test harness.

Loading and running tests

class unittest.TestLoader

The TestLoader class is used to create test suites from classes and modules. Normally, there is no need to create an instance of this class; the unittest module provides an instance that can be shared as unittest.defaultTestLoader. Using a subclass or instance, however, allows customization of some configurable properties.

TestLoader objects have the following attributes:

errors

A list of the non-fatal errors encountered while loading tests. Not reset by the loader at any point. Fatal errors are signalled by the relevant method raising an exception to the caller. Non-fatal errors are also indicated by a synthetic test that will raise the original error when run.

New in version 3.5.

TestLoader objects have the following methods:

loadTestsFromTestCase(testCaseClass)

Return a suite of all test cases contained in the TestCase-derived testCaseClass.

A test case instance is created for each method named by getTestCaseNames(). By default these are the method names beginning with test. If getTestCaseNames() returns no methods, but the runTest() method is implemented, a single test case is created for that method instead.

loadTestsFromModule(module, *, pattern=None)

Return a suite of all test cases contained in the given module. This method searches module for classes derived from TestCase and creates an instance of the class for each test method defined for the class.

Note

While using a hierarchy of TestCase-derived classes can be convenient in sharing fixtures and helper functions, defining test methods on base classes that are not intended to be instantiated directly does not play well with this method. Doing so, however, can be useful when the fixtures are different and defined in subclasses.

If a module provides a load_tests function it will be called to load the tests. This allows modules to customize test loading. This is the load_tests protocol. The pattern argument is passed as the third argument to load_tests.

Changed in version 3.2: Support for load_tests added.

Changed in version 3.5: Support for a keyword-only argument pattern has been added.

Changed in version 3.12: The undocumented and unofficial use_load_tests parameter has been removed.

loadTestsFromName(name, module=None)

Return a suite of all test cases given a string specifier.

The specifier name is a "dotted name" that may resolve either to a module, a test case class, a test method within a test case class, a TestSuite instance, or a callable object which returns a TestCase or TestSuite instance. These checks are applied in the order listed here; that is, a method on a possible test case class will be picked up as "a test method within a test case class", rather than "a callable object".

For example, if you have a module SampleTests containing a TestCase-derived class SampleTestCase with three test methods (test_one(), test_two(), and test_three()), the specifier 'SampleTests.SampleTestCase' would cause this method to return a suite which will run all three test methods. Using the specifier 'SampleTests.SampleTestCase.test_two' would cause it to return a test suite which will run only the test_two() test method. The specifier can refer to modules and packages which have not been imported; they will be imported as a side-effect.

The method optionally resolves name relative to the given module.

Changed in version 3.5: If an ImportError or AttributeError occurs while traversing name then a synthetic test that raises that error when run will be returned. These errors are included in the errors accumulated by self.errors.
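The name resolution can be sketched without touching the filesystem by resolving relative to an in-memory module (the module and class names below are invented for illustration):

```python
import types
import unittest

# a throwaway module object standing in for an importable test module
sample = types.ModuleType("sample_tests")

class SampleTestCase(unittest.TestCase):
    def test_one(self):
        pass
    def test_two(self):
        pass

sample.SampleTestCase = SampleTestCase

loader = unittest.TestLoader()
# the dotted name is resolved relative to the given module
suite = loader.loadTestsFromName("SampleTestCase.test_two", module=sample)
```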

loadTestsFromNames(names, module=None)

Similar to loadTestsFromName(), but takes a sequence of names rather than a single name. The return value is a test suite which supports all the tests defined for each name.

getTestCaseNames(testCaseClass)

Return a sorted sequence of method names found within testCaseClass; this should be a subclass of TestCase.

discover(start_dir, pattern='test*.py', top_level_dir=None)

Find all the test modules by recursing into subdirectories from the specified start directory, and return a TestSuite object containing them. Only test files that match pattern will be loaded. (Using shell style pattern matching.) Only module names that are importable (i.e. are valid Python identifiers) will be loaded.

All test modules must be importable from the top level of the project. If the start directory is not the top level directory then the top level directory must be specified separately.

If importing a module fails, for example due to a syntax error, then this will be recorded as a single error and discovery will continue. If the import failure is due to SkipTest being raised, it will be recorded as a skip instead of an error.

If a package (a directory containing a file named __init__.py) is found, the package will be checked for a load_tests function. If this exists then it will be called package.load_tests(loader, tests, pattern). Test discovery takes care to ensure that a package is only checked for tests once during an invocation, even if the load_tests function itself calls loader.discover.

If load_tests exists then discovery does not recurse into the package, load_tests is responsible for loading all tests in the package.

The pattern is deliberately not stored as a loader attribute so that packages can continue discovery themselves. top_level_dir is stored so load_tests does not need to pass this argument in to loader.discover().

start_dir can be a dotted module name as well as a directory.

New in version 3.2.

Changed in version 3.4: Modules that raise SkipTest on import are recorded as skips, not errors.

Changed in version 3.4: start_dir can be a namespace package.

Changed in version 3.4: Paths are sorted before being imported so that execution order is the same even if the underlying file system's ordering is not dependent on file name.

Changed in version 3.5: Found packages are now checked for load_tests regardless of whether their path matches pattern, because it is impossible for a package name to match the default pattern.

Changed in version 3.11: start_dir can no longer be a namespace package. It has been broken since Python 3.7 and Python 3.11 officially removes it.

The following attributes of a TestLoader can be configured either by subclassing or assignment on an instance:

testMethodPrefix

String giving the prefix of method names which will be interpreted as test methods. The default value is 'test'.

This affects getTestCaseNames() and all the loadTestsFrom* methods.

sortTestMethodsUsing

Function to be used to compare method names when sorting them in getTestCaseNames() and all the loadTestsFrom* methods.

suiteClass

Callable object that constructs a test suite from a list of tests. No methods on the resulting object are needed. The default value is the TestSuite class.

This affects all the loadTestsFrom* methods.

testNamePatterns

List of Unix shell-style wildcard test name patterns that test methods have to match to be included in test suites (see the -k option).

If this attribute is not None (the default), all test methods to be included in test suites must match one of the patterns in this list. Note that matches are always performed using fnmatch.fnmatchcase(), so unlike patterns passed to the -k option, simple substring patterns will have to be converted using * wildcards.

This affects all the loadTestsFrom* methods.

New in version 3.7.
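For example (hypothetical test class; note the * wildcards needed by fnmatch-style matching):

```python
import unittest

class Sample(unittest.TestCase):
    def test_fast_path(self):
        pass
    def test_slow_path(self):
        pass

loader = unittest.TestLoader()
# a bare substring like "fast" would not match; wrap it in wildcards
loader.testNamePatterns = ["*fast*"]
suite = loader.loadTestsFromTestCase(Sample)
```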

class unittest.TestResult

This class is used to compile information about which tests have succeeded and which have failed.

A TestResult object stores the results of a set of tests. The TestCase and TestSuite classes ensure that results are properly recorded; test authors do not need to worry about recording the outcome of tests.

Testing frameworks built on top of unittest may want access to the TestResult object generated by running a set of tests for reporting purposes; a TestResult instance is returned by the TestRunner.run() method for this purpose.

TestResult instances have the following attributes that will be of interest when inspecting the results of running a set of tests:

errors

A list containing 2-tuples of TestCase instances and strings holding formatted tracebacks. Each tuple represents a test which raised an unexpected exception.

failures

A list containing 2-tuples of TestCase instances and strings holding formatted tracebacks. Each tuple represents a test where a failure was explicitly signalled using the assert* methods.

skipped

A list containing 2-tuples of TestCase instances and strings holding the reason for skipping the test.

New in version 3.1.

expectedFailures

A list containing 2-tuples of TestCase instances and strings holding formatted tracebacks. Each tuple represents an expected failure or error of the test case.

unexpectedSuccesses

A list containing TestCase instances that were marked as expected failures, but succeeded.

collectedDurations

A list containing 2-tuples of test case names and floats representing the elapsed time of each test which was run.

New in version 3.12.

shouldStop

Set to True when the execution of tests should stop by stop().

testsRun

The total number of tests run so far.

buffer

If set to true, sys.stdout and sys.stderr will be buffered in between startTest() and stopTest() being called. Collected output will only be echoed onto the real sys.stdout and sys.stderr if the test fails or errors. Any output is also attached to the failure / error message.

New in version 3.2.

failfast

If set to true stop() will be called on the first failure or error, halting the test run.

New in version 3.2.

tb_locals

If true, shows local variables in tracebacks.

New in version 3.5.

wasSuccessful()

Return True if all tests run so far have passed, otherwise returns False.

Changed in version 3.4: Returns False if there were any unexpectedSuccesses from tests marked with the expectedFailure() decorator.

stop()

This method can be called to signal that the set of tests being run should be aborted by setting the shouldStop attribute to True. TestRunner objects should respect this flag and return without running any additional tests.

For example, this feature is used by the TextTestRunner class to stop the test framework when the user signals an interrupt from the keyboard. Interactive tools which provide TestRunner implementations can use this in a similar manner.

The following methods of the TestResult class are used to maintain the internal data structures, and may be extended in subclasses to support additional reporting requirements. This is particularly useful in building tools which support interactive reporting while tests are being run.

startTest(test)

Called when the test case test is about to be run.

stopTest(test)

Called after the test case test has been executed, regardless of the outcome.

startTestRun()

Called once before any tests are executed.

New in version 3.1.

stopTestRun()

Called once after all tests are executed.

New in version 3.1.

addError(test, err)

Called when the test case test raises an unexpected exception. err is a tuple of the form returned by sys.exc_info(): (type, value, traceback).

The default implementation appends a tuple (test, formatted_err) to the instance's errors attribute, where formatted_err is a formatted traceback derived from err.

addFailure(test, err)

Called when the test case test signals a failure. err is a tuple of the form returned by sys.exc_info(): (type, value, traceback).

The default implementation appends a tuple (test, formatted_err) to the instance's failures attribute, where formatted_err is a formatted traceback derived from err.

addSuccess(test)

Called when the test case test succeeds.

The default implementation does nothing.

addSkip(test, reason)

Called when the test case test is skipped. reason is the reason the test gave for skipping.

The default implementation appends a tuple (test, reason) to the instance's skipped attribute.

addExpectedFailure(test, err)

Called when the test case test fails or errors, but was marked with the expectedFailure() decorator.

The default implementation appends a tuple (test, formatted_err) to the instance's expectedFailures attribute, where formatted_err is a formatted traceback derived from err.

addUnexpectedSuccess(test)

Called when the test case test was marked with the expectedFailure() decorator, but succeeded.

The default implementation appends the test to the instance's unexpectedSuccesses attribute.

addSubTest(test, subtest, outcome)

Called when a subtest finishes. test is the test case corresponding to the test method. subtest is a TestCase instance describing the subtest.

If outcome is None, the subtest succeeded. Otherwise, it failed with an exception where outcome is a tuple of the form returned by sys.exc_info(): (type, value, traceback).

The default implementation does nothing when the outcome is a success, and records subtest failures as normal failures.

New in version 3.4.

addDuration(test, elapsed)

Called when the test case finishes. elapsed is the time represented in seconds, and it includes the execution of cleanup functions.

New in version 3.12.

class unittest.TextTestResult(stream, descriptions, verbosity, *, durations=None)

A concrete implementation of TestResult used by the TextTestRunner. Subclasses should accept **kwargs to ensure compatibility as the interface changes.

New in version 3.2.

New in version 3.12: Added the durations keyword argument.

unittest.defaultTestLoader

Instance of the TestLoader class intended to be shared. If no customization of the TestLoader is needed, this instance can be used instead of repeatedly creating new instances.

class unittest.TextTestRunner(stream=None, descriptions=True, verbosity=1, failfast=False, buffer=False, resultclass=None, warnings=None, *, tb_locals=False, durations=None)

A basic test runner implementation that outputs results to a stream. If stream is None, the default, sys.stderr is used as the output stream. This class has a few configurable parameters, but is essentially very simple. Graphical applications which run test suites should provide alternate implementations. Such implementations should accept **kwargs as the interface to construct runners changes when features are added to unittest.

By default this runner shows DeprecationWarning, PendingDeprecationWarning, ResourceWarning and ImportWarning even if they are ignored by default. This behavior can be overridden using Python's -Wd or -Wa options (see Warning control) and leaving warnings to None.

Changed in version 3.2: Added the warnings parameter.

Changed in version 3.2: The default stream is set to sys.stderr at instantiation time rather than import time.

Changed in version 3.5: Added the tb_locals parameter.

Changed in version 3.12: Added the durations parameter.

_makeResult()

This method returns the instance of TestResult used by run(). It is not intended to be called directly, but can be overridden in subclasses to provide a custom TestResult.

_makeResult() instantiates the class or callable passed in the TextTestRunner constructor as the resultclass argument. It defaults to TextTestResult if no resultclass is provided. The result class is instantiated with the following arguments:

stream, descriptions, verbosity

run(test)

This method is the main public interface to the TextTestRunner. This method takes a TestSuite or TestCase instance. A TestResult is created by calling _makeResult() and the test(s) are run and the results printed to stdout.

unittest.main(module='__main__', defaultTest=None, argv=None, testRunner=None, testLoader=unittest.defaultTestLoader, exit=True, verbosity=1, failfast=None, catchbreak=None, buffer=None, warnings=None)

A command-line program that loads a set of tests from module and runs them; this is primarily for making test modules conveniently executable. The simplest use for this function is to include the following line at the end of a test script:

if __name__ == '__main__':
    unittest.main()

You can run tests with more detailed information by passing in the verbosity argument:

if __name__ == '__main__':
    unittest.main(verbosity=2)

The defaultTest argument is either the name of a single test or an iterable of test names to run if no test names are specified via argv. If not specified or None and no test names are provided via argv, all tests found in module are run.

The argv argument can be a list of options passed to the program, with the first element being the program name. If not specified or None, the values of sys.argv are used.

The testRunner argument can either be a test runner class or an already created instance of it. By default main calls sys.exit() with an exit code indicating success (0) or failure (1) of the tests run. An exit code of 5 indicates that no tests were run.

The testLoader argument has to be a TestLoader instance, and defaults to defaultTestLoader.

main supports use from the interactive interpreter by passing in the argument exit=False. This displays the result on standard output without calling sys.exit():

>>>

>>> from unittest import main
>>> main(module='test_module', exit=False)

The failfast, catchbreak and buffer parameters have the same effect as the same-name command-line options.

The warnings argument specifies the warning filter that should be used while running the tests. If it's not specified, it will remain None if a -W option is passed to python (see Warning control), otherwise it will be set to 'default'.

Calling main actually returns an instance of the TestProgram class. This stores the result of the tests run as the result attribute.

Changed in version 3.1: The exit parameter was added.

Changed in version 3.2: The verbosity, failfast, catchbreak, buffer and warnings parameters were added.

Changed in version 3.4: The defaultTest parameter was changed to also accept an iterable of test names.

load_tests Protocol

New in version 3.2.

Modules or packages can customize how tests are loaded from them during normal test runs or test discovery by implementing a function called load_tests.

If a test module defines load_tests it will be called by TestLoader.loadTestsFromModule() with the following arguments:

load_tests(loader, standard_tests, pattern)

where pattern is passed straight through from loadTestsFromModule. It defaults to None.

It should return a TestSuite.

loader is the instance of TestLoader doing the loading. standard_tests are the tests that would be loaded by default from the module. It is common for test modules to only want to add or remove tests from the standard set of tests. The third argument is used when loading packages as part of test discovery.

A typical load_tests function that loads tests from a specific set of TestCase classes may look like:

test_cases = (TestCase1, TestCase2, TestCase3)

def load_tests(loader, tests, pattern):
    suite = TestSuite()
    for test_class in test_cases:
        tests = loader.loadTestsFromTestCase(test_class)
        suite.addTests(tests)
    return suite

If discovery is started in a directory containing a package, either from the command line or by calling TestLoader.discover(), then the package __init__.py will be checked for load_tests. If that function does not exist, discovery will recurse into the package as though it were just another directory. Otherwise, discovery of the package's tests will be left up to load_tests, which is called with the following arguments:

load_tests(loader, standard_tests, pattern)

This should return a TestSuite representing all the tests from the package. (standard_tests will only contain tests collected from __init__.py.)

Because the pattern is passed into load_tests the package is free to continue (and potentially modify) test discovery. A 'do nothing' load_tests function for a test package would look like:

def load_tests(loader, standard_tests, pattern):
    # top level directory cached on loader instance
    this_dir = os.path.dirname(__file__)
    package_tests = loader.discover(start_dir=this_dir, pattern=pattern)
    standard_tests.addTests(package_tests)
    return standard_tests

Changed in version 3.5: Discovery no longer checks package names for matching pattern due to the impossibility of package names matching the default pattern.

Class and Module Fixtures

Class and module level fixtures are implemented in TestSuite. When the test suite encounters a test from a new class then tearDownClass() from the previous class (if there is one) is called, followed by setUpClass() from the new class.

Similarly if a test is from a different module from the previous test then tearDownModule from the previous module is run, followed by setUpModule from the new module.

After all the tests have run the final tearDownClass and tearDownModule are run.

Note that shared fixtures do not play well with [potential] features like test parallelization and they break test isolation. They should be used with care.

The default ordering of tests created by the unittest test loaders is to group all tests from the same modules and classes together. This will lead to setUpClass / setUpModule (etc.) being called exactly once per class and module. If you randomize the order, so that tests from different modules and classes are adjacent to each other, then these shared fixture functions may be called multiple times in a single test run.

Shared fixtures are not intended to work with suites with non-standard ordering. A BaseTestSuite still exists for frameworks that don't want to support shared fixtures.

If there are any exceptions raised during one of the shared fixture functions the test is reported as an error. Because there is no corresponding test instance an _ErrorHolder object (that has the same interface as a TestCase) is created to represent the error. If you are just using the standard unittest test runner then this detail doesn't matter, but if you are a framework author it may be relevant.

setUpClass and tearDownClass

These must be implemented as class methods:

import unittest

class Test(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls._connection = createExpensiveConnectionObject()

    @classmethod
    def tearDownClass(cls):
        cls._connection.destroy()

If you want the setUpClass and tearDownClass on base classes called then you must call up to them yourself. The implementations in TestCase are empty.

If an exception is raised during a setUpClass then the tests in the class are not run and the tearDownClass is not run. Skipped classes will not have setUpClass or tearDownClass run. If the exception is a SkipTest exception then the class will be reported as having been skipped instead of as an error.

setUpModule and tearDownModule

These should be implemented as functions:

def setUpModule():
    createConnection()

def tearDownModule():
    closeConnection()

If an exception is raised in a setUpModule then none of the tests in the module will be run and the tearDownModule will not be run. If the exception is a SkipTest exception then the module will be reported as having been skipped instead of as an error.

To add cleanup code that must be run even in the case of an exception, use addModuleCleanup:

unittest.addModuleCleanup(function, /, *args, **kwargs)

Add a function to be called after tearDownModule() to cleanup resources used during the test module. Functions will be called in reverse order to the order they are added (LIFO). They are called with any arguments and keyword arguments passed into addModuleCleanup() when they are added.

If setUpModule() fails, meaning that tearDownModule() is not called, then any cleanup functions added will still be called.

New in version 3.8.

classmethod unittest.enterModuleContext(cm)

Enter the supplied context manager. If successful, also add its __exit__() method as a cleanup function by addModuleCleanup() and return the result of the __enter__() method.

New in version 3.11.

unittest.doModuleCleanups()

This function is called unconditionally after tearDownModule(), or after setUpModule() if setUpModule() raises an exception.

It is responsible for calling all the cleanup functions added by addModuleCleanup(). If you need cleanup functions to be called prior to tearDownModule() then you can call doModuleCleanups() yourself.

doModuleCleanups() pops methods off the stack of cleanup functions one at a time, so it can be called at any time.

New in version 3.8.

Signal Handling

New in version 3.2.

The -c/--catch command-line option to unittest, along with the catchbreak parameter to unittest.main(), provide more friendly handling of control-C during a test run. With catch break behavior enabled control-C will allow the currently running test to complete, and the test run will then end and report all the results so far. A second control-C will raise a KeyboardInterrupt in the usual way.

The control-C handling signal handler attempts to remain compatible with code or tests that install their own signal.SIGINT handler. If the unittest handler is called but isn't the installed signal.SIGINT handler, i.e. it has been replaced by the system under test and delegated to, then it calls the default handler. This will normally be the expected behavior by code that replaces an installed handler and delegates to it. For individual tests that need unittest control-C handling disabled the removeHandler() decorator can be used.

There are a few utility functions for framework authors to enable control-C handling functionality within test frameworks.

unittest.installHandler()

Install the control-C handler. When a signal.SIGINT is received (usually in response to the user pressing control-C) all registered results have stop() called.

unittest.registerResult(result)

Register a TestResult object for control-C handling. Registering a result stores a weak reference to it, so it doesn't prevent the result from being garbage collected.

Registering a TestResult object has no side-effects if control-C handling is not enabled, so test frameworks can unconditionally register all results they create independently of whether or not handling is enabled.

unittest.removeResult(result)

Remove a registered result. Once the result is removed stop() will no longer be called on the result object in response to control-C.

unittest.removeHandler(function=None)

When called without any arguments this function removes an installed control-C handler. This function can also be used as a test decorator to temporarily remove handlers while the test is executed:

@unittest.removeHandler
def test_signal_handling(self):
    ...
