Unit testing is the first level of software testing, where the smallest testable parts of a piece of software are exercised in isolation. It is used to validate that each unit of the software performs as designed, and it is a white-box testing method. The unittest test framework is Python's xUnit-style framework. A test fixture is used as a baseline for running tests, providing a known and fixed environment so that results are repeatable.
pytest supports running Python unittest-based tests out of the box. It's meant for leveraging existing unittest-based test suites to use pytest as a test runner, and it also allows incrementally adapting the test suite to take full advantage of pytest's features.
To run an existing unittest-style test suite using pytest, type:
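For example, assuming the existing suite lives under a tests directory (the path here is only illustrative):

```
pytest tests
```

Running pytest with no arguments from the project root, or pointing it at a single test file, works the same way.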
pytest will automatically collect unittest.TestCase subclasses and their test methods in test_*.py or *_test.py files.
Almost all unittest features are supported:
@unittest.skip style decorators;
setUp/tearDown;
setUpClass/tearDownClass;
setUpModule/tearDownModule;
Up to this point pytest does not have support for the following features:
load_tests protocol;
subtests;
Benefits out of the box
By running your test suite with pytest you can make use of several features, in most cases without having to modify existing code:
Obtain more informative tracebacks;
stdout and stderr capturing;
Test selection options using -k and -m flags;
Stopping after the first (or N) failures;
--pdb command-line option for debugging on test failures (see note below);
Distribute tests to multiple CPUs using the pytest-xdist plugin;
Use plain assert statements instead of self.assert* functions (unittest2pytest is immensely helpful in this; see the sketch after this list);
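As a small sketch of that last point (the test names and the summed list are made up for illustration), the same check can be written with a unittest assertion method or with a plain assert statement, which pytest's assertion rewriting turns into an informative failure report:

```python
import unittest

class TotalTest(unittest.TestCase):
    def test_sum_unittest_style(self):
        # unittest-style assertion method
        self.assertEqual(sum([1, 2, 3]), 6)

def test_sum_pytest_style():
    # plain assert; on failure pytest reports both sides of the comparison
    assert sum([1, 2, 3]) == 6
```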
pytest features in unittest.TestCase subclasses
The following pytest features work in unittest.TestCase subclasses:
Marks: skip, skipif, xfail;
Auto-use fixtures;
The following pytest features do not work, and probably never will, due to different design philosophies:
Fixtures (except for autouse fixtures, see below);
Parametrization;
Custom hooks;
Third party plugins may or may not work well, depending on the plugin and the test suite.
Mixing pytest fixtures into unittest.TestCase subclasses using marks
Running your unittest suite with pytest allows you to use its fixture mechanism with unittest.TestCase style tests. Assuming you have at least skimmed the pytest fixture features, let's jump-start into an example that integrates a pytest db_class fixture, setting up a class-cached database object, and then referencing it from a unittest-style test:
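A sketch of such a fixture, closely following the example in the pytest documentation (DummyDB is just a stand-in class), placed in conftest.py:

```python
# content of conftest.py
import pytest

class DummyDB:
    """A stand-in for a real database object."""

@pytest.fixture(scope="class")
def db_class(request):
    # cache a single DummyDB instance on the requesting test class
    request.cls.db = DummyDB()
```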
This defines a fixture function db_class which, if used, is called once for each test class and which sets the class-level db attribute to a DummyDB instance. The fixture function achieves this by receiving a special request object, which gives access to the requesting test context, such as the cls attribute denoting the class from which the fixture is used. This architecture decouples fixture writing from actual test code and allows re-use of the fixture by a minimal reference, the fixture name. So let's write an actual unittest.TestCase class using our fixture definition:
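A sketch of the test class (the deliberately failing asserts are there so the traceback discussed below shows self.db):

```python
# content of test_unittest_db.py
import unittest
import pytest

@pytest.mark.usefixtures("db_class")
class MyTest(unittest.TestCase):
    def test_method1(self):
        assert hasattr(self, "db")
        assert 0, self.db  # fail on purpose to inspect self.db in the traceback

    def test_method2(self):
        assert 0, self.db  # fail on purpose to inspect self.db in the traceback
```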
The @pytest.mark.usefixtures('db_class') class decorator makes sure that the pytest fixture function db_class is called once per class. Due to the deliberately failing assert statements, we can take a look at the self.db values in the traceback:
This default pytest traceback shows that the two test methods share the same self.db instance, which was our intention when writing the class-scoped fixture function above.
Using autouse fixtures and accessing other fixtures
Although it’s usually better to explicitly declare use of fixtures you needfor a given test, you may sometimes want to have fixtures that areautomatically used in a given context. After all, the traditionalstyle of unittest-setup mandates the use of this implicit fixture writingand chances are, you are used to it or like it.
You can flag fixture functions with @pytest.fixture(autouse=True) and define the fixture function in the context where you want it used. Let's look at an initdir fixture which makes all test methods of a TestCase class execute in a temporary directory with a pre-initialized samplefile.ini. Our initdir fixture itself uses the pytest builtin tmpdir fixture to delegate the creation of a per-test temporary directory:
Due to the autouse flag, the initdir fixture function will be used for all methods of the class where it is defined. This is a shortcut for using a @pytest.mark.usefixtures('initdir') marker on the class, as in the previous example.
Running this test module …:
… gives us one passed test because the initdir fixture function was executed ahead of test_method.
Note
unittest.TestCase methods cannot directly receive fixture arguments, as implementing that is likely to impact the ability to run general unittest.TestCase test suites.
The above usefixtures and autouse examples should help to mix pytest fixtures into unittest suites. You can also gradually move away from subclassing unittest.TestCase to plain asserts, and then start to benefit from the full pytest feature set step by step.
Note
Due to architectural differences between the two frameworks, setup and teardown for unittest-based tests is performed during the call phase of testing instead of in pytest's standard setup and teardown stages. This can be important to understand in some situations, particularly when reasoning about errors. For example, if a unittest-based suite exhibits errors during setup, pytest will report no errors during its setup phase and will instead raise the error during call.
This is an update to Doug Hellmann's excellent PyMOTW article found here:
The code and examples here have been updated by Corey Goldberg to reflect Python 3.3.
Further reading:
unittest - Automated testing framework
Python's unittest module, sometimes referred to as 'PyUnit', is based on the XUnit framework design by Kent Beck and Erich Gamma. The same pattern is repeated in many other languages, including C, Perl, Java, and Smalltalk. The framework implemented by unittest supports fixtures, test suites, and a test runner to enable automated testing for your code.
Basic Test Structure
Tests, as defined by unittest, have two parts: code to manage test 'fixtures', and the test itself. Individual tests are created by subclassing TestCase and overriding or adding appropriate methods. For example:
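A minimal sketch of the SimplisticTest referred to below:

```python
import unittest

class SimplisticTest(unittest.TestCase):

    def test(self):
        self.assertTrue(True)
```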
In this case, SimplisticTest has a single test() method, which would fail if True is ever False.
Running Tests
The easiest way to run unittest tests is to include:
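```python
# standard unittest main-block idiom
if __name__ == '__main__':
    unittest.main()
```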
at the bottom of each test file, then simply run the script directly from the command line:
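Assuming the SimplisticTest example above is saved as test_simple.py (a file name chosen here only for illustration), the run looks roughly like this; the reported timing will of course vary:

```
$ python test_simple.py
.
----------------------------------------------------------------------
Ran 1 test in 0.000s

OK
```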
This abbreviated output includes the amount of time the tests took, along with a status indicator for each test (the '.' on the first line of output means that a test passed). For more detailed test results, pass the -v option when running the script.
Test Outcomes
Tests have 3 possible outcomes:
The test passes (reported as ok).
The test does not pass, and raises an AssertionError exception (reported as FAIL).
The test raises an exception other than AssertionError (reported as ERROR).
There is no explicit way to cause a test to 'pass', so a test's status depends on the presence (or absence) of an exception.
When a test fails or generates an error, the traceback is included in the output.
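For instance, a test module with one passing, one failing, and one erroring test (a sketch in the spirit of the original article's example) might look like this:

```python
import unittest

class OutcomesTest(unittest.TestCase):

    def test_pass(self):
        self.assertTrue(True)

    def test_fail(self):
        self.assertTrue(False)  # fails with an AssertionError

    def test_error(self):
        raise RuntimeError('forced error for demonstration')
```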
In the example above, test_fail() fails and the traceback shows the line with the failure code. It is up to the person reading the test output to look at the code to figure out the semantic meaning of the failed test, though. To make it easier to understand the nature of a test failure, the assert*() methods all accept an argument msg, which can be used to produce a more detailed error message.
Asserting Truth
Most tests assert the truth of some condition. There are a few different ways to write truth-checking tests, depending on the perspective of the test author and the desired outcome of the code being tested. If the code produces a value which can be evaluated as true, the method assertTrue() should be used. If the code produces a false value, the method assertFalse() makes more sense.
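For example (a small sketch; any truthy or falsy value behaves the same way):

```python
import unittest

class TruthTest(unittest.TestCase):

    def test_assert_true(self):
        self.assertTrue(1)   # passes for any truthy value

    def test_assert_false(self):
        self.assertFalse(0)  # passes for any falsy value
```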
Assertion Methods
The TestCase class provides a number of methods to check for and report failures:
Common Assertions
| Method | Checks that |
| --- | --- |
| assertTrue(x, msg=None) | bool(x) is True |
| assertFalse(x, msg=None) | bool(x) is False |
| assertIsNone(x, msg=None) | x is None |
| assertIsNotNone(x, msg=None) | x is not None |
| assertEqual(a, b, msg=None) | a == b |
| assertNotEqual(a, b, msg=None) | a != b |
| assertIs(a, b, msg=None) | a is b |
| assertIsNot(a, b, msg=None) | a is not b |
| assertIn(a, b, msg=None) | a in b |
| assertNotIn(a, b, msg=None) | a not in b |
| assertIsInstance(a, b, msg=None) | isinstance(a, b) |
| assertNotIsInstance(a, b, msg=None) | not isinstance(a, b) |
Other Assertions
| Method | Checks that |
| --- | --- |
| assertAlmostEqual(a, b, places=7, msg=None, delta=None) | round(a - b, places) == 0 (or abs(a - b) <= delta) |
| assertNotAlmostEqual(a, b, places=7, msg=None, delta=None) | round(a - b, places) != 0 (or abs(a - b) > delta) |
| assertGreater(a, b, msg=None) | a > b |
| assertGreaterEqual(a, b, msg=None) | a >= b |
| assertLess(a, b, msg=None) | a < b |
| assertLessEqual(a, b, msg=None) | a <= b |
| assertRegex(text, regexp, msg=None) | regexp.search(text) |
| assertNotRegex(text, regexp, msg=None) | not regexp.search(text) |
| assertCountEqual(a, b, msg=None) | a and b contain the same elements, regardless of order |
| assertMultiLineEqual(a, b, msg=None) | multi-line strings a and b are equal |
| assertSequenceEqual(a, b, msg=None) | sequences a and b are equal |
| assertListEqual(a, b, msg=None) | lists a and b are equal |
| assertTupleEqual(a, b, msg=None) | tuples a and b are equal |
| assertSetEqual(a, b, msg=None) | sets a and b are equal |
| assertDictEqual(a, b, msg=None) | dicts a and b are equal |
Failure Messages
These assertions are handy, since the values being compared appear in the failure message when a test fails.
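For example, two deliberately failing comparisons (a sketch in the spirit of the original article):

```python
import unittest

class InequalityTest(unittest.TestCase):

    def test_equal(self):
        self.assertNotEqual(1, 3 - 2)  # fails, because 1 == 1

    def test_not_equal(self):
        self.assertEqual(2, 3 - 2)     # fails, because 2 != 1
```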
When these tests are run, the failure messages include the values that did not compare as expected.
All the assert methods above accept a msg argument that, if specified, is used as the error message on failure.
Testing for Exceptions (and Warnings)
The TestCase class provides methods to check for expected exceptions and warnings:
| Method | Checks that |
| --- | --- |
| assertRaises(exc, fun, *args, **kwds) | fun(*args, **kwds) raises exc |
| assertRaisesRegex(exc, r, fun, *args, **kwds) | fun(*args, **kwds) raises exc and the message matches regex r |
| assertWarns(warn, fun, *args, **kwds) | fun(*args, **kwds) raises warning warn |
| assertWarnsRegex(warn, r, fun, *args, **kwds) | fun(*args, **kwds) raises warning warn and the message matches regex r |
As previously mentioned, if a test raises an exception other than AssertionError, it is treated as an error. This is very useful for uncovering mistakes while you are modifying code that has existing test coverage. There are circumstances, however, in which you want the test to verify that some code does produce an exception: for example, when an invalid value is given to an attribute of an object. In such cases, assertRaises() makes the code clearer than trapping the exception yourself. Compare these two tests:
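A sketch of the comparison (raises_error is a hypothetical helper that always raises ValueError):

```python
import unittest

def raises_error(*args, **kwds):
    raise ValueError('Invalid value: %r %r' % (args, kwds))

class ExceptionTest(unittest.TestCase):

    def test_trap_locally(self):
        # catch the exception by hand and fail explicitly if it never appears
        try:
            raises_error('a', b='c')
        except ValueError:
            pass
        else:
            self.fail('Did not see ValueError')

    def test_assert_raises(self):
        # let assertRaises do the same check in one line
        self.assertRaises(ValueError, raises_error, 'a', b='c')
```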
The results for both are the same, but the second test, using assertRaises(), is more succinct.
Test Fixtures
Fixtures are resources needed by a test. For example, if you are writing several tests for the same class, those tests all need an instance of that class to use for testing. Other test fixtures include database connections and temporary files (many people would argue that using external resources makes such tests not 'unit' tests, but they are still tests and still useful). TestCase includes a special hook to configure and clean up any fixtures needed by your tests. To configure the fixtures, override setUp(). To clean up, override tearDown().
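A sketch showing setUp() and tearDown() wrapped around a single test:

```python
import unittest

class FixturesTest(unittest.TestCase):

    def setUp(self):
        print('In setUp()')
        self.fixture = range(1, 10)

    def tearDown(self):
        print('In tearDown()')
        del self.fixture

    def test(self):
        print('In test()')
        self.assertEqual(self.fixture, range(1, 10))
```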
When this sample test is run, the print output shows the order of execution of the fixture and test methods: setUp() runs before the test, and tearDown() runs after it.
Test Suites
The standard library documentation describes how to organize test suites manually. I generally do not use test suites directly, because I prefer to build the suites automatically (these are automated tests, after all). Automating the construction of test suites is especially useful for large code bases, in which related tests are not all in the same place. Tools such as nose make it easier to manage tests when they are spread over multiple files and directories.