# 14. Testing with Pytest

## Introduction

pytest can be used for all types and levels of software testing. Many projects, among them Mozilla and Dropbox, have switched from unittest or nose to pytest.

Live Python training

Enjoying this page? We offer live Python training courses covering the content of this site.

Enrol here

## A Simple First Example with Pytest

Test files which pytest will use for testing have to start with test_ or end with _test.py. We will demonstrate the way of working by writing a test file test_fibonacci.py for a module fibonacci_p.py. Both files are in one directory.

The first file is the file which should be tested. We assume that it is saved as fibonacci_p.py:

```python
def fib(n):
    old, new = 0, 1
    for _ in range(n):
        old, new = new, old + new
    return old
```


Now, we have to provide the code for the file test_fibonacci.py. This file will be used by 'pytest':

```python
from fibonacci_p import fib

def test_fib():
    assert fib(0) == 0
    assert fib(1) == 1
    assert fib(10) == 55
```


We call pytest in a command shell in the directory where the two files shown above reside:

```shell
$ pytest
```

The result of this code can be seen in the following:

```
============================= test session starts ==============================
platform linux -- Python 3.7.1, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
rootdir: /home/bernd/, inifile:
plugins: remotedata-0.3.1, openfiles-0.3.1, doctestplus-0.2.0, arraydiff-0.3
collected 1 item

test_fibonacci.py .                                                      [100%]

=========================== 1 passed in 0.01 seconds ===========================
```


We now create an erroneous version of fib: we change the two start values from 0 and 1 to 2 and 1. This is the beginning of the Lucas sequence, but as we want to implement the Fibonacci sequence, this is wrong. This way we can study how pytest behaves in this case:

```python
def fib(n):
    old, new = 2, 1
    for _ in range(n):
        old, new = new, old + new
    return old
```


Calling 'pytest' with this erroneous implementation of fibonacci gives us the following results:

```
$ pytest
============================= test session starts ==============================
platform linux -- Python 3.7.1, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
rootdir: /home/bernd/, inifile:
plugins: remotedata-0.3.1, openfiles-0.3.1, doctestplus-0.2.0, arraydiff-0.3
collected 1 item

test_fibonacci.py F                                                      [100%]

=================================== FAILURES ===================================
___________________________________ test_fib ___________________________________

    def test_fib():
>       assert fib(0) == 0
E       assert 2 == 0
E        +  where 2 = fib(0)

test_fibonacci.py:5: AssertionError
=========================== 1 failed in 0.03 seconds ===========================
```

## Another Pytest Example (1)

We will get closer to 'reality' in our next example. In a real-life scenario we will usually have more than one file, and for each file we may have a corresponding test file. Each test file may contain various tests. Our example folder ex2 contains the following files.

The files to be tested:

- fibonacci.py
- foobar_plus.py
- foobar.py

The test files:

- test_fibonacci.py
- test_foobar_plus.py
- test_foobar.py

We start pytest in the directory ex2 and get the following results:

```
$ pytest
==================== test session starts ======================
platform linux -- Python 3.7.3, pytest-4.3.1, py-1.8.0, pluggy-0.9.0
rootdir: /home/bernd/ex2, inifile:
plugins: remotedata-0.3.1, openfiles-0.3.2, doctestplus-0.3.0, arraydiff-0.3
collected 4 items

test_fibonacci.py .                                                      [ 25%]
test_foobar.py ..                                                        [ 75%]
test_foobar_plus.py .                                                    [100%]

==================== 4 passed in 0.05 seconds =================
```



## Another Pytest Example (2)

It is possible to execute only tests which contain a given substring in their name. The substring is determined by a Python expression. This can be achieved with the command line option -k:

```shell
pytest -k <expression>
```

The call

```shell
pytest -k foobar
```

will only execute the tests whose names (including the name of the test file) contain the substring 'foobar'. In this case these are the tests in test_foobar.py and test_foobar_plus.py:

```
$ pytest -k foobar
============================= test session starts ==============================
platform linux -- Python 3.7.1, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
rootdir: /home/bernd/ex2, inifile:
plugins: remotedata-0.3.1, openfiles-0.3.1, doctestplus-0.2.0, arraydiff-0.3
collected 3 items / 1 deselected

test_foobar.py .                                                         [ 50%]
test_foobar_plus.py .                                                    [100%]

==================== 2 passed, 1 deselected in 0.01 seconds ====================
```

We will now select only the tests containing 'plus' or 'fibo' in their names:

```
$ pytest -k 'plus or fibo'
============================= test session starts ==============================
platform linux -- Python 3.7.1, pytest-4.0.2, py-1.7.0, pluggy-0.8.0
rootdir: /home/bernd/ex2, inifile:
plugins: remotedata-0.3.1, openfiles-0.3.1, doctestplus-0.2.0, arraydiff-0.3
collected 3 items / 1 deselected

test_fibonacci.py .                                                      [ 50%]
test_foobar_plus.py .                                                    [100%]

==================== 2 passed, 1 deselected in 0.01 seconds ====================
```


## Markers in Pytest

Test functions can be marked or tagged by decorating them with @pytest.mark.&lt;markername&gt;.

Such a marker can be used to select or deselect test functions. You can see the markers which exist for your test suite by typing

```
$ pytest --markers
@pytest.mark.openfiles_ignore: Indicate that open files should be ignored for this test

@pytest.mark.remote_data: Apply to tests that require data from remote servers

@pytest.mark.internet_off: Apply to tests that should only run when network access is deactivated

@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see Filter warnings

@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.

@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. See skipping.

@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See Skipping.

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. See Parametrize for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. See Fixtures

@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.

@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
```

This list also contains custom-defined markers!
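One of the built-in markers listed above, xfail, marks a test that we expect to fail; pytest then reports it as 'xfailed' instead of 'failed'. A minimal sketch (broken_upper is a made-up function used only for illustration):

```python
import pytest

def broken_upper(s):
    # deliberately wrong: returns lowercase instead of uppercase
    return s.lower()

# pytest reports this test as 'xfailed' rather than as a failure
@pytest.mark.xfail(reason="broken_upper is known to be wrong")
def test_broken_upper():
    assert broken_upper("foo") == "FOO"
```

Running pytest with -rx shows the reasons for the expected failures in the summary.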


## Registering Markers

Since pytest version 4.5, markers have to be registered. They can be registered in the configuration file pytest.ini, placed in the test directory. We register the markers 'slow' and 'crazy', which we will use in the following example:

```ini
[pytest]
markers =
    slow: mark a test as a 'slow' (slowly) running test
    crazy: stupid function to test :-)
```

We add a recursive and inefficient version rfib to our fibonacci module and mark the corresponding test function test_rfib with slow; besides this, it is marked with crazy as well:


```python
# content of fibonacci.py

def fib(n):
    old, new = 0, 1
    for i in range(n):
        old, new = new, old + new
    return old

def rfib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return rfib(n-1) + rfib(n-2)
```


The corresponding test file:

```python
""" content of test_fibonacci.py """
import pytest
from fibonacci import fib, rfib

def test_fib():
    assert fib(0) == 0
    assert fib(1) == 1
    assert fib(34) == 5702887

@pytest.mark.crazy
@pytest.mark.slow
def test_rfib():
    assert rfib(0) == 0
    assert rfib(1) == 1
    assert rfib(34) == 5702887
```


Besides this we will add the files foobar.py and test_foobar.py as well. We mark the test functions in test_foobar.py as crazy.

```python
# content of foobar.py

def foo():
    return "foo"

def bar():
    return "bar"
```


This is the corresponding test file:

```python
# content of test_foobar.py
import pytest
from foobar import foo, bar

@pytest.mark.crazy
def test_foo():
    assert foo() == "foo"

@pytest.mark.crazy
def test_bar():
    assert bar() == "bar"
```


We will now run tests depending on the markers. Let's first run only the tests marked as slow:

```
$ pytest -svv -k "slow"
===================================== test session starts ======================================
platform linux -- Python 3.7.1, pytest-4.0.2, py-1.7.0, pluggy-0.8.0 -- /home/bernd/python
cachedir: .pytest_cache
rootdir: /home/bernd/ex_tagging, inifile:
plugins: remotedata-0.3.1, openfiles-0.3.1, doctestplus-0.2.0, arraydiff-0.3
collected 4 items / 3 deselected

test_fibonacci.py::test_rfib PASSED

============================ 1 passed, 3 deselected in 7.05 seconds ============================
```

We will now run only the tests which are not marked as slow or crazy:

```
$ pytest -svv -k "not slow and not crazy"
======================= test session starts =======================
platform linux -- Python 3.7.1, pytest-4.0.2, py-1.7.0, pluggy-0.8.0 -- /home/bernd/
cachedir: .pytest_cache
rootdir: /home/bernd//ex_tagging, inifile:
plugins: remotedata-0.3.1, openfiles-0.3.1, doctestplus-0.2.0, arraydiff-0.3
collected 4 items / 3 deselected

test_fibonacci.py::test_fib PASSED

===================== 1 passed, 3 deselected in 0.01 seconds ====================
```


## skipif Marker

If you wish to skip a test function conditionally, you can use skipif. In the following example the function test_foo is marked with skipif. The function will not be executed if the Python version is 3.6.x:

```python
import pytest
import sys
from foobar import foo, bar

@pytest.mark.skipif(
    sys.version_info[0] == 3 and sys.version_info[1] == 6,
    reason="Python version has to be higher than 3.5!")
def test_foo():
    assert foo() == "foo"

@pytest.mark.crazy
def test_bar():
    assert bar() == "bar"
```


Instead of a conditional skip we can also use an unconditional skip, i.e. the test is always skipped. We can add a reason as well. The following example shows how this can be accomplished by marking the function test_bar with a skip marker. The reason we give is that it is "even fooer than foo":

```python
import pytest
import sys
from foobar import foo, bar

@pytest.mark.skipif(
    sys.version_info[0] == 3 and sys.version_info[1] == 6,
    reason="Python version has to be higher than 3.5!")
def test_foo():
    assert foo() == "foo"

@pytest.mark.skip(reason="Even fooer than foo, so we skip!")
def test_bar():
    assert bar() == "bar"
```


If we call pytest on this code, we get the following output:

```
$ pytest -v
================ test session starts ===============
platform linux -- Python 3.6.9, pytest-5.0.1, py-1.8.0, pluggy-0.12.0 -- /home/bernd/python
cachedir: .pytest_cache
rootdir: /home/bernd/ex_tagging2, inifile: pytest.ini
collected 4 items

test_fibonacci.py::test_fib PASSED                                       [ 25%]
test_fibonacci.py::test_rfib PASSED                                      [ 50%]
test_foobar.py::test_foo SKIPPED                                         [ 75%]
test_foobar.py::test_bar PASSED                                          [100%]

============= 3 passed, 1 skipped in 0.01 seconds =============
```

## Parametrization with Markers

We will demonstrate parametrization with markers with our Fibonacci function.

```python
# content of fibonacci.py

def fib(n):
    old, new = 0, 1
    for _ in range(n):
        old, new = new, old + new
    return old
```

We write a pytest test function which will test against this fibonacci function with various values:

```python
# content of the file test_fibonacci.py
import pytest
from fibonacci import fib

@pytest.mark.parametrize(
    'n, res',
    [(0, 0), (1, 1), (2, 1), (3, 2), (4, 3), (5, 5), (6, 8)])
def test_fib(n, res):
    assert fib(n) == res
```

When we call pytest, we get the following results:

```
$ pytest -v
============================ test session starts ============================
platform linux -- Python 3.6.9, pytest-5.0.1, py-1.8.0, pluggy-0.12.0 -- /home/bernd/python
cachedir: .pytest_cache
rootdir: /home/bernd/ex_parametrization1
collected 7 items

test_fibonacci.py::test_fib[0-0] PASSED              [ 14%]
test_fibonacci.py::test_fib[1-1] PASSED              [ 28%]
test_fibonacci.py::test_fib[2-1] PASSED              [ 42%]
test_fibonacci.py::test_fib[3-2] PASSED              [ 57%]
test_fibonacci.py::test_fib[4-3] PASSED              [ 71%]
test_fibonacci.py::test_fib[5-5] PASSED              [ 85%]
test_fibonacci.py::test_fib[6-8] PASSED              [100%]

========================== 7 passed in 0.01 seconds =========================
```


The numbers inside the square brackets in front of the word "PASSED" are the values of n and res.
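As the marker list stated earlier, argnames may also name several parameters at once; each entry of argvalues is then a tuple. A small sketch with a made-up add function:

```python
import pytest

def add(a, b):
    return a + b

# pytest generates one test per (a, b, expected) tuple,
# reported as test_add[1-2-3], test_add[0-0-0], test_add[-1-1-0]
@pytest.mark.parametrize('a, b, expected',
                         [(1, 2, 3), (0, 0, 0), (-1, 1, 0)])
def test_add(a, b, expected):
    assert add(a, b) == expected
```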

## Prints in Functions

If the functions under test contain print calls, we will not see their output when running pytest, unless we call pytest with the option -s. To demonstrate this, we add a print line to our fibonacci function:

```python
def fib(n):
    old, new = 0, 1
    for _ in range(n):
        old, new = new, old + new
    print("result: ", old)
    return old
```


Calling "pytest -s -v" will deliver the following output:

```
$ pytest -s -v
=============== test session starts ==============
platform linux -- Python 3.6.9, pytest-5.0.1, py-1.8.0, pluggy-0.12.0 -- /home/bernd/python
cachedir: .pytest_cache
rootdir: /home/bernd/ex_parametrization1
collected 7 items

test_fibonacci.py::test_fib[0-0] result:  0
PASSED
test_fibonacci.py::test_fib[1-1] result:  1
PASSED
test_fibonacci.py::test_fib[2-1] result:  1
PASSED
test_fibonacci.py::test_fib[3-2] result:  2
PASSED
test_fibonacci.py::test_fib[4-3] result:  3
PASSED
test_fibonacci.py::test_fib[5-5] result:  5
PASSED
test_fibonacci.py::test_fib[6-8] result:  8
PASSED

============= 7 passed in 0.01 seconds =================
```
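Besides -s, pytest can capture such output inside a test via its built-in capsys fixture, so the printed text itself can be asserted on. A minimal sketch (no import is needed; pytest injects capsys into the test):

```python
def fib(n):
    old, new = 0, 1
    for _ in range(n):
        old, new = new, old + new
    print("result: ", old)
    return old

def test_fib_output(capsys):
    assert fib(6) == 8
    captured = capsys.readouterr()        # everything printed so far
    assert "result:  8" in captured.out   # two spaces: print adds one after the comma
```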



## Command Line Options / Fixtures

We will write a test for our fibonacci function which depends on a command line argument. We can add custom command line options to pytest with the pytest_addoption hook, which gives us access to the command line parser. First we have to write a file conftest.py containing the hook function pytest_addoption and the fixture cmdopt:


```python
import pytest

def pytest_addoption(parser):
    parser.addoption("--cmdopt",
                     action="store",
                     default="full",
                     help="'num' of tests or full")

@pytest.fixture
def cmdopt(request):
    return request.config.getoption("--cmdopt")
```


The code for our fibonacci test module looks like this. The test_fib function has a parameter cmdopt which receives the value of the command line option:

```python
from fibonacci import fib

results = [0, 1, 1, 2, 3, 5, 8, 13, 21,
           34, 55, 89, 144, 233, 377]

def test_fib(cmdopt):
    if cmdopt == "full":
        num = len(results)
    else:
        num = len(results)
        if int(cmdopt) < len(results):
            num = int(cmdopt)
    print(f"running {num:2d} tests!")
    for i in range(num):
        assert fib(i) == results[i]
```

We can call it now with various options, as we can see in the following:

```
$ pytest -q --cmdopt=full -v -s
============ test session starts ================
platform linux -- Python 3.6.9, pytest-5.0.1, py-1.8.0, pluggy-0.12.0
rootdir: /home/bernd/Dropbox (Bodenseo)/kurse/python_en/examples/pytest/ex_cmd_line
collected 1 item

test_fibonacci.py running 15 tests!
.

============= 1 passed in 0.01 seconds ============
```

```
$ pytest -q --cmdopt=6 -v -s
============= test session starts ==============
platform linux -- Python 3.6.9, pytest-5.0.1, py-1.8.0, pluggy-0.12.0
rootdir: /home/bernd/Dropbox (Bodenseo)/kurse/python_en/examples/pytest/ex_cmd_line
collected 1 item

test_fibonacci.py running  6 tests!
.

=========================== 1 passed in 0.01 seconds ================================
```


Let's put an error in our test results: results = [0, 1, 1, 2, 3, 1001, 8, …]. Calling pytest with pytest -q --cmdopt=10 -v -s gives us the following output:

```
$ pytest -q --cmdopt=10 -v -s
================== test session starts ==================
platform linux -- Python 3.6.9, pytest-5.0.1, py-1.8.0, pluggy-0.12.0
rootdir: /home/bernd/ex_cmd_line
collected 1 item

test_fibonacci.py running 10 tests!
F

=============== FAILURES ===================
_______________ test_fib ___________________

cmdopt = '10'

    def test_fib(cmdopt):
        if cmdopt == "full":
            num = len(results)
        else:
            num = len(results)
            if int(cmdopt) < len(results):
                num = int(cmdopt)
        print(f"running {num:2d} tests!")
        for i in range(num):
>           assert fib(i) == results[i]
E           assert 5 == 1001
E            +  where 5 = fib(5)

test_fibonacci.py:16: AssertionError
================ 1 failed in 0.03 seconds =================
```

