
Project description

pytest-check

A pytest plugin that allows multiple failures per test.


Normally, a test function will fail and stop running at the first failed assert. That's totally fine for many kinds of software tests. However, there are times when you'd like to check more than one thing, and you'd really like to know the results of each check, even if one of them fails.

pytest-check allows multiple failed "checks" per test function, so you can see the whole picture of what's going wrong.

Installation

From PyPI:

$ pip install pytest-check

From conda (conda-forge):

$ conda install -c conda-forge pytest-check

Example

Quick example of where you might want multiple checks:

import httpx
from pytest_check import check

def test_httpx_get():
    r = httpx.get('https://www.example.org/')
    # bail if bad status code
    assert r.status_code == 200
    # but if we get to here
    # then check everything else without stopping
    with check:
        assert r.is_redirect is False
    with check:
        assert r.encoding == 'utf-8'
    with check:
        assert 'Example Domain' in r.text

Import vs fixture

The example above used import: from pytest_check import check.

You can also grab check as a fixture with no import:

def test_httpx_get(check):
    r = httpx.get('https://www.example.org/')
    ...
    with check:
        assert r.is_redirect is False
    ...

Validation functions

check also provides helper functions for common checks. These helpers do NOT need to be inside a with check: block.

  • check.equal - a == b
  • check.not_equal - a != b
  • check.is_ - a is b
  • check.is_not - a is not b
  • check.is_true - bool(x) is True
  • check.is_false - bool(x) is False
  • check.is_none - x is None
  • check.is_not_none - x is not None
  • check.is_in - a in b
  • check.is_not_in - a not in b
  • check.is_instance - isinstance(a, b)
  • check.is_not_instance - not isinstance(a, b)
  • check.almost_equal - a == pytest.approx(b, rel, abs); see pytest.approx
  • check.not_almost_equal - a != pytest.approx(b, rel, abs); see pytest.approx
  • check.greater - a > b
  • check.greater_equal - a >= b
  • check.less - a < b
  • check.less_equal - a <= b
  • check.between - a < b < c
  • check.raises - func raises given exception similar to pytest.raises
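
For illustration, here is a small, made-up example exercising a few of the numeric and type helpers listed above (the values are arbitrary and only show the call signatures):

from pytest_check import check

def test_comparison_helpers():
    check.greater(10, 2)                # 10 > 2
    check.less_equal(2, 2)              # 2 <= 2
    check.almost_equal(0.1 + 0.2, 0.3)  # compared via pytest.approx
    check.is_instance("text", str)      # isinstance("text", str)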

The httpx example can be rewritten with helper functions:

import httpx
from pytest_check import check

def test_httpx_get_with_helpers():
    r = httpx.get('https://www.example.org/')
    assert r.status_code == 200
    check.is_false(r.is_redirect)
    check.equal(r.encoding, 'utf-8')
    check.is_in('Example Domain', r.text)

Which you use is personal preference.

Defining your own check functions

The @check.check_func decorator lets you wrap any test helper that contains an assert statement, turning it into a non-blocking check function.

from pytest_check import check

@check.check_func
def is_four(a):
    assert a == 4

def test_all_four():
    is_four(1)
    is_four(2)
    is_four(3)
    is_four(4)
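
The same pattern works for helpers that take several arguments. As a sketch, here is a hypothetical is_between helper (not part of pytest-check, just something you would write yourself):

from pytest_check import check

@check.check_func
def is_between(value, low, high):
    # a plain assert inside the helper; the decorator records a failure
    # instead of stopping the test
    assert low <= value <= high, f"{value} not in range [{low}, {high}]"

def test_ranges():
    is_between(5, 1, 10)   # passes
    is_between(50, 1, 10)  # recorded as a failure, but the test keeps running
    is_between(7, 1, 10)   # still executed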

Using raises as a context manager

raises is used as a context manager, much like pytest.raises. The main difference is that a failure to raise the expected exception won't stop the execution of the test method.

from pytest_check import check

def test_raises():
    with check.raises(AssertionError):
        x = 3
        assert 1 < x < 4
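
Because the check is non-blocking, a missing exception is simply recorded as a failure and the rest of the test still runs. A minimal sketch of that behavior:

from pytest_check import check

def test_continues_after_missing_exception():
    with check.raises(ValueError):
        pass  # nothing is raised here, so this is recorded as a failed check
    # unlike pytest.raises, execution continues past the block
    check.equal(1, 1)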

Pseudo-tracebacks

With check, a single test can have multiple failures. Including the full traceback for every failure could make for very long output. To keep the output more concise, pytest-check implements a shorter version, which we call pseudo-tracebacks.

For example, take this test:

from pytest_check import check

def test_example():
    a = 1
    b = 2
    c = [2, 4, 6]
    check.greater(a, b)
    check.less_equal(b, a)
    check.is_in(a, c, "Is 1 in the list")
    check.is_not_in(b, c, "make sure 2 isn't in list")

This will result in:

=================================== FAILURES ===================================
_________________________________ test_example _________________________________
FAILURE:
assert 1 > 2
  test_check.py, line 14, in test_example() -> check.greater(a, b)
FAILURE:
assert 2 <= 1
  test_check.py, line 15, in test_example() -> check.less_equal(b, a)
FAILURE: Is 1 in the list
assert 1 in [2, 4, 6]
  test_check.py, line 16, in test_example() -> check.is_in(a, c, "Is 1 in the list")
FAILURE: make sure 2 isn't in list
assert 2 not in [2, 4, 6]
  test_check.py, line 17, in test_example() -> check.is_not_in(b, c, "make sure 2 isn't in list")
------------------------------------------------------------
Failed Checks: 4
=========================== 1 failed in 0.11 seconds ===========================

Red output

The failures will also be red, unless you turn that off with pytest's --color=no.

No output

You can turn off the failure reports with pytest's --tb=no.
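
For example:

$ pytest --color=no   # plain, uncolored failure output
$ pytest --tb=no      # suppress the failure reports entirely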

Stop on Fail (maxfail behavior)

Setting -x or --maxfail=1 will cause this plugin to abort testing after the first failed check.

Setting --maxfail=2 or greater will turn off any handling of maxfail within this plugin, and the behavior is controlled by pytest.

In other words, the maxfail count counts tests, not checks. The exception is 1, where we want to stop on the very first failed check.
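
For example:

$ pytest -x            # stop at the first failed check
$ pytest --maxfail=1   # same as -x: stop at the first failed check
$ pytest --maxfail=3   # counts failed tests; checks within a test keep running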

any_failures()

Use any_failures() to see if there are any failures.
One use case is to make a block of checks conditional on not failing in a previous set of checks:

from pytest_check import check

def test_with_groups_of_checks():
    # always check these
    check.equal(1, 1)
    check.equal(2, 3)
    if not check.any_failures():
        # only check these if the above passed
        check.equal(1, 2)
        check.equal(2, 2)

Speedups

If you have lots of check failures, your tests may not run as fast as you want. There are a few ways to speed things up.

  • --check-max-tb=5 - Only the first 5 failures per test will include pseudo-tracebacks (the rest are reported without them).

    • The example shows 5 but any number can be used.
    • pytest-check uses custom traceback code I'm calling a pseudo-traceback.
    • This is visually shorter than normal assert tracebacks.
    • Internally, it uses introspection, which can be slow.
    • Allowing a limited number of pseudo-tracebacks speeds things up quite a bit.
    • Default is 1.
      • Set a large number, e.g. 1000, if you want pseudo-tracebacks for all failures.
  • --check-max-report=10 - Limit the number of reported failures per test.

    • The example shows 10 but any number can be used.
    • The test will still have the total number of failures reported.
    • Default is no maximum.
  • --check-max-fail=20 - Stop the test after this many check failures.

    • This is useful if your code under test is slow-ish and you want to bail early.
    • Default is no maximum.
  • Any of these can be used on their own, or combined (see the example after this list).

  • Recommendation:

    • Leave the default, equivalent to --check-max-tb=1.
    • If excessive output is annoying, set --check-max-report=10 or some tolerable number.
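
For example, the flags from this list can be combined on one pytest invocation (the numbers are arbitrary):

$ pytest --check-max-tb=5
$ pytest --check-max-report=10
$ pytest --check-max-tb=5 --check-max-report=10 --check-max-fail=20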

Local speedups

The flags above are global settings, and apply to every test in the test run.

Locally, you can set these values per test.

From examples/test_example_speedup_funcs.py:

def test_max_tb():
    check.set_max_tb(2)
    for i in range(1, 11):
        check.equal(i, 100)

def test_max_report():
    check.set_max_report(5)
    for i in range(1, 11):
        check.equal(i, 100)

def test_max_fail():
    check.set_max_fail(5)
    for i in range(1, 11):
        check.equal(i, 100)

Contributing

Contributions are very welcome. Tests can be run with tox. Test coverage is now 100%. Please make sure to keep it at 100%. If you have an awesome pull request and need help with getting coverage back up, let me know.

License

Distributed under the terms of the MIT license, "pytest-check" is free and open source software.

Issues

If you encounter any problems, please file an issue along with a detailed description.

Changelog

See changelog.md

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pytest_check-2.2.3.tar.gz (26.8 kB)

Uploaded Source

Built Distribution

pytest_check-2.2.3-py3-none-any.whl (12.8 kB)

Uploaded Python 3

File details

Details for the file pytest_check-2.2.3.tar.gz.

File metadata

  • Download URL: pytest_check-2.2.3.tar.gz
  • Upload date:
  • Size: 26.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for pytest_check-2.2.3.tar.gz:

  • SHA256: 6df03264b6bbcf25cd8615120cda03b4e6d11fdb2b7c8dde6a9c8fef18a7495c
  • MD5: efe9fba2b3fe53e851f29bf5bbb76af6
  • BLAKE2b-256: 7e0106d14d0ac4e23e6df6914b1b32a11c2949e2aed3cfd957537d2a55d58de0


File details

Details for the file pytest_check-2.2.3-py3-none-any.whl.

File metadata

  • Download URL: pytest_check-2.2.3-py3-none-any.whl
  • Upload date:
  • Size: 12.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.11.7

File hashes

Hashes for pytest_check-2.2.3-py3-none-any.whl:

  • SHA256: d2643066fb805ca380b9251ef641a22dddc339018179a1e5f0b4626a5ff7e560
  • MD5: c1fdf8471834ca6c0dd83a79d7dd15ef
  • BLAKE2b-256: 372be7964630a6b49605d65ccb072d49d6e1319f09fa84d6f2b67fc6a77438be

