
Extends Hypothesis to add fully automatic testing of type-annotated functions

Project description

hypothesis-auto - Fully Automatic Tests for Type Annotated Functions Using Hypothesis.




Read Latest Documentation - Browse GitHub Code Repository


hypothesis-auto is an extension for the Hypothesis project that enables fully automatic tests for type-annotated functions.

Hypothesis Pytest Auto Example

Key Features:

  • Type Annotation Powered: Utilize your function's existing type annotations to build dozens of test cases automatically.
  • Low Barrier: Start utilizing property-based testing in the lowest-barrier way possible. Just run auto_test(FUNCTION) to run dozens of tests.
  • pytest Compatible: Like Hypothesis itself, hypothesis-auto has built-in compatibility with the popular pytest testing framework. This means that you can turn your automatically generated tests into individual pytest test cases with one line.
  • Scales Up: As you find yourself needing to customize your auto_test cases, you can easily utilize all the features of Hypothesis, including custom strategies per parameter.

Installation:

To get started - install hypothesis-auto into your project's virtual environment:

pip3 install hypothesis-auto

OR

poetry add hypothesis-auto

OR

pipenv install hypothesis-auto

Usage Examples:

!!! warning In older usage examples you will see _-prefixed parameters such as _auto_verify=. This was done to avoid conflicts with existing function parameters. Based on community feedback, the project switched to _ suffixes, such as auto_verify_=, to keep the likelihood of conflicts low while avoiding the connotation of private parameters.
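
For example, using the add and my_custom_verifier definitions from the examples below, the two spellings compare as follows (a minimal sketch; the trailing-underscore form is the one current releases expect):

# Old spelling seen in outdated examples (leading underscore):
# auto_test(add, _auto_verify=my_custom_verifier)

# Current spelling (trailing underscore):
auto_test(add, auto_verify_=my_custom_verifier)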

Framework independent usage

Basic auto_test usage:

from hypothesis_auto import auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


auto_test(add)  # 50 property-based scenarios are generated and run against add
auto_test(add, auto_runs_=1_000)  # Let's make that 1,000

Adding an allowed exception:

from hypothesis_auto import auto_test


def divide(number_1: int, number_2: int) -> int:
    return number_1 / number_2

auto_test(divide)

<ipython-input-2-65a3aa66e9f9> in divide(number_1, number_2)
      1 def divide(number_1: int, number_2: int) -> int:
----> 2     return number_1 / number_2
      3

ZeroDivisionError: division by zero


auto_test(divide, auto_allow_exceptions_=(ZeroDivisionError, ))

Using auto_test with a custom verification method:

from hypothesis_auto import Scenario, auto_test


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


def my_custom_verifier(scenario: Scenario):
    if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:
        assert scenario.result > scenario.kwargs["number_1"]
        assert scenario.result > scenario.kwargs["number_2"]
    elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:
        assert scenario.result < scenario.kwargs["number_1"]
        assert scenario.result < scenario.kwargs["number_2"]
    else:
        assert scenario.result >= min(scenario.kwargs.values())
        assert scenario.result <= max(scenario.kwargs.values())


auto_test(add, auto_verify_=my_custom_verifier)

Custom verification methods should take a single Scenario and raise an exception to signify errors.

For the full set of parameters you can pass into auto_test, see its API reference documentation.
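
As a quick illustration, the parameters demonstrated above can be combined in a single call. This is a minimal sketch that reuses the add and my_custom_verifier definitions from the earlier examples and uses only parameters already shown; consult the API reference for everything else auto_test accepts:

auto_test(
    add,
    auto_runs_=200,                   # number of generated scenarios
    auto_verify_=my_custom_verifier,  # custom verification of each Scenario
)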

pytest usage

Using auto_pytest_magic to auto-generate dozens of pytest test cases:

from hypothesis_auto import auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


auto_pytest_magic(add)
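
auto_pytest_magic accepts the same auto_-suffixed keyword parameters shown in the auto_test examples above (this sketch assumes auto_runs_ behaves the same way here), so the number of generated pytest cases can be adjusted in the same call:

auto_pytest_magic(add, auto_runs_=200)  # ask for 200 generated cases for add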

Using auto_pytest to run dozens of test cases within a temporary directory:

from hypothesis_auto import auto_pytest


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


@auto_pytest(add)
def test_add(test_case, tmpdir):
    tmpdir.chdir()  # run each generated case from inside the pytest-provided temporary directory
    test_case()

Using auto_pytest_magic with a custom verification method:

from hypothesis_auto import Scenario, auto_pytest_magic


def add(number_1: int, number_2: int = 1) -> int:
    return number_1 + number_2


def my_custom_verifier(scenario: Scenario):
    if scenario.kwargs["number_1"] > 0 and scenario.kwargs["number_2"] > 0:
        assert scenario.result > scenario.kwargs["number_1"]
        assert scenario.result > scenario.kwargs["number_2"]
    elif scenario.kwargs["number_1"] < 0 and scenario.kwargs["number_2"] < 0:
        assert scenario.result < scenario.kwargs["number_1"]
        assert scenario.result < scenario.kwargs["number_2"]
    else:
        assert scenario.result >= min(scenario.kwargs.values())
        assert scenario.result <= max(scenario.kwargs.values())


auto_pytest_magic(add, auto_verify_=my_custom_verifier)

Custom verification methods should take a single Scenario and raise an exception to signify errors.

For the full reference of the pytest integration API see the API reference documentation.

Why Create hypothesis-auto?

I wanted a no- or low-resistance way to start incorporating property-based tests across my projects. A solution that also encouraged the use of type hints was a win/win for me.

I hope you too find hypothesis-auto useful!

~Timothy Crosley

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

hypothesis_auto-1.1.5.tar.gz (9.4 kB)

Uploaded Source

Built Distribution

hypothesis_auto-1.1.5-py3-none-any.whl (7.9 kB)

Uploaded Python 3

File details

Details for the file hypothesis_auto-1.1.5.tar.gz.

File metadata

  • Download URL: hypothesis_auto-1.1.5.tar.gz
  • Upload date:
  • Size: 9.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.2.2 CPython/3.10.6 Linux/6.0.12-76060006-generic

File hashes

Hashes for hypothesis_auto-1.1.5.tar.gz

  • SHA256: 534bdc381f635e6515e6fd88c326d5bb66ab6351693e72a43fb9b691b0f5911c
  • MD5: ed3688878c30ebc727cae3e668d8a00f
  • BLAKE2b-256: 149da491aaf55e61b4fee99508d8d2ede409c760ced3e0156d78bcbd34a8e4df


File details

Details for the file hypothesis_auto-1.1.5-py3-none-any.whl.

File metadata

  • Download URL: hypothesis_auto-1.1.5-py3-none-any.whl
  • Upload date:
  • Size: 7.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.2.2 CPython/3.10.6 Linux/6.0.12-76060006-generic

File hashes

Hashes for hypothesis_auto-1.1.5-py3-none-any.whl

  • SHA256: 7e962789a9ac691d4d9b6996f06f7afa1708013b67fd2df93a8857a92d6d7854
  • MD5: 9715b08370b7437091605e6c0f2c15a7
  • BLAKE2b-256: 7bce70744537742b0a6fd155a9b1e6edcfd824a687c2234a1d78c09dd473acb1

