Manages Docker containers during your integration tests

Project description

CircleCI build status badge: https://circleci.com/gh/pytest-docker-compose/pytest-docker-compose/tree/master.svg?style=svg

This package contains a pytest plugin for integrating Docker Compose into your automated integration tests.

Given a path to a docker-compose.yml file, it will automatically build the project at the start of the test run, bring the containers up before each test starts, and tear them down after each test ends.
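
For illustration, the plugin could be pointed at a minimal docker-compose.yml like the one below. This is only a sketch: the service name my_api_service and the image are placeholders (chosen to match the examples later in this page), not something the plugin requires.

version: "3"
services:
  my_api_service:            # hypothetical service name, matching the examples below
    image: my-api-image      # placeholder image; a build: section would also work
    ports:
      - "8080"               # publish container port 8080 on a random host port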

Dependencies

Make sure you have Docker installed.

This plugin is automatically tested against the following software:

  • Python 3.5 and 3.6.

  • pytest 3, 4 and 5.

Locally I have also tested it successfully against Python 3.7, but 3.7 proved hard to integrate into CircleCI, so it is not officially supported.

Installation

Install the plugin using pip:

pip install pytest-docker-compose

Usage

For performance reasons, the plugin is not enabled by default, so you must activate it manually in the tests that use it:

pytest_plugins = ["docker_compose"]

See Installing and Using Plugins for more information.
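
For example, placing that line in the conftest.py at the root of your test suite enables the plugin for every test module beneath it (a minimal sketch):

# conftest.py (at the root of your test suite)
pytest_plugins = ["docker_compose"]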

To interact with Docker containers in your tests, use the following fixtures:

function_scoped_container_getter

An object that fetches the Docker compose.container.Container objects running during the test. Containers are fetched with function_scoped_container_getter.get('service_name'). Each of these containers has an extra attribute called network_info added to it, which holds a list of pytest_docker_compose.NetworkInfo objects.

This information can be used to configure API clients and other objects that will connect to services exposed by the Docker containers in your tests.

NetworkInfo is a container with the following fields:

  • container_port: The port (and usually also protocol name) exposed internally to the container. You can use this value to find the correct port for your test, when the container exposes multiple ports.

  • hostname: The hostname (usually “localhost”) to use when connecting to the service from the host.

  • host_port: The port number to use when connecting to the service from the host.
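
As a quick illustration (a sketch only; the service name my_api_service is a placeholder, matching the larger example further down), the network information can be turned into a URL that a test can call:

def test_can_build_url(function_scoped_container_getter):
    # Fetch the running container for the (hypothetical) my_api_service service
    container = function_scoped_container_getter.get("my_api_service")
    # Pick the first exposed port and build a URL reachable from the host
    service = container.network_info[0]
    api_url = "http://%s:%s/" % (service.hostname, service.host_port)
    # api_url can now be passed to an HTTP client inside the test
    assert service.host_port is not None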

docker_project

The compose.project.Project object that the containers are built from. This fixture is generally only used internally by the plugin.

To use the following fixtures, please read Use wider scoped fixtures.

class_scoped_container_getter

Similar to function_scoped_container_getter, but with class scope.

module_scoped_container_getter

Similar to function_scoped_container_getter, but with module scope.

session_scoped_container_getter

Similar to function_scoped_container_getter, but with session scope.

Waiting for Services to Come Online

The fixtures called 'scope'_scoped_container_getter will wait until every container is up before handing control over to the test.

However, just because a container is up does not mean that the services running on it are ready to accept incoming requests yet!

If your tests need to wait for a particular condition (for example, to wait for an HTTP health check endpoint to send back a 200 response), make sure that your fixtures account for this.

Here’s an example of a fixture called wait_for_api that waits for an HTTP service to come online before a test called test_read_and_write can run.

import pytest
import requests
from urllib.parse import urljoin
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter

pytest_plugins = ["docker_compose"]


@pytest.fixture(scope="function")
def wait_for_api(function_scoped_container_getter):
    """Wait for the api from my_api_service to become responsive"""
    request_session = requests.Session()
    retries = Retry(total=5,
                    backoff_factor=0.1,
                    status_forcelist=[500, 502, 503, 504])
    request_session.mount('http://', HTTPAdapter(max_retries=retries))

    service = function_scoped_container_getter.get("my_api_service").network_info[0]
    api_url = "http://%s:%s/" % (service.hostname, service.host_port)
    assert request_session.get(api_url)
    return request_session, api_url


def test_read_and_write(wait_for_api):
    """The Api is now verified good to go and tests can interact with it"""
    request_session, api_url = wait_for_api
    data_string = 'some_data'
    request_session.put('%sitems/2?data_string=%s' % (api_url, data_string))
    item = request_session.get(urljoin(api_url, 'items/2')).json()
    assert item['data'] == data_string
    request_session.delete(urljoin(api_url, 'items/2'))

Use wider scoped fixtures

The function_scoped_container_getter fixture uses “function” scope, meaning that all of the containers are torn down after each individual test.

This is done so that every test gets to run in a “clean” environment. However, this can potentially make a test suite take a very long time to complete.

There are two ways to make containers persist beyond a single test. The best way is to use the fixtures with wider scopes. There are three additional fixtures for this purpose: class_scoped_container_getter, module_scoped_container_getter and session_scoped_container_getter (see the sketch after the caveats below). You need to be careful when using these! There are two main caveats to keep in mind:

  1. Manage your scopes correctly: using 'module' scope and 'function' scope in a single file will throw an error! This is because the module-scoped fixture will spin up the containers and then the function-scoped fixture will try to spin them up again, and Docker Compose does not allow you to spin up containers twice.

  2. Clean up your environment after each test. Because the containers are not restarted, their environments can carry over information from previous tests. You therefore need to be very careful to design your tests so that they leave the containers in the same state they started in, or you may run into difficult-to-understand behaviour.
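
For example, a wider-scoped variant of the wait_for_api fixture shown earlier could look like this (a minimal sketch under the same assumptions as above; my_api_service is a placeholder service name):

import pytest
import requests
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter


@pytest.fixture(scope="module")
def wait_for_api(module_scoped_container_getter):
    """Wait once per module for my_api_service to become responsive."""
    request_session = requests.Session()
    retries = Retry(total=5, backoff_factor=0.1,
                    status_forcelist=[500, 502, 503, 504])
    request_session.mount('http://', HTTPAdapter(max_retries=retries))

    service = module_scoped_container_getter.get("my_api_service").network_info[0]
    api_url = "http://%s:%s/" % (service.hostname, service.host_port)
    assert request_session.get(api_url)
    return request_session, api_url

All tests in the same module then share one set of running containers, so they must clean up after themselves as described in caveat 2.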

A second method to make containers persist beyond a single test is to supply the --use-running-containers flag to pytest, like so:

pytest --use-running-containers

With this flag, pytest-docker-compose checks that all containers are running during project creation. If they are not running, a warning is given and they are spun up anyway. They are then used for all the tests and are NOT torn down afterwards.

This mode is best used in combination with the --docker-compose-no-build flag, since the newly built containers won't be used anyway, like so:

pytest --docker-compose-no-build --use-running-containers
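
One possible workflow (an assumption about how you might combine this with your own tooling, not something the plugin requires) is to start the network yourself and then run the tests against it:

docker-compose up -d          # start the containers yourself, detached
pytest --docker-compose-no-build --use-running-containers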

It is of course possible to add these options to pytest.ini.
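
For example, a pytest.ini along these lines would apply both flags to every run (a sketch; adjust the options to your project):

[pytest]
addopts = --docker-compose-no-build --use-running-containers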

Notice that for this mode the scoping of the fixtures becomes less important, since the containers are fully persistent throughout all tests. I only recommend using this if your network takes excessively long to spin up or tear down; it should really be a last resort, and you should probably look into speeding up your network instead.

Running Integration Tests

Use pytest to run your tests as normal:

pytest

By default, this will look for a docker-compose.yml file in the current working directory. You can specify a different file via the --docker-compose option:

pytest --docker-compose=/path/to/docker-compose.yml

Remove volumes after tests

There is another configuration option that deletes the volumes of the containers after the test run:

pytest --docker-compose-remove-volumes

This option will be ignored if the plugin is not used. Again, this option can also be added to the pytest.ini file.

For more examples of how to use this plugin, look at the test suite of the plugin itself! It shows how to configure pytest.ini and how to use the different fixtures to run Docker containers.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

pytest-docker-compose-3.1.0.tar.gz (12.5 kB)

Uploaded Source

Built Distribution

pytest_docker_compose-3.1.0-py3-none-any.whl (11.8 kB)

Uploaded Python 3

File details

Details for the file pytest-docker-compose-3.1.0.tar.gz.

File metadata

  • Download URL: pytest-docker-compose-3.1.0.tar.gz
  • Upload date:
  • Size: 12.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.8.0 tqdm/4.35.0 CPython/3.8.0b1

File hashes

Hashes for pytest-docker-compose-3.1.0.tar.gz:

  • SHA256: 38afc4b849de2e9d80f5cf22408d6fe0fd477a8bc81a883b969c949946388860
  • MD5: d43f6ae0c92b8dad31f29033ffafd10e
  • BLAKE2b-256: 7e4f0fb304f8494ee0e7e079e7422b6dd29b396ce47bfd07730f57d92230a10c


File details

Details for the file pytest_docker_compose-3.1.0-py3-none-any.whl.

File metadata

  • Download URL: pytest_docker_compose-3.1.0-py3-none-any.whl
  • Upload date:
  • Size: 11.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.13.0 pkginfo/1.5.0.1 requests/2.22.0 setuptools/41.0.1 requests-toolbelt/0.8.0 tqdm/4.35.0 CPython/3.8.0b1

File hashes

Hashes for pytest_docker_compose-3.1.0-py3-none-any.whl:

  • SHA256: d4819b105617b5dff9db5b154a3ee7ce6926e7905d38e03314f4b5d1c4e91de4
  • MD5: 1d1af971c65f6602c45721e3fda2f51b
  • BLAKE2b-256: 70211a3beaaad9275e165805ff0b65ff8fdd957bccb8ab2cc67c68c4535b0c57

