Wagtail Experiments

A/B testing for Wagtail

This module supports the creation of A/B testing experiments within a Wagtail site. Several alternative versions of a page are set up, and on visiting a designated control page, a user is presented with one of those variations, selected at random (using a simplified version of the PlanOut algorithm). The number of visitors receiving each variation is logged, along with the number that subsequently go on to complete the experiment by visiting a designated goal page.
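
For intuition, here is a minimal sketch of the kind of deterministic, hash-based bucketing that a simplified PlanOut-style assignment implies: hashing a stable per-user identifier together with the experiment slug picks the same variation for a given user on every request. This is an illustration only, not the package's actual implementation.

import hashlib

def choose_variation(experiment_slug, user_id, variations):
    # Hash the experiment slug together with a stable per-user ID so that
    # the same user always falls into the same bucket, with users spread
    # evenly across the variations.
    digest = hashlib.sha1(f"{experiment_slug}.{user_id}".encode("utf-8")).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# The same user ID always yields the same choice:
variations = ["control", "variant-a", "variant-b"]
print(choose_variation("homepage-banner", "user-1234", variations))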

Installation

wagtail-experiments is compatible with Wagtail 5.2 to 6.0, and Django 4.2 to 5.0. It depends on the Wagtail ModelAdmin module, which is available as an external package as of Wagtail 5.0; we recommend using this rather than the bundled wagtail.contrib.modeladmin module to avoid deprecation warnings. The external package is required as of Wagtail 6.0.

To install:

pip install wagtail-experiments wagtail-modeladmin

and ensure that the apps wagtail_modeladmin and experiments are included in your project’s INSTALLED_APPS:

INSTALLED_APPS = [
    # ...
    'wagtail_modeladmin',
    'experiments',
    # ...
]

Then migrate:

./manage.py migrate

Usage

After installation, a new ‘Experiments’ item is added to the Wagtail admin menu under Settings. This is available to superusers and any other users with add/edit permissions on experiments. An experiment is created by specifying a control page and any number of alternative versions of that page, along with an optional goal page. Initially the experiment is in the ‘draft’ status and does not take effect on the site front-end; to begin the experiment, change the status to ‘live’.

When the experiment is live, a user visiting the URL of the control page will be randomly assigned to a test group, to be served either the control page or one of the alternative variations. This assignment persists for the user’s session (according to Django’s session configuration) so that each user receives the same variation each time. When a user subsequently visits the goal page, they are considered to have completed the experiment and a completion is logged against that user’s test group. The completion rate over time for each test group can then be viewed through the admin interface, under ‘View report’.

[Screenshot of the experiment report page: https://i.imgur.com/tG7JH13.png]

From the report page, an administrator can select a winning variation; the experiment status is then changed to ‘completed’, and all visitors to the control page are served the chosen variation.

Typically, the alternative versions of the page will be left unpublished, as this prevents them from appearing as duplicate copies of the control page in the site navigation. If an unpublished page is selected as an alternative, the page revision shown to users on the front-end will be the draft revision that existed at the moment the experiment status was set to ‘live’. When displaying an alternative variation, the title and tree location are overridden to appear as the control page’s title and location; this means that the title of the alternative page can be set to something descriptive, such as “Signup page (blue text)”, without this text ‘leaking’ to site visitors.

Direct URLs for goal completion

If you want goal completion to be linked to some action other than visiting a designated Wagtail page - for example, clicking a ‘follow us on Twitter’ link - you can set up a JavaScript action that sends a request to a URL such as /experiments/complete/twitter-follow/, where twitter-follow is the experiment slug. To set up this URL route, add the following to your URLconf:

from django.urls import re_path

from experiments import views as experiment_views

urlpatterns = [
    # ...

    re_path(r'^experiments/complete/([^\/]+)/$', experiment_views.record_completion),

    # ...
]
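
As a quick sanity check, assuming the URLconf above and an existing experiment with the slug twitter-follow, you can hit the completion URL with Django's test client (for example from ./manage.py shell or a test case); in production the same URL would normally be requested from the front-end, e.g. by a click handler:

from django.test import Client

client = Client()
# Simulate the front-end request that records a completion for the
# 'twitter-follow' experiment against the current visitor's session.
response = client.get('/experiments/complete/twitter-follow/')
print(response.status_code)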

Alternative backends

wagtail-experiments supports pluggable backends for tracking participants and completions. The default backend, experiments.backends.db, records these in a database table, aggregated by day. Alternative backends can be specified through the WAGTAIL_EXPERIMENTS_BACKEND setting:

WAGTAIL_EXPERIMENTS_BACKEND = 'mypackage.backends.thecloud'

A backend is a Python module that provides the following functions:

record_participant(experiment, user_id, variation, request):

Called when a user visits the control page for experiment. user_id is the persistent user ID assigned to that visitor; variation is the Page object for the variation to be served; and request is the user’s current request.

record_completion(experiment, user_id, variation, request):

Called when a visitor completes the experiment, either by visiting the goal page or by triggering the record_completion view via a direct completion URL. user_id is the persistent user ID assigned to that visitor; variation is the Page object for the variation that was originally served to that user; and request is the user’s current request.

get_report(experiment):

Returns report data for experiment, consisting of a dict containing:

variations

A list of records, one for each variation (including the control page). Each record is a dict containing:

    variation_pk
        The primary key of the Page object

    is_control
        A boolean indicating whether this is the control page

    is_winner
        A boolean indicating whether this variation has been chosen as the winner

    total_participant_count
        The number of visitors who have been assigned this variation

    total_completion_count
        The number of visitors assigned this variation who have gone on to complete the experiment

    history
        A list of dicts showing the breakdown of participants and completions over time; each dict contains date, participant_count and completion_count.
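
This interface is enough to sketch a custom backend. Below is a minimal, in-memory illustration that follows the three-function contract and the report structure described above. It is not production-ready (counts live in process memory and are lost on restart), and the experiment.control_page_id and winning_variation_id attributes used in get_report are assumptions about the Experiment model rather than a documented API; check the model before relying on them.

# mypackage/backends/inmemory.py - a minimal, illustrative backend sketch.
import datetime
from collections import defaultdict

# {(experiment_pk, variation_pk, date): {'participants': n, 'completions': n}}
_counts = defaultdict(lambda: {'participants': 0, 'completions': 0})


def record_participant(experiment, user_id, variation, request):
    # Called when a visitor is assigned a variation on the control page.
    key = (experiment.pk, variation.pk, datetime.date.today())
    _counts[key]['participants'] += 1


def record_completion(experiment, user_id, variation, request):
    # Called when that visitor later reaches the goal page or completion URL.
    key = (experiment.pk, variation.pk, datetime.date.today())
    _counts[key]['completions'] += 1


def get_report(experiment):
    # Aggregate the in-memory counts into the report structure described above.
    # NOTE: control_page_id and winning_variation_id are assumed attribute
    # names, not a documented API.
    variations = []
    seen_pks = sorted({vpk for (epk, vpk, _) in _counts if epk == experiment.pk})
    for variation_pk in seen_pks:
        history = [
            {
                'date': date,
                'participant_count': counts['participants'],
                'completion_count': counts['completions'],
            }
            for (epk, vpk, date), counts in sorted(_counts.items())
            if epk == experiment.pk and vpk == variation_pk
        ]
        variations.append({
            'variation_pk': variation_pk,
            'is_control': variation_pk == experiment.control_page_id,
            'is_winner': variation_pk == getattr(experiment, 'winning_variation_id', None),
            'total_participant_count': sum(h['participant_count'] for h in history),
            'total_completion_count': sum(h['completion_count'] for h in history),
            'history': history,
        })
    return {'variations': variations}

To try a backend like this, save the module somewhere importable and point WAGTAIL_EXPERIMENTS_BACKEND at its dotted path, e.g. 'mypackage.backends.inmemory'.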

Test data

wagtail-experiments provides a management command, experiment-data, for populating an experiment with dummy data for testing or demonstration purposes, or for purging its existing data. The command is called with the experiment’s slug:

# Populate the experiment 'homepage-banner' with 5 days of test data,
# with 100-200 views per variation. All parameters other than experiment slug
# are optional
./manage.py experiment-data homepage-banner --days 5 --min=100 --max=200

# Purge data for the experiment 'homepage-banner'
./manage.py experiment-data homepage-banner --purge
