

BADAPTED: Bayesian ADAPTive Experimental Design


Status: Working code, but still under development 🔥

Run efficient Bayesian adaptive experiments.

This code relates to the following preprint, although the preprint is likely to appear in quite a different form when finally published.

Vincent, B. T., & Rainforth, T. (2017, October 20). The DARC Toolbox: automated, flexible, and efficient delayed and risky choice experiments using Bayesian adaptive design. Retrieved from psyarxiv.com/yehjb

Building your own adaptive experiment toolbox on top of badapted

Below we outline how the badapted package can be used to run adaptive experiments. On its own, this badapted package will not do anything. It also requires a few classes (and probably some helper functions) that a developer must create for their particular experimental paradigm. This forms a 'toolbox' which will allow adaptive experiments to be run in a particular experimental domain.

The best (first) example of this is our DARC Toolbox which allows adaptive experiments for Delayed And Risky Choice tasks.

But below we outline how to go about creating a new 'toolbox' for your experimental domain of interest.

Step 1: define your design space

First we create a pandas dataframe called designs using a function we write to do this. Each column is a design variable. Each row is a particular design.

def build_my_design_space(my_arguments):
    designs = ...  # CREATE PANDAS DATAFRAME OF THE DESIGN SPACE HERE
    return designs
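For concreteness, here is a minimal sketch of such a function. The design variables (`RA`, `DA`, `RB`, `DB`) and their ranges are hypothetical, chosen to echo the DARC examples below:

```python
import itertools
import pandas as pd

def build_my_design_space(rewards_a=(10, 20, 30), delays_b=(7, 30, 90)):
    """Hypothetical example: enumerate all combinations of a few design
    variables. Each row of the returned DataFrame is one candidate design."""
    rows = [{'RA': ra, 'DA': 0, 'RB': 100, 'DB': db}
            for ra, db in itertools.product(rewards_a, delays_b)]
    return pd.DataFrame(rows)

designs = build_my_design_space()  # 3 rewards x 3 delays -> 9 candidate designs
```

In practice your design space will likely be much larger; the design generator's job is precisely to pick informative rows out of it.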

Step 2: define a custom design generator

To build your own design generator that uses Bayesian adaptive design in your experimental domain, you need to create a class which subclasses badapted.BayesianAdaptiveDesignGenerator. You will also need to implement some methods specific to your experimental domain, notably:

  • add_design_response_to_dataframe
  • df_to_design_tuple

For the moment, we will just provide the example we use in the DARC Toolbox. Firstly, our concrete design generator class is defined as:

from badapted.designs import BayesianAdaptiveDesignGenerator

class BayesianAdaptiveDesignGeneratorDARC(DARCDesignGenerator, BayesianAdaptiveDesignGenerator):
    '''This will be the concrete class for doing Bayesian adaptive design
    in the DARC experiment domain.'''

    def __init__(self, design_space,
                 max_trials=20,
                 allow_repeats=True,
                 penalty_function_option='default',
                 λ=2):

        # call superclass constructors - note that the order of calling these is important
        BayesianAdaptiveDesignGenerator.__init__(self, design_space,
                 max_trials=max_trials,
                 allow_repeats=allow_repeats,
                 penalty_function_option=penalty_function_option,
                 λ=λ)

        DARCDesignGenerator.__init__(self)

Note that this uses multiple inheritance, so we also have a class DARCDesignGenerator which just includes DARC-specific methods (add_design_response_to_dataframe, df_to_design_tuple). This is defined as:

import pandas as pd

from badapted.designs import DesignGeneratorABC
from darc_toolbox import Prospect, Design


class DARCDesignGenerator(DesignGeneratorABC):
    '''This adds DARC specific functionality to the design generator'''

    def __init__(self):
        # super().__init__()
        DesignGeneratorABC.__init__(self)

        # generate empty dataframe
        data_columns = ['RA', 'DA', 'PA', 'RB', 'DB', 'PB', 'R']
        self.data = pd.DataFrame(columns=data_columns)

    def add_design_response_to_dataframe(self, design, response):
        '''
        This method must take in `design` and `response` from the current trial
        and store this as a new row in self.data which is a pandas data frame.
        '''

        trial_data = {'RA': design.ProspectA.reward,
                    'DA': design.ProspectA.delay,
                    'PA': design.ProspectA.prob,
                    'RB': design.ProspectB.reward,
                    'DB': design.ProspectB.delay,
                    'PB': design.ProspectB.prob,
                    'R': [int(response)]}
        self.data = pd.concat([self.data, pd.DataFrame(trial_data)],
                              ignore_index=True)
        self.data['R'] = self.data['R'].astype('int64')
        return

    @staticmethod
    def df_to_design_tuple(df):
        '''Convert a 1-row pandas dataframe into a named tuple. The user must
        implement this method: it takes in a design in the form of a single
        row of a pandas dataframe, and it must return the chosen design as a
        named tuple.'''
        RA = df.RA.values[0]
        DA = df.DA.values[0]
        PA = df.PA.values[0]
        RB = df.RB.values[0]
        DB = df.DB.values[0]
        PB = df.PB.values[0]
        chosen_design = Design(ProspectA=Prospect(reward=RA, delay=DA, prob=PA),
                            ProspectB=Prospect(reward=RB, delay=DB, prob=PB))
        return chosen_design

We only used multiple inheritance because we wanted other (non-Bayesian-adaptive) design generators which work in the DARC domain but do not have any of the Bayesian adaptive design components. If you are only focusing on Bayesian adaptive design, you could simply define the add_design_response_to_dataframe and df_to_design_tuple methods in your one concrete design generator class.

Step 3: define a model

You must provide a model class which inherits from Model. You must also provide the following methods:

  • __init__
  • predictive_y

Here is an example of a minimal implementation of a user-defined model:

from badapted.model import Model
from badapted.choice_functions import CumulativeNormalChoiceFunc
from scipy.stats import norm, halfnorm, uniform
import numpy as np


class MyCustomModel(Model):
    '''My custom model which does XYZ.'''

    def __init__(self, n_particles,
                 prior={'logk': norm(loc=-4.5, scale=1),
                        'α': halfnorm(loc=0, scale=2)}):
        '''
        INPUTS
        - n_particles (integer).
        - prior (dictionary). The keys provide the parameter name. The values
        must be scipy.stats objects which define the prior distribution for
        this parameter.

        We provide choice functions in `badapted.choice_functions.py`. In this
        example we assign the choice function in `__init__`, but it does not
        have to happen here.
        '''
        self.n_particles = int(n_particles)
        self.prior = prior
        self.θ_fixed = {'ϵ': 0.01}
        self.choiceFunction = CumulativeNormalChoiceFunc

    def predictive_y(self, θ, data):
        '''
        INPUTS:
        - θ = parameters
        - data = pandas dataframe of designs (one row per design to evaluate)

        OUTPUT:
        - p_chose_B (float) Must return a value between 0-1.
        '''

        # Step 1 - calculate decision variable
        k = np.exp(θ['logk'].values)
        VA = data['RA'].values * 1 / (1 + k * data['DA'].values)
        VB = data['RB'].values * 1 / (1 + k * data['DB'].values)
        decision_variable = VB - VA

        # Step 2 - apply choice function
        p_chose_B = self.choiceFunction(decision_variable, θ, self.θ_fixed)
        return p_chose_B
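To make the two steps in `predictive_y` concrete, here is a self-contained sketch of the same logic (hyperbolic discounting, then a cumulative-normal choice function) using scipy directly rather than badapted's choice-function classes; the parameter values are made up:

```python
import numpy as np
from scipy.stats import norm

def p_chose_B(logk, alpha, RA, DA, RB, DB, epsilon=0.01):
    """Standalone sketch of predictive_y's two steps:
    hyperbolic discounting -> decision variable -> choice probability."""
    k = np.exp(logk)
    VA = RA / (1 + k * DA)   # present value of prospect A
    VB = RB / (1 + k * DB)   # present value of prospect B
    # cumulative-normal (probit) choice function with lapse rate epsilon
    p = norm.cdf((VB - VA) / alpha)
    return epsilon + (1 - 2 * epsilon) * p

# larger, delayed reward B is strongly preferred at this low discount rate
p = p_chose_B(logk=-4.5, alpha=2.0, RA=50, DA=0, RB=100, DB=30)
```

The lapse rate ϵ plays the same role as `θ_fixed = {'ϵ': 0.01}` above: it keeps predicted probabilities away from exactly 0 and 1, which makes belief updating more robust to occasional random responses.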

Step 4: build an experiment trial loop

This is pretty straightforward and there doesn't need to be any major customisation here.

def run_experiment(design_generator, model, max_trials):
    '''Run an adaptive experiment
    INPUTS:
    - design_generator: a class
    '''

    for trial in range(max_trials):
        design = design_generator.get_next_design(model)
        if design is None:
            break
        response = get_response(design)
        design_generator.enter_trial_design_and_response(design, response)
        model.update_beliefs(design_generator.data)

    return model

Note that the response = get_response(design) line is up to you to implement. What you do here depends on whether you are simulating responses or getting real responses from PsychoPy etc. The run_experiment function is just an example of how the various parts of the code work together. When running actual experiments using PsychoPy, it is best to refer to the demo PsychoPy files we provide in the DARC Toolbox as examples to see how this is done.
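When simulating, get_response can simply draw a Bernoulli response from a 'ground truth' participant model. A minimal sketch (the Design/Prospect named tuples mirror the DARC ones above; the true parameter value and the logistic choice rule are made up for illustration):

```python
import numpy as np
from collections import namedtuple

Prospect = namedtuple('Prospect', ['reward', 'delay', 'prob'])
Design = namedtuple('Design', ['ProspectA', 'ProspectB'])

rng = np.random.default_rng(0)
TRUE_LOGK = -4.0  # hypothetical ground-truth discount rate

def get_response(design):
    """Simulate a choice: pick B with probability given by a logistic
    choice rule on the difference in discounted values."""
    k = np.exp(TRUE_LOGK)
    VA = design.ProspectA.reward / (1 + k * design.ProspectA.delay)
    VB = design.ProspectB.reward / (1 + k * design.ProspectB.delay)
    p_B = 1 / (1 + np.exp(-(VB - VA)))
    return int(rng.random() < p_B)
```

Swapping this for a function that presents the design on screen and records a keypress is all that is needed to move from simulation to a real experiment.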

Step 5: setup and run the experiment

designs = build_my_design_space(my_arguments)
design_generator = MyCustomDesignGenerator(designs, max_trials=max_trials)
model = MyCustomModel()

model = run_experiment(design_generator, model, max_trials)

Note that use of the run_experiment function is just a demonstration of the logic of how things fit together. As mentioned, please refer to PsychoPy example experiments in the DARC Toolbox to see how this all comes together in a PsychoPy experiment.

Toolboxes using badapted

