Simulation-Based Inference Benchmark
This repository contains a simulation-based inference benchmark framework, `sbibm`, which we describe in the associated manuscript "Benchmarking Simulation-based Inference". The benchmark framework includes tasks, reference posteriors, metrics, plotting, and integrations with SBI toolboxes. The framework is designed to be highly extensible and easily used in new research projects: for each benchmark task, the prior, simulator, and reference posteriors are exposed, so that `sbibm` can be used easily in research code, as we demonstrate below.
To emphasize that `sbibm` can be used independently of any particular analysis pipeline, we split the code for reproducing the experiments of the manuscript into a separate repository hosted at github.com/sbi-benchmark/benchmarking_sbi. Besides the pipeline to reproduce the manuscript's experiments, that repository also hosts the full results, including dataframes for quick comparisons.
If you have questions or comments, please do not hesitate to contact us or open an issue. We welcome contributions, e.g., new tasks, novel metrics, or wrappers for other SBI toolboxes.
Installation
Assuming you have a working Python environment, simply install `sbibm` via `pip`:
$ pip install sbibm
ODE-based models (currently the SIR and Lotka-Volterra models) use Julia via `diffeqtorch`. If you are planning to use these tasks, please additionally follow the installation instructions of `diffeqtorch`. If you are not planning to simulate these tasks for now, you can skip this step.
Tasks
You can then see the list of available tasks by calling `sbibm.get_available_tasks()`. If we want to use, say, the `slcp` task, we can load it using `sbibm.get_task`, as in:
import sbibm
task = sbibm.get_task("slcp")
Next, we might want to get the `prior` and `simulator`:
prior = task.get_prior()
simulator = task.get_simulator()
Calling `prior()` returns a single draw from the prior distribution; `num_samples` can be provided as an optional argument. The following generates 100 samples from the prior and passes them through the simulator:
thetas = prior(num_samples=100)
xs = simulator(thetas)
`xs` is a `torch.Tensor` with shape `(100, 8)`, since for SLCP the data are eight-dimensional. Note that, if required, conversion to and from `torch.Tensor` is very easy: convert to a NumPy array using `.numpy()`, e.g., `xs.numpy()`; for the reverse, use `torch.from_numpy()` on a NumPy array.
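For example, the round trip looks like this (the tensor below is just a stand-in for actual simulator output, assuming `torch` and `numpy` are installed):

```python
import numpy as np
import torch

xs = torch.zeros(100, 8)  # stand-in for simulator output of the SLCP task

# torch.Tensor -> numpy.ndarray (shares the underlying memory on CPU)
xs_np = xs.numpy()

# numpy.ndarray -> torch.Tensor
xs_back = torch.from_numpy(xs_np)
```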
Some algorithms might require evaluating the pdf of the prior distribution. The prior can be obtained as a `torch.Distribution` instance using `task.get_prior_dist()`, which exposes `log_prob` and `sample` methods. The parameters of the prior can be retrieved as a dictionary using `task.get_prior_params()`.
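As a sketch of that interface, here is a stock `torch.Distribution` standing in for the prior (this is not the actual SLCP prior, just an illustration of `sample` and `log_prob`):

```python
import torch

# Stand-in distribution, NOT the actual SLCP prior: it merely illustrates
# the `sample` and `log_prob` interface that `task.get_prior_dist()` exposes.
prior_dist = torch.distributions.MultivariateNormal(
    loc=torch.zeros(5), covariance_matrix=torch.eye(5)
)

thetas = prior_dist.sample((100,))       # 100 draws, shape (100, 5)
log_probs = prior_dist.log_prob(thetas)  # one log density per draw, shape (100,)
```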
For each task, the benchmark contains 10 observations and corresponding reference posterior samples. To fetch the first observation and its reference posterior samples:
observation = task.get_observation(num_observation=1)
reference_samples = task.get_reference_posterior_samples(num_observation=1)
Every task has a number of informative attributes, including:
task.dim_data          # dimensionality of the data, here: 8
task.dim_parameters    # dimensionality of the parameters, here: 5
task.num_observations  # number of different observations x_o available, here: 10
task.name              # name: slcp
task.name_display      # display name: SLCP
Finally, if you want to have a look at the source code of the task, see `sbibm/tasks/slcp/task.py`. If you want to implement a new task, we recommend modelling it after the existing ones. You will see that each task has a private `_setup` method that was used to generate the reference posterior samples.
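To illustrate the shape of the interface a task exposes, here is a purely schematic toy in plain Python. This is NOT the actual `sbibm` task class; it only mirrors the methods this README uses:

```python
import random

# Purely schematic sketch of the task interface described above -- NOT the
# actual sbibm Task base class. It only mirrors methods used in this README.
class ToyTask:
    name = "toy"
    dim_parameters = 1
    dim_data = 1

    def get_prior(self):
        # prior(num_samples) -> list of parameter draws
        return lambda num_samples=1: [random.gauss(0.0, 1.0) for _ in range(num_samples)]

    def get_simulator(self):
        # simulator(thetas) -> one noisy datum per parameter
        return lambda thetas: [theta + random.gauss(0.0, 0.1) for theta in thetas]

task = ToyTask()
prior = task.get_prior()
simulator = task.get_simulator()
xs = simulator(prior(num_samples=100))
```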
Algorithms
As mentioned in the intro, `sbibm` wraps a number of third-party packages to run various algorithms. We found it easiest to give each algorithm the same interface: in general, each algorithm specifies a `run` function that gets the `task` and hyperparameters as arguments, and eventually returns the requested `num_posterior_samples`. That way, one can simply import the `run` function of an algorithm, run it on any given task, and compute metrics on the returned samples. Wrappers for external toolboxes implementing algorithms are in the subfolder `sbibm/algorithms`. Currently, integrations with `sbi`, `pyabc`, and `pyabcranger`, as well as an experimental integration with `elfi`, are provided.
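The common `run` interface can be sketched schematically as follows; both `toy_task` and this `run` are hypothetical stand-ins for illustration, not the actual wrappers in `sbibm/algorithms`:

```python
import random

# Schematic sketch of the shared `run` interface -- NOT actual sbibm wrapper
# code. A hypothetical "algorithm" just draws from the toy task's prior and
# returns the requested number of posterior samples.
def run(task, num_posterior_samples, **hyperparameters):
    prior = task["prior"]  # toy stand-in for task.get_prior()
    return [prior() for _ in range(num_posterior_samples)]

toy_task = {"prior": lambda: random.gauss(0.0, 1.0)}
samples = run(toy_task, num_posterior_samples=10)
```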
Metrics
In order to compare algorithms on the benchmark, a number of different metrics can be computed. Each task comes with reference samples for each observation. Depending on the task, these are obtained either from an analytic solution for the posterior or via a customized likelihood-based approach.
A number of metrics can be computed by comparing algorithm samples to reference samples. To do so, several different two-sample tests are available (see `sbibm/metrics`). These tests follow a simple interface, requiring only that samples from the reference and the algorithm be passed.
For example, in order to compute C2ST:
import torch
from sbibm.metrics.c2st import c2st
from sbibm.algorithms.mcabc import run as run_rej_abc
reference_samples = task.get_reference_posterior_samples(num_observation=1)
algorithm_samples = run_rej_abc(task=task, num_samples=10_000, num_simulations=100_000, num_observation=1)
c2st_accuracy = c2st(reference_samples, algorithm_samples)
For more info, see `help(c2st)`.
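To convey the intuition behind C2ST (a classifier two-sample test: a classifier is trained to distinguish the two sample sets, and accuracy near 0.5 means they are indistinguishable), here is a deliberately minimal 1-D toy with a threshold "classifier". This is not sbibm's implementation, which trains a real classifier:

```python
import random

# Minimal illustration of the classifier two-sample test (C2ST) idea -- NOT
# sbibm's implementation. We "classify" 1-D samples with a threshold halfway
# between the two sample means: accuracy near 0.5 means the sets are hard to
# tell apart, accuracy near 1.0 means they are easy to distinguish.
def toy_c2st(samples_p, samples_q):
    mean_p = sum(samples_p) / len(samples_p)
    mean_q = sum(samples_q) / len(samples_q)
    threshold = (mean_p + mean_q) / 2
    correct = sum(x <= threshold for x in samples_p) + sum(x > threshold for x in samples_q)
    acc = correct / (len(samples_p) + len(samples_q))
    return max(acc, 1 - acc)  # orient the classifier no worse than chance

random.seed(0)
same = toy_c2st([random.gauss(0, 1) for _ in range(1000)],
                [random.gauss(0, 1) for _ in range(1000)])  # near 0.5
diff = toy_c2st([random.gauss(0, 1) for _ in range(1000)],
                [random.gauss(3, 1) for _ in range(1000)])  # near 1.0
```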
Experiments
As mentioned above, we host the code for reproducing the experiments of the manuscript in a separate repository at github.com/sbi-benchmark/benchmarking_sbi. Besides the pipeline to reproduce the manuscript's experiments, the full results, including dataframes for quick comparisons, are provided there.
License
MIT
File details
Details for the file `sbibm-1.0.2.tar.gz`.
File metadata
- Download URL: sbibm-1.0.2.tar.gz
- Upload date:
- Size: 18.5 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.6.1 requests/2.25.1 setuptools/49.6.0.post20210108 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.8.6
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `626447e69100f4fd5a4367aa412e1f490e476e80f2626e12d54f9fed99a5abf3` |
| MD5 | `d4871cea171d29390d2670309640a1f5` |
| BLAKE2b-256 | `bf1e2807db5df6fb3545ad35d3524ed4496e7f6628f7c4bf46c7fee4ee7fe4af` |
File details
Details for the file `sbibm-1.0.2-py2.py3-none-any.whl`.
File metadata
- Download URL: sbibm-1.0.2-py2.py3-none-any.whl
- Upload date:
- Size: 18.6 MB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.3.0 pkginfo/1.6.1 requests/2.25.1 setuptools/49.6.0.post20210108 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.8.6
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `fb64132d7f32904978e6dcf0c19aea5ad39cd20d5337a038730f1b53b2780531` |
| MD5 | `eb47421da8d50a3cfccdf4d94e182e1c` |
| BLAKE2b-256 | `6f939027e8709b4faebc4dc588b75e21e4b7515deaf7857e042a0fee3fc9b16b` |