
cabinetry: design and steer profile likelihood fits

Project description



Introduction

cabinetry is a Python package to build and steer (profile likelihood) template fits, with applications in high energy physics in mind. It acts as an interface to many powerful tools, such as pyhf, to make it easier for an analyzer to run their statistical inference pipeline.

The project is a work in progress. Configuration of cabinetry happens in a declarative manner, and is easily serializable via JSON/YAML into a configuration file.
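As a sketch of what such a declarative configuration might look like, consider the YAML below. The field names follow the general shape of cabinetry configuration files (regions, samples, systematics), but the specific keys and values are illustrative and not guaranteed to match the exact schema:

General:
  Measurement: "minimal_example"
  POI: "Signal_norm"
  HistogramFolder: "histograms/"

Regions:
  - Name: "signal_region"
    Variable: "jet_pt"
    Binning: [200, 300, 400, 500, 600]
    Filter: "lep_charge > 0"

Samples:
  - Name: "signal"
    SamplePath: "signal.root"
    Tree: "nominal"
    Weight: "event_weight"

Systematics:
  - Name: "luminosity"
    Type: "Normalization"
    Up: {"Normalization": 0.02}
    Down: {"Normalization": -0.02}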


Hello world

To run the following example, first generate the input files via the script util/create_ntuples.py.

import cabinetry

config = cabinetry.configuration.load("config_example.yml")

# create template histograms
cabinetry.template_builder.create_histograms(config)

# perform histogram post-processing
cabinetry.template_postprocessor.run(config)

# build a workspace
ws = cabinetry.workspace.build(config)

# run a fit
model, data = cabinetry.model_utils.model_and_data(ws)
fit_results = cabinetry.fit.fit(model, data)

# visualize the post-fit model prediction and data
cabinetry.visualize.data_MC(model, data, config=config, fit_results=fit_results)

The above is an abbreviated version of an example included in example.py, which shows how to use cabinetry. It requires additional libraries beyond the core dependencies of cabinetry, which can be installed via pip install cabinetry[contrib] (or pip install -e .[contrib] from the repository). Eventually the basic implementation (from cabinetry/contrib) will be replaced by calls to external modules (see also Code).

Template fits

The operations needed in a template fit workflow can be summarized as follows:

  1. Template histogram production,
  2. Histogram adjustments,
  3. Workspace creation from histograms,
  4. Inference from workspace,
  5. Visualization.

While the first four steps need to happen in this order (each step uses the output of the previous step as input), visualization is relevant at all stages, not only to show final results but also to inspect intermediate steps.

1. Template histogram production

The production of a template histogram requires the following information:

  • where to find the data (and how to read it),
  • what kind of selection requirements (filtering) and weights to apply to the data,
  • the variable to bin in and which bins to use (for binned fits),
  • a unique name (key) for this histogram to be able to refer to it later on.

In practice, histogram information can be given by specifying lists of:

  • regions of phase space (or channels, independent regions obtained via different selection requirements),
  • samples (physics processes),
  • systematic uncertainties for the samples, which might vary across samples and phase space regions.

For LHC-style template profile likelihood fits, typically a few thousand histograms are needed. An analysis that considers 5 different phase space regions, 10 different physics processes (simulated as 10 independent Monte Carlo samples), and an average of 50 systematic uncertainties per sample (each implemented by specifying variations from the nominal configuration in two directions) needs 5 × 10 × (50 × 2) = 5000 histograms.
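The same counting, spelled out as a trivial snippet (numbers taken from the example above):

# counting template histograms for the example above
n_regions = 5
n_samples = 10
n_systematics = 50  # average number of systematic uncertainties per sample
n_directions = 2    # up and down variations per systematic
print(n_regions * n_samples * n_systematics * n_directions)  # 5000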

2. Histogram adjustments

Histogram post-processing can include re-binning, smoothing, or symmetrization of systematic uncertainties. These operations should be handled by tools outside of cabinetry. Such tools might either need additional steering via a separate configuration, or the basic configuration file has to support arbitrary settings that are passed through to them (depending on what each tool can interpret).
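As an illustration of one such post-processing operation, below is a minimal sketch of a common symmetrization convention, mirroring a one-sided systematic variation around the nominal template. This is a generic example, not the implementation of any particular tool:

import numpy as np

def symmetrize(nominal, one_sided_variation):
    """Mirror a one-sided systematic variation around the nominal template:
    the missing direction is constructed as down = 2 * nominal - up."""
    nominal = np.asarray(nominal)
    up = np.asarray(one_sided_variation)
    return up, 2 * nominal - up

up, down = symmetrize([100.0, 80.0, 45.0], [104.0, 83.0, 46.0])
print(down)  # [96. 77. 44.]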

3. Workspace creation from histograms

Taking the example of pyhf, the workspace creation consists of plugging histograms into the right places in a JSON file. This can be relatively straightforward if the configuration file is very explicit about these assignments. In practice, it is desirable to support convenience options in the configuration file. An example is the ability to de-correlate the effect of a systematic uncertainty across different phase space regions via a simple flag. This means that instead of one nuisance parameter, many nuisance parameters need to be created automatically. The treatment can become complicated when many such convenience functions interact with each other.

A possible approach is to define a lowest-level configuration file format that supports no convenience functions at all, with everything specified in a fully explicit manner. Convenience functions could then be supported by small frameworks that read configuration files containing the corresponding flags and convert them into the low-level format.

The basic task of building the workspace should have well-defined inputs (low-level configuration file) and outputs (such as HistFactory workspaces). Support of convenience functions can be factored out, with a well-defined output (low-level configuration file) and input given by an enhanced configuration file format.
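A sketch of how such a framework could expand a de-correlation flag into explicit per-region nuisance parameters; the dictionary keys here are hypothetical and do not correspond to cabinetry's actual schema:

def decorrelate_regions(systematic, regions):
    """Expand a systematic carrying a de-correlation flag into one
    independent nuisance parameter per phase space region."""
    if not systematic.get("DecorrelateRegions", False):
        return [systematic]
    expanded = []
    for region in regions:
        nuisance = dict(systematic, Name=f"{systematic['Name']}_{region}", Regions=[region])
        nuisance.pop("DecorrelateRegions")
        expanded.append(nuisance)
    return expanded

syst = {"Name": "JES", "DecorrelateRegions": True}
print(decorrelate_regions(syst, ["SR", "CR1"]))
# [{'Name': 'JES_SR', 'Regions': ['SR']}, {'Name': 'JES_CR1', 'Regions': ['CR1']}]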

4. Inference from workspace

Inference happens via fits of the workspace, to obtain best-fit results and uncertainties, limits on parameters, significances of observations and so on. External tools are called to perform inference, configured as specified by the configuration file.
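Taking pyhf as an example again, a direct maximum likelihood fit of a workspace might look roughly as follows. This is a sketch of plain pyhf usage, independent of cabinetry, assuming spec holds a HistFactory workspace specification:

import pyhf

pyhf.set_backend("numpy", "minuit")  # the minuit optimizer provides uncertainties

ws = pyhf.Workspace(spec)  # spec: JSON-like HistFactory workspace specification
model = ws.model()
data = ws.data(model)

best_fit_parameters = pyhf.infer.mle.fit(data, model)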

5. Visualization

Some information about relevant kinds of visualization is provided in as-user-facing/fit-visualization.md and the links therein.

Scope

For now, cabinetry is focused on HistFactory-style template fit models. Those traditional binned template fits are substantially easier to support than the open world of binned and unbinned models. Likelihood-free inference approaches in the style of MadMiner have a better-defined scope than the open world of RooFit, and might be easier to integrate.

Code

Everything in cabinetry/contrib is a basic implementation of tasks that should be done by other tools, and interfaces to those tools should be added. The basic implementations that exist there help with API design.

Acknowledgements


This work was supported by the U.S. National Science Foundation (NSF) cooperative agreement OAC-1836650 (IRIS-HEP).
