cabinetry: design and steer profile likelihood fits

Introduction

cabinetry is a tool to build and steer (profile likelihood) template fits, with applications in high energy physics in mind.
It acts as an interface to many powerful tools to make it easier for an analyzer to run their statistical inference pipeline.
An incomplete list of interesting tools to interface:
- ServiceX for data delivery,
- coffea for histogram processing,
- uproot for reading ROOT files,
- for building likelihood functions (captured in so-called workspaces in RooFit) and inference:
  - RooFit to model probability distributions,
  - RooStats for statistical tools,
  - HistFactory to implement a subset of binned template fits,
  - pyhf for a pythonic take on HistFactory,
  - zfit for a pythonic take on RooFit,
- MadMiner for likelihood-free inference techniques.
The project is a work in progress.
Configuration of cabinetry should happen in a declarative manner, and be easily serializable via JSON/YAML into a configuration file.
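For illustration, such a declarative configuration could look like the following. The schema shown here is a hypothetical sketch, not the settled cabinetry format; since the configuration is plain data, it round-trips through JSON (and equally well through YAML with a library such as pyyaml):

```python
import json

# hypothetical sketch of a declarative fit configuration; the real
# cabinetry schema may differ
config = {
    "General": {"Measurement": "example_fit", "InputPath": "ntuples/"},
    "Regions": [
        {"Name": "signal_region", "Variable": "mjj", "Binning": [200, 400, 600, 1000]}
    ],
    "Samples": [
        {"Name": "signal", "Tree": "signal_tree"},
        {"Name": "background", "Tree": "background_tree"},
    ],
    "Systematics": [
        {"Name": "jet_energy_scale", "Samples": ["signal", "background"]}
    ],
}

# serializing and re-reading the configuration loses no information
roundtrip = json.loads(json.dumps(config))
assert roundtrip == config
```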
Some of the discussion below needs generalization for MadMiner style applications, see also the Scope section.
Hello world
To run the following example, first generate the input files via the script util/create_histograms.py.
import cabinetry

cabinetry_config = cabinetry.configuration.read("config_example.yml")

# create template histograms
histo_folder = "histograms/"
cabinetry.template_builder.create_histograms(
    cabinetry_config, histo_folder, method="uproot"
)

# perform histogram post-processing
cabinetry.template_postprocessor.run(cabinetry_config, histo_folder)

# visualize templates and data
cabinetry.visualize.data_MC(
    cabinetry_config, histo_folder, "figures/", prefit=True, method="matplotlib"
)

# build a workspace
ws = cabinetry.workspace.build(cabinetry_config, histo_folder)

# run a fit
cabinetry.fit.fit(ws)
The above is an abbreviated version of an example included in example.py, which shows how to use cabinetry.
Beyond the core dependencies of cabinetry (currently pyyaml, numpy, pyhf, iminuit), it also requires additional libraries: uproot, scipy, matplotlib, numexpr.
Those additional dependencies are not installed together with cabinetry, as they are only used to perform tasks that are outside the cabinetry core functionality.
Eventually the basic implementation (from cabinetry/contrib) will be replaced by calls to external modules (see also Code).
Template fits
The operations needed in a template fit workflow can be summarized as follows:
- Template histogram production,
- Histogram adjustments,
- Workspace creation from histograms,
- Inference from workspace,
- Visualization.
While the first four steps need to happen in this order (each step uses the output of the previous one as input), visualization is relevant at all stages, not only to show final results but also intermediate steps.
1. Template histogram production
The production of a template histogram requires the following information:
- where to find the data (and how to read it),
- what kind of selection requirements (filtering) and weights to apply to the data,
- the variable to bin in, and what bins to use (for binned fits),
- a unique name (key) for this histogram to be able to refer to it later on.
In practice, histogram information can be given by specifying lists of:
- samples (physics processes),
- regions of phase space (or channels, independent regions obtained via different selection requirements),
- systematic uncertainties for the samples, which might vary across samples and phase space regions.
For LHC-style template profile likelihood fits, typically a few thousand histograms are needed.
An analysis that considers 10 different physics processes (simulated as 10 independent Monte Carlo samples), uses 5 different phase space regions, and applies an average of 50 systematic uncertainties to all samples (each implemented by specifying variations from the nominal configuration in two directions) needs 10 x 5 x (50 x 2) = 5000 histograms.
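The bookkeeping above can be sketched by enumerating every (sample, region, systematic, direction) combination and assigning each a unique key; the names below are placeholders:

```python
from itertools import product

# placeholder names matching the counting example above
samples = [f"sample_{i}" for i in range(10)]
regions = [f"region_{i}" for i in range(5)]
systematics = [f"syst_{i}" for i in range(50)]
directions = ["up", "down"]

# one template histogram per combination, each with a unique key
histogram_keys = [
    f"{s}_{r}_{v}_{d}"
    for s, r, v, d in product(samples, regions, systematics, directions)
]
assert len(histogram_keys) == 10 * 5 * 50 * 2  # 5000 systematic templates
assert len(set(histogram_keys)) == len(histogram_keys)  # keys are unique
```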
2. Histogram adjustments
Histogram post-processing can include re-binning, smoothing, or symmetrization of systematic uncertainties.
These operations should be handled by tools outside of cabinetry.
Such tools might either need some additional steering via an additional configuration, or the basic configuration file has to support arbitrary settings to be passed to these tools (depending on what each tool can interpret).
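As an illustration of one such post-processing step, a one-sided systematic variation can be symmetrized around the nominal template. This is a minimal standalone sketch, not tied to any specific external tool:

```python
def symmetrize(nominal, variation_up):
    """Mirror an upward variation around the nominal template to obtain
    the downward template: down_i = 2 * nominal_i - up_i per bin."""
    return [2 * n - u for n, u in zip(nominal, variation_up)]

# placeholder bin yields
nominal = [100.0, 80.0, 60.0]
up = [110.0, 84.0, 57.0]
down = symmetrize(nominal, up)
# down mirrors the variation bin by bin: [90.0, 76.0, 63.0]
```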
3. Workspace creation from histograms
Taking the example of pyhf, the workspace creation consists of plugging histograms into the right places in a JSON file. This can be relatively straightforward if the configuration file is very explicit about these assignments. In practice, it is desirable to support convenience options in the configuration file. An example is the ability to de-correlate the effect of a systematic uncertainty across different phase space regions via a simple flag. This means that instead of one nuisance parameter, many nuisance parameters need to be created automatically. The treatment can become complicated when many such convenience functions interact with each other.
A possible approach is to define a lowest level configuration file format that supports no convenience functions at all and everything specified in a very explicit manner. Convenience functions could be supported in small frameworks that can read configuration files containing flags for convenience functions, and those small frameworks could convert the configuration file into the low level format.
The basic task of building the workspace should have well-defined inputs (low-level configuration file) and outputs (such as HistFactory workspaces). Support of convenience functions can be factored out, with a well-defined output (low-level configuration file) and input given by an enhanced configuration file format.
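A minimal workspace in the pyhf JSON format illustrates where the histograms end up; all numbers and names below are placeholders. With pyhf installed, pyhf.Workspace(spec) would validate such a specification:

```python
# placeholder template histograms (bin yields for a two-bin region)
signal = [5.0, 10.0]
background = [50.0, 60.0]
observed = [53.0, 72.0]

# a minimal workspace in the pyhf JSON format: histograms are plugged
# into the "data" fields of the corresponding samples
spec = {
    "channels": [
        {
            "name": "signal_region",
            "samples": [
                {
                    "name": "signal",
                    "data": signal,
                    "modifiers": [
                        {"name": "mu", "type": "normfactor", "data": None}
                    ],
                },
                {"name": "background", "data": background, "modifiers": []},
            ],
        }
    ],
    "observations": [{"name": "signal_region", "data": observed}],
    "measurements": [
        {"name": "example_fit", "config": {"poi": "mu", "parameters": []}}
    ],
    "version": "1.0.0",
}
```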
4. Inference from workspace
Inference happens via fits of the workspace, to obtain best-fit results and uncertainties, limits on parameters, significances of observations and so on. External tools are called to perform inference, configured as specified by the configuration file.
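As a toy illustration of the underlying idea (not the cabinetry or pyhf API), a single signal-strength parameter can be estimated by minimizing a Poisson negative log-likelihood over a parameter grid; all inputs are placeholders:

```python
import math

# placeholder templates and observed data (constructed to match mu = 1)
signal = [5.0, 10.0]
background = [50.0, 60.0]
observed = [55.0, 70.0]

def nll(mu):
    """Poisson negative log-likelihood, dropping mu-independent terms."""
    total = 0.0
    for s, b, n in zip(signal, background, observed):
        expected = mu * s + b
        total += expected - n * math.log(expected)
    return total

# coarse grid scan for the best-fit signal strength
grid = [i / 1000 for i in range(0, 3001)]
best_mu = min(grid, key=nll)
# best_mu is 1.0, since the data were generated at mu = 1
```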
5. Visualization
Some information on relevant kinds of visualization is provided in as-user-facing/fit-visualization.md and the links therein.
Configuration file thoughts
Grouping of options
The configuration file is how analyzers specify their fit model. Experience shows that it can get complex quickly. It is desirable to group configuration settings in ways that can make the file easier to read. For example, the color with which to draw a sample in figures does not matter for the fit model. It should be possible to easily hide such options for easier inspection of the configuration file, and this could be achieved by grouping them together as "cosmetics".
Validation
As much as possible, automatic checks of the configuration file structure and content should happen before running any computationally expensive steps. For example, if input data is declared to be at various different locations, a quick check could verify that indeed data can be found at the paths declared. This can quickly flag typos before any histogram production is run.
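A sketch of such a check, verifying declared input paths before any processing starts; the function and configuration keys are hypothetical:

```python
import os

def check_input_paths(config):
    """Return the list of declared input paths that do not exist."""
    declared = [sample["Path"] for sample in config.get("Samples", [])]
    return [path for path in declared if not os.path.exists(path)]

# a config pointing at one existing and one missing location
config = {
    "Samples": [
        {"Name": "data", "Path": os.getcwd()},
        {"Name": "signal", "Path": "no/such/ntuple.root"},
    ]
}
missing = check_input_paths(config)
# missing contains only the nonexistent path, flagging the typo early
```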
Interactions with other existing frameworks
While ambitious, it would be great to be able to translate configurations of other existing frameworks into a cabinetry configuration, to be able to easily run detailed comparisons.
Some relevant work for TRExFitter exists here.
Where to specify file paths
Events for a given histogram are located at some path that can be specified by the sample name, region name, and systematic variation. It is unclear how to support as many structures as possible, while limiting the amount of options needed to specify them. See the related issue #16.
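One possible approach, sketched here as an assumption rather than a settled design, is a path template whose placeholders are filled in from the sample, region, and systematic names:

```python
# hypothetical path template; the actual scheme is still under discussion
template = "ntuples/{region}/{sample}_{systematic}.root"

def resolve_path(template, sample, region, systematic="nominal"):
    """Fill in the placeholders of a path template for one histogram."""
    return template.format(sample=sample, region=region, systematic=systematic)

path = resolve_path(template, sample="ttbar", region="signal_region")
# path == "ntuples/signal_region/ttbar_nominal.root"
```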
Single-element lists
In multiple places in the config, lists of samples, regions, systematics etc. are needed. These could look like this:
"Samples": ["ABC", "DEF"]
For cases where only a single entry is needed, it could either still be written as a single-element list, or alternatively as
"Samples": "ABC"
which turns the value into a string instead. It is desirable to have consistency: during config parsing, everything could be put into a list as needed, or the code further downstream could handle both cases. While forcing the user to always write a list, as in
"Samples": ["ABC"]
might be less aesthetically pleasing, it still might be the best solution overall, as it also spares other tools using the same config from having to implement the parsing of different value types.
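If both spellings were allowed, a small normalization step during config parsing could hide the difference from downstream code:

```python
def to_list(value):
    """Wrap a single string into a one-element list; pass lists through."""
    if isinstance(value, str):
        return [value]
    return list(value)

# both spellings normalize to the same representation
assert to_list("ABC") == ["ABC"]
assert to_list(["ABC", "DEF"]) == ["ABC", "DEF"]
```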
Reserved values for convenience
For a systematic uncertainty affecting all existing samples, it might be convenient to support a setting like "Samples": "ALL".
This requires reserving such keywords: no sample could be allowed to have that name.
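Resolving such a keyword could look like the following sketch; the keyword handling and error behavior are assumptions:

```python
ALL_SAMPLES = "ALL"  # hypothetical reserved keyword

def resolve_samples(setting, known_samples):
    """Expand the reserved keyword to the full sample list, and reject
    configurations where a real sample shadows the keyword."""
    if ALL_SAMPLES in known_samples:
        raise ValueError(f"sample name {ALL_SAMPLES!r} is reserved")
    if setting == ALL_SAMPLES:
        return list(known_samples)
    return [setting] if isinstance(setting, str) else list(setting)

samples = ["signal", "background"]
affected = resolve_samples("ALL", samples)
# affected == ["signal", "background"]
```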
Scope
Traditional binned template fits in HistFactory style are substantially easier to support than the open world of binned and unbinned models. Likelihood-free inference approaches in the style of MadMiner have a more well-defined scope than the open world of RooFit, and might be easier to integrate.
Code
Currently experimenting with a functional approach.
This may or may not change in the future.
Everything in cabinetry/contrib is a very basic implementation of tasks that should eventually be done by other tools, with interfaces to those tools added instead.
The basic implementations that exist there help with API design.
Acknowledgements
This work was supported by the U.S. National Science Foundation (NSF) cooperative agreement OAC-1836650 (IRIS-HEP).