
Snakemake-like pipeline manager for reproducible Jupyter Notebooks


Snakemake-like pipelines for Jupyter Notebooks, producing interactive pipeline reports.

Install & general remarks

This software is still in its early days, so please bear in mind that it is not yet ready for production. Note: for simplicity, the instructions below assume a recent Ubuntu with git installed.

pip install nbpipeline

Graphviz is required for static SVG plots:

sudo apt-get install graphviz libgraphviz-dev graphviz-dev

Development install

To install the latest development version you may use:

git clone https://github.com/krassowski/nbpipeline
cd nbpipeline
pip install -r requirements.txt
ln -s $(pwd)/nbpipeline/nbpipeline.py ~/bin/nbpipeline

Quickstart

Create a pipeline.py file with a list of rules for your pipeline. For example:

from nbpipeline.rules import NotebookRule


NotebookRule(
    'Extract protein data',  # a nice name for the step
    input={'protein_data_path': 'data/raw/data_from_wetlab.xlsx'},
    output={'output_path': 'data/clean/protein_levels.csv'},
    notebook='analyses/Data_extraction.ipynb',
    group='Proteomics'  # this is optional
)

NotebookRule(
    'Quality control and PCA on proteins',
    input={'protein_levels_path': 'data/clean/protein_levels.csv'},
    output={'qc_report_path': 'reports/proteins_failing_qc.csv'},
    notebook='analyses/Exploration_and_quality_control.ipynb',
    group='Proteomics'
)

The keys of the input and output dictionaries should correspond to variables defined in one of the first cells of the corresponding notebook, which should be tagged as “parameters”. You will be warned if your notebook has no cell tagged as “parameters”.
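For illustration, the “parameters” cell of analyses/Data_extraction.ipynb for the first rule above might look like the following sketch; the variable names mirror the rule's input and output keys, and the default values shown are only placeholders that are overridden when the pipeline runs:

# Cell tagged "parameters" in analyses/Data_extraction.ipynb
# Variable names match the keys of the rule's input and output dicts;
# these defaults are placeholders, replaced by nbpipeline at execution time.
protein_data_path = 'data/raw/data_from_wetlab.xlsx'
output_path = 'data/clean/protein_levels.csv'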

For more details, please see the example pipeline and notebooks in the examples directory.

Run the pipeline:

nbpipeline

On any subsequent run, notebooks which did not change will not be run again. To disable this cache, use the --disable_cache switch.
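For example, to force every notebook to be executed again regardless of the cache:

nbpipeline --disable_cache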

To generate an interactive diagram of the rules graph, together with a reproducibility report, add the -i switch:

nbpipeline -i

The software defaults to google-chrome for displaying the graph visualization; this can be changed with a CLI option.

If you named your definitions file differently (e.g. my_rules.py instead of pipeline.py), use:

nbpipeline --definitions_file my_rules.py

To display all command line options use:

nbpipeline -h

Troubleshooting

If you see ModuleNotFoundError: No module named 'name_of_your_local_module', you may need to set the Python path explicitly by running nbpipeline with:

PYTHONPATH=/path/to/the/parent/of/local/module:$PYTHONPATH nbpipeline

Oftentimes the path is the same as the current directory, so the following command may work:

PYTHONPATH=$(pwd):$PYTHONPATH nbpipeline

