A little wrapper around `uv` to launch ephemeral Jupyter notebooks.

Project description

juv

A toolkit for reproducible Jupyter notebooks, powered by uv.

Features

  • 🗂️ Create, manage, and run reproducible notebooks
  • 📌 Pin dependencies with PEP 723 - inline script metadata
  • 🚀 Launch ephemeral sessions for multiple front ends (e.g., JupyterLab, Notebook, NbClassic)
  • ⚡ Powered by uv for fast dependency management

Installation

juv requires uv v0.4 or later.

You can install the juv cli globally:

uv tool install juv

or use the uvx command to invoke it without installing:

uvx juv

Usage

juv should feel familiar to uv users. The goal is to extend uv's dependency management to Jupyter notebooks.

# create a notebook
juv init notebook.ipynb
juv init --python=3.9 notebook.ipynb # specify a minimum Python version

# add dependencies to the notebook
juv add notebook.ipynb pandas numpy
juv add notebook.ipynb --requirements=requirements.txt

# launch the notebook
juv run notebook.ipynb
juv run --with=polars notebook.ipynb # additional dependencies for this session (not saved)
juv run --jupyter=notebook@6.4.0 notebook.ipynb # pick a specific Jupyter frontend

# JUV_JUPYTER env var to set preferred Jupyter frontend (default: lab)
export JUV_JUPYTER=nbclassic
juv run notebook.ipynb

If you pass a Python script to `juv run`, it is converted to a notebook before the Jupyter session launches.

uvx juv run script.py
# Converted script to notebook `script.ipynb`
# Launching Jupyter session...
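Under the hood, `juv add` records dependencies as PEP 723 inline script metadata. A sketch of what that block looks like after `juv add notebook.ipynb pandas numpy` with a pinned Python version (shown here as it would appear in an exported script; the exact cell placement inside the notebook is a juv implementation detail):

```python
# /// script
# requires-python = ">=3.9"
# dependencies = [
#     "pandas",
#     "numpy",
# ]
# ///
```

Because the metadata travels with the notebook itself, no external requirements.txt or pyproject.toml is needed to reproduce the environment.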

Motivation

Rethinking the "getting started" guide for notebooks

Jupyter notebooks are the de facto standard for data science, yet they suffer from a reproducibility crisis.

This issue is a clear example of how our tools shape our practices, not some fundamental lack of care for reproducibility. In this case, established tools and workflows simply fail to help us fall into the pit of success when it comes to reproducibility with notebooks, particularly with regard to dependency management. Notebooks are much like one-off Python scripts and most often are not part of a package.

Being a "good steward" of notebooks in this context requires discipline (due to the manual nature of virtual environments) and knowledge of Python packaging - a somewhat unreasonable expectation for domain experts who are focused on solving problems, not software engineering.

You'll often see a "getting started" guide in the wild like this:

python -m venv venv
source venv/bin/activate
pip install -r requirements.txt # or just pip install pandas numpy, etc
jupyter lab

Four lines of code, and several things can go wrong. Which Python version? Which package versions? What if we forget to activate the virtual environment?

The gold standard for a "getting started" guide should be a single command (i.e., no guide at all).

<magic tool> run notebook.ipynb

However, this gold standard has long been out of reach for Jupyter notebooks. Why?

First, virtual environments are a leaky abstraction and deeply ingrained in the Python psyche: create, activate, install, run. Their historical "cost" has forced us to treat them as entities that must be managed explicitly. In fact, an entire ecosystem of tooling and best practices is oriented around supporting long-lived environments rather than ephemeral ones. End users separately create and then mutate virtual environments with low-level tools like pip. The manual nature and overhead of these steps encourage sharing environments across projects - a poor practice for reproducibility.

Second, only Python packages could historically specify their dependencies. Lots of data science code lives in notebooks, not packages, and there has not been a way to specify dependencies for standalone scripts without external files (e.g., requirements.txt).

Aligning of the stars

Two key ideas have changed my perspective on this problem and inspired juv:

  • Virtual environments are now "cheap". If you'd asked me a year ago, I would have said virtual environments were a necessary evil. uv is such a departure from the status quo that it forces us to rethink best practices. Environments are now created faster than JupyterLab starts - why keep them around at all?

  • PEP 723. Inline script metadata introduces a standard way to specify dependencies in standalone Python scripts. A single file can now contain everything needed to run it, without relying on external files like requirements.txt or pyproject.toml.

So, what if:

  • Environments were disposable by default?
  • Notebooks could specify their own dependencies?

This is the vision of juv.

[!NOTE] Dependency management is just one reproducibility challenge in notebooks (non-linear execution being another). juv aims to solve this specific pain point for the existing ecosystem. I'm personally excited for initiatives that rethink notebooks from the ground up and make this kind of tool obsolete.

How

PEP 723 (inline script metadata) allows specifying dependencies as comments within Python scripts, enabling self-contained, reproducible execution. This feature could significantly improve reproducibility in the data science ecosystem, since many analyses are shared as standalone code (not packages). However, a lot of data science code lives in notebooks (.ipynb files), not Python scripts (.py files).

juv bridges this gap by:

  • Extending PEP 723-style metadata support from uv to Jupyter notebooks
  • Launching Jupyter sessions for various notebook front ends (e.g., JupyterLab, Notebook, NbClassic) with the specified dependencies

It's a simple Python script that parses the notebook and starts a Jupyter session with the specified dependencies (piggybacking on uv's existing functionality).

Alternatives

juv is opinionated and might not suit your preferences. That's ok! uv is super extensible, and I recommend reading the wonderful documentation to learn about its primitives.

For example, you can achieve a similar workflow using the --with-requirements flag:

uvx --with-requirements=requirements.txt --from=jupyter-core --with=jupyterlab jupyter lab notebook.ipynb

While slightly more verbose and breaking self-containment, this approach totally works and saves you from installing another dependency.

Download files

Download the file for your platform.

Source Distribution

juv-0.2.2.tar.gz (32.0 kB)

Uploaded Source

Built Distribution

juv-0.2.2-py3-none-any.whl (11.3 kB)

Uploaded Python 3

File details

Details for the file juv-0.2.2.tar.gz.

File metadata

  • Download URL: juv-0.2.2.tar.gz
  • Upload date:
  • Size: 32.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for juv-0.2.2.tar.gz

  • SHA256: 0e0c5a7e530017df52f015f06555c9c0c2ba9861291709e6ffb01a1180bf7309
  • MD5: eb4a23933f61a5428bdf849498679943
  • BLAKE2b-256: c00b467ccb135a51d7769cf467bb1cf6c0299977cd8c0188e6f5aa38b57215bf


File details

Details for the file juv-0.2.2-py3-none-any.whl.

File metadata

  • Download URL: juv-0.2.2-py3-none-any.whl
  • Upload date:
  • Size: 11.3 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/5.1.1 CPython/3.12.7

File hashes

Hashes for juv-0.2.2-py3-none-any.whl

  • SHA256: 5befd6d267133e8ffd39eaae6a580904f7e3b6e5e5c5cb65fc992b18f7bbb312
  • MD5: 80e13b0cd5edd077f3a65a987431cd4c
  • BLAKE2b-256: a4f041e6671d833ec5ec49fa6a1e2239e7536cc758a0b130b8af501c689100b2

