LaminDB: Data lake for biology

LaminDB is an API layer on top of your existing infrastructure for managing your data & analyses.

Public beta: Currently only recommended for collaborators as we still make breaking changes.

Update 2023-06-14:

- We completed a major migration from SQLAlchemy/SQLModel to Django, available in 0.42.0.
- The last version that is fully compatible with SQLAlchemy/SQLModel is 0.41.2.

Features

Free:

  • Track data lineage across notebooks, pipelines & apps.
  • Manage biological registries, ontologies & features.
  • Persist, load & stream data objects with a single line of code.
  • Query for anything, define & manage custom schemas.
  • Manage data on your laptop, server or cloud infra.
  • Use a mesh of distributed LaminDB instances for different teams and purposes.
  • Share instances through a Hub akin to GitHub.

Enterprise:

  • Explore & share data, submit samples & track lineage with LaminApp (deployable in your infra).
  • Receive support, code templates & services for building a biotech data & analytics platform.

How does it work?

LaminDB builds semantics of R&D and biology onto well-established tools:

  • SQLite & Postgres for SQL databases using Django ORM (previously: SQLModel)
  • S3, GCP & local storage for object storage using fsspec
  • Configurable storage formats: pyarrow, anndata, zarr, etc.
  • Biological knowledge sources & ontologies: see Bionty

LaminDB is open source. For details, see Architecture.

Installation

pip install lamindb  # basic data lake
pip install 'lamindb[bionty]'  # biological entities
pip install 'lamindb[nbproject]'  # Jupyter notebook tracking
pip install 'lamindb[aws]'  # AWS dependencies (s3fs, etc.)
pip install 'lamindb[gcp]'  # GCP dependencies (gcsfs, etc.)

Quick setup

Why do I have to sign up?

  • Data lineage requires a user identity (who modified which data when?).
  • Collaboration requires a user identity (who shares this with me?).

Signing up takes 1 min.

We store only basic metadata about you (email address, etc.) & your instances (S3 bucket names, etc.), never your data.

  • Sign up: lamin signup <email>
  • Log in: lamin login <handle>
  • Init a data lake: lamin init --storage <default-storage-location>
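
A first-time setup might look like this (email, handle & bucket are placeholders):

lamin signup mary@lab.org
lamin login mary
lamin init --storage s3://my-bucket  # or a local directory, e.g., ./mydata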

Usage overview

import lamindb as ln

Store & load data artifacts

Store a DataFrame or an AnnData in default local or cloud storage:

import pandas as pd

df = pd.DataFrame({"feat1": [1, 2], "feat2": [3, 4]})

ln.File(df, name="My dataframe").save()  # create a File object and save it

Get it back:

file = ln.File.select(name="My dataframe").one()  # query for it
df = file.load()  # load it into memory

Track & query data lineage

ln.File.select(created_by__handle="lizlemon").df()   # all files ingested by lizlemon
ln.File.select().order_by("-updated_at").first()  # latest updated file
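
Because select() takes Django-style field lookups, filters combine freely; a hypothetical example (the handle is a placeholder & the suffix field name is an assumption):

ln.File.select(
    created_by__handle="lizlemon", suffix=".parquet"
).order_by("-updated_at").df()  # her parquet files, most recently updated first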

Notebooks

Track a Jupyter Notebook:

ln.track()  # auto-detect notebook metadata, save as a Transform, create a Run
ln.File("my_artifact.parquet").save()  # this file is an output of the notebook run

When you query this file later on, you'll always know where it came from:

file = ln.File.select(name="my_artifact.parquet").one()
file.transform  # gives you the notebook with title, filename, version, id, etc.
file.run  # gives you the run of the notebook that created the file

Of course, you can also query for notebooks:

transforms = ln.Transform.select(  # all notebooks with 'T cell' in the title created in 2022
    name__contains="T cell", type="notebook", created_at__year=2022
).all()
ln.File.select(transform__in=transforms).all()  # data artifacts created by these notebooks

Pipelines

To save a pipeline (complementary to workflow tools) to the Transform registry, call

ln.Transform(name="Awesom-O", version="0.41.2").save()  # save a pipeline

To track a run of a registered pipeline:

transform = ln.Transform.select(name="Awesom-O", version="0.41.2").one()  # select a pipeline from the registry
ln.track(transform)  # create a new global run context
ln.File("s3://my_samples01/my_artifact.fastq.gz").save()  # link file against run & transform

Now, you can query for the runs of this pipeline, e.g.:

ln.Run.select(transform__name="Awesom-O").order_by("-created_at").df()  # get the latest pipeline runs

Lookup categoricals with auto-complete

When you're unsure about spellings, use a lookup object:

lookup = ln.Transform.lookup()
ln.Run.select(transform=lookup.awesom_o)
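
Record names are mapped to Python-safe keys (lowercased, with non-alphanumeric characters replaced by underscores), so "Awesom-O" shows up as lookup.awesom_o with tab completion.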

Manage biological registries

lamin init --storage ./myobjects --schema bionty

...

Track biological features

...

Track biological samples

...

Manage custom schemas

  1. Create a GitHub repository with Django ORMs similar to github.com/laminlabs/lnschema-lamin1
  2. Create & deploy migrations via lamin migrate create and lamin migrate deploy

It's fastest if we do this for you based on our templates within an enterprise plan, but you can fully manage the process yourself.
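
Roughly, a custom schema module contains plain Django models; a minimal sketch (the Treatment entity & its fields are illustrative assumptions, not the actual lnschema-lamin1 models):

from django.db import models

class Treatment(models.Model):
    """A hypothetical registry for treatments applied to samples."""

    name = models.CharField(max_length=128, unique=True)  # e.g., a compound name
    dose_nm = models.FloatField(null=True)  # dose in nanomolar
    created_at = models.DateTimeField(auto_now_add=True)  # audit timestamp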

Notebooks

  • Find all guide notebooks here.
  • You can run these notebooks in hosted versions of JupyterLab (e.g., Saturn Cloud, Google Vertex AI) or on Google Colab.
  • Jupyter Lab & Notebook offer a fully interactive experience; VS Code & others require the CLI (lamin track my-notebook.ipynb).

Architecture

LaminDB consists of the lamindb Python package, which builds on a number of open-source packages:

  • bionty: Basic biological entities (usable standalone; see the sketch after this list).
  • lamindb-setup: Setup & configure LaminDB, client for Lamin Hub.
  • lnschema-core: Core schema, ORMs to model data objects & data lineage.
  • lnschema-bionty: Bionty schema, ORMs that are coupled to Bionty's entities.
  • lnschema-lamin1: Exemplary configured schema to track samples, treatments, etc.
  • nbproject: Parse metadata from Jupyter notebooks.
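
For instance, bionty can be used standalone to access reference tables for biological entities; a minimal sketch (constructor arguments & method names may differ between bionty versions):

import bionty as bt

gene = bt.Gene(species="human")  # reference table for human genes
df = gene.df()  # the reference table as a DataFrame
lookup = gene.lookup()  # auto-complete lookup, akin to ln.Transform.lookup()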

LaminHub & LaminApp are not open-sourced, and neither are the templates that model lab operations.

Lamin's packages build on the open-source infrastructure listed under "How does it work?". Previously, they were based on SQLAlchemy/SQLModel instead of Django, and on cloudpathlib instead of fsspec.

Documentation

Read the docs.
