MLCommons datasets format.

mlcroissant 🥐

Discover mlcroissant 🥐 with this introduction tutorial in Google Colab.

Python requirements

Python version >= 3.10.

If you do not have a Python environment:

python3 -m venv ~/py3
source ~/py3/bin/activate

Install

python -m pip install ".[dev]"

Verify/load a Croissant dataset

python scripts/validate.py --file ../../datasets/titanic/metadata.json

The command:

  • Exits with 0, prints Done, and displays any encountered warnings when no error was found in the file.
  • Exits with 1 and displays all encountered errors and warnings otherwise.
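The exit-code convention above can be sketched with a toy validator in plain Python. The real checks live in mlcroissant's structure graph; check_metadata below is a made-up stand-in, not the library's API:

```python
def check_metadata(metadata: dict) -> tuple[list[str], list[str]]:
    """Toy stand-in for Croissant validation: returns (errors, warnings)."""
    errors, warnings = [], []
    if "name" not in metadata:
        errors.append("Missing mandatory field: name")
    if "description" not in metadata:
        warnings.append("Recommended field missing: description")
    return errors, warnings


def validate(metadata: dict) -> int:
    """Mimics validate.py's contract: exit 0 on warnings only, 1 on errors."""
    errors, warnings = check_metadata(metadata)
    for issue in errors + warnings:
        print(issue)
    if errors:
        return 1  # errors were found: non-zero exit code
    print("Done")
    return 0  # no errors: warnings (if any) were printed above
```
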

Similarly, you can load records from a dataset by launching:

python scripts/load.py \
    --file ../../datasets/titanic/metadata.json \
    --record_set passengers \
    --num_records 10
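Conceptually, --num_records just truncates the stream of generated records. The same pattern in plain Python, where the passengers generator is a stand-in for a real Croissant record set:

```python
from itertools import islice


def passengers():
    """Stand-in for a Croissant record set that yields records lazily."""
    for i in range(1000):
        yield {"id": i, "name": f"passenger_{i}"}


# Equivalent of --num_records 10: stop after the first 10 records,
# without materializing the rest of the dataset.
first_ten = list(islice(passengers(), 10))
print(len(first_ten))
```
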

Programmatically build JSON-LD files

You can programmatically build Croissant JSON-LD files using the Python API.

import mlcroissant as mlc

metadata = mlc.nodes.Metadata(
    name="...",
)
metadata.to_json()  # This returns the JSON-LD file.

For a full working example, refer to the script to convert Hugging Face datasets to Croissant files. This script uses the Python API to programmatically build JSON-LD files.
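To give a feel for the output, a Croissant file is JSON-LD describing a schema.org Dataset. A minimal hand-built sketch follows; the field names are illustrative, and the exact @context and fields that to_json() emits may differ:

```python
import json

# Hand-built sketch of a minimal Croissant JSON-LD file.
# Field names follow schema.org; the real to_json() output may include more.
croissant = {
    "@context": {"@vocab": "https://schema.org/"},
    "@type": "sc:Dataset",
    "name": "titanic",
    "description": "Passenger list of the Titanic.",
}

print(json.dumps(croissant, indent=2))
```
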

Run tests

All tests can be run from the Makefile:

make tests

Design

The most important modules in the library are:

  • mlcroissant/_src/structure_graph is responsible for the static analysis of the Croissant files. We convert Croissant files to a Python representation called "structure graph" (using NetworkX). In the process, we catch any static analysis issues (e.g., a missing mandatory field or a logic problem in the file).
  • mlcroissant/_src/operation_graph is responsible for the dynamic analysis of the Croissant files (i.e., actually loading the dataset by yielding examples). We convert the structure graph into an "operation graph". Operations are the unit transformations (like Download, Extract, etc.) used to build the dataset.
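The idea of an operation graph can be sketched in plain Python: operations are nodes, edges are dependencies, and loading a dataset amounts to executing the operations in topological order. The operation names below are illustrative, not mlcroissant's actual classes:

```python
from graphlib import TopologicalSorter

# Illustrative unit operations; the real library has Download, Extract, etc.
# Each key maps an operation to the operations it depends on.
dependencies = {
    "Download": set(),
    "Extract": {"Download"},
    "ReadCsv": {"Extract"},
    "YieldRecords": {"ReadCsv"},
}

# Loading the dataset = walking the graph with dependencies first.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```
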

For the other important modules and the full design, refer to the design doc, which gives an overview of the implementation.

Contribute

All contributions are welcome! We even have good first issues to start in the project. Refer to the GitHub project for more detailed user stories and read above how the repo is designed.

An easy way to contribute to mlcroissant is using Croissant's configured codespaces. To start a codespace:

  • On Croissant's main repo page, click on the <Code> button and select the Codespaces tab. You can start a new codespace by clicking on the + sign on the left side of the tab. By default, the codespace will start on Croissant's main branch, unless you select otherwise from the branches drop-down menu on the left side.
  • While building the environment, your codespace will install all of mlcroissant's required dependencies, so you can start coding right away! Of course, you can further personalize your codespace.
  • To start contributing to Croissant:
    • Create a new branch from the Terminal tab in the bottom panel of your codespace with git checkout -b feature/my-awesome-new-feature
    • You can create new commits, and run most git commands from the Source Control tab in the left panel of your codespace. Alternatively, use the Terminal in the bottom panel of your codespace.
    • Iterate on your code until all tests are green (you can run tests with make pytest or from the Tests tab in the left panel of your codespace).
    • Open a pull request (PR) against the main branch of https://github.com/mlcommons/croissant, and ask for feedback!

Alternatively, you can contribute to mlcroissant using the "classic" GitHub workflow: fork the repository, push your changes to your fork, and open a pull request.

Debug

You can debug the validation of the file using the --debug flag:

python scripts/validate.py --file ../../datasets/titanic/metadata.json --debug

This will:

  1. print extra information, like the generated nodes;
  2. save the generated structure graph to a folder indicated in the logs.

Publishing wheels

Publishing is done manually. We are in the process of setting up an automatic deployment with GitHub Actions.

  1. Bump the version in croissant/python/mlcroissant/pyproject.toml.
  2. Build locally:
python -m build
  3. Upload to pypi.org.

