
HuggingFace community-driven open-source library of datasets

Project description




🤗 Datasets is a lightweight library providing two main features:

  • one-line dataloaders for many public datasets: one-liners to download and pre-process any of the major public datasets (in 467 languages and dialects!) provided on the HuggingFace Datasets Hub. With a simple command like squad_dataset = load_dataset("squad"), get any of these datasets ready to use in a dataloader for training/evaluating an ML model (NumPy/pandas/PyTorch/TensorFlow/JAX),
  • efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. With simple commands like tokenized_dataset = dataset.map(tokenize_example), efficiently prepare the dataset for inspection and ML model evaluation and training.

🎓 Documentation 🕹 Colab tutorial

🔎 Find a dataset in the Hub 🌟 Add a new dataset to the Hub

🤗 Datasets also provides access to +15 evaluation metrics and is designed to let the community easily add and share new datasets and evaluation metrics.

🤗 Datasets has many additional interesting features:

  • Thrive on large datasets: 🤗 Datasets naturally frees the user from RAM limitations; all datasets are memory-mapped using an efficient zero-serialization-cost backend (Apache Arrow).
  • Smart caching: never wait for your data to be processed several times.
  • Lightweight and fast with a transparent and pythonic API (multi-processing/caching/memory-mapping).
  • Built-in interoperability with NumPy, pandas, PyTorch, TensorFlow 2 and JAX (see the sketch below this list).
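
For example, after tokenizing a dataset you can ask it to return PyTorch tensors, or materialize part of it as a pandas DataFrame. A minimal sketch, assuming torch and 🤗 Transformers are installed (output comments are indicative):

from datasets import load_dataset
from transformers import AutoTokenizer

squad = load_dataset('squad', split='train')
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

# Tokenize the contexts, then expose the numeric columns as PyTorch tensors
squad = squad.map(lambda x: tokenizer(x['context'], truncation=True), batched=True)
squad.set_format(type='torch', columns=['input_ids', 'attention_mask'])
print(type(squad[0]['input_ids']))  # <class 'torch.Tensor'>

# Or convert a slice of the dataset to a pandas DataFrame
df = squad.select(range(100)).to_pandas()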

🤗 Datasets originated from a fork of the awesome TensorFlow Datasets and the HuggingFace team wants to deeply thank the TensorFlow Datasets team for building this amazing library. More details on the differences between 🤗 Datasets and tfds can be found in the section Main differences between 🤗 Datasets and tfds.

Installation

With pip

🤗 Datasets can be installed from PyPI and has to be installed in a virtual environment (venv or conda, for instance):

pip install datasets
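
For example, the virtual environment can be created with venv before installing (the environment name .env here is just an illustration):

python -m venv .env
source .env/bin/activate
pip install datasets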

With conda

🤗 Datasets can be installed using conda as follows:

conda install -c huggingface -c conda-forge datasets

Follow the installation pages of TensorFlow and PyTorch to see how to install them with conda.

For more details on installation, check the installation page in the documentation: https://huggingface.co/docs/datasets/installation.html

Installation to use with PyTorch/TensorFlow/pandas

If you plan to use 🤗 Datasets with PyTorch (1.0+), TensorFlow (2.2+) or pandas, you should also install PyTorch, TensorFlow or pandas.
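
For example (the exact install commands depend on your platform; see the TensorFlow and PyTorch installation pages for details):

pip install torch        # PyTorch
pip install tensorflow   # TensorFlow 2
pip install pandas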

For more details on using the library with NumPy, pandas, PyTorch or TensorFlow, check the quick tour page in the documentation: https://huggingface.co/docs/datasets/quicktour.html

Usage

🤗 Datasets is made to be very simple to use. The main methods are:

  • datasets.list_datasets() to list the available datasets
  • datasets.load_dataset(dataset_name, **kwargs) to instantiate a dataset
  • datasets.list_metrics() to list the available metrics
  • datasets.load_metric(metric_name, **kwargs) to instantiate a metric

Here is a quick example:

from datasets import list_datasets, load_dataset, list_metrics, load_metric

# Print all the available datasets
print(list_datasets())

# Load a dataset and print the first example in the training set
squad_dataset = load_dataset('squad')
print(squad_dataset['train'][0])

# List all the available metrics
print(list_metrics())

# Load a metric
squad_metric = load_metric('squad')

# Process the dataset - add a column with the length of the context texts
dataset_with_length = squad_dataset.map(lambda x: {"length": len(x["context"])})

# Process the dataset - tokenize the context texts (using a tokenizer from the 🤗 Transformers library)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')

tokenized_dataset = squad_dataset.map(lambda x: tokenizer(x['context']), batched=True)
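
The loaded metric can then be used to score predictions against references. A minimal sketch with made-up prediction values (the 'id' fields only need to match between predictions and references):

# Score a (toy) prediction with the SQuAD metric
predictions = [{'id': 'example-0', 'prediction_text': 'Denver Broncos'}]
references = [{'id': 'example-0', 'answers': {'text': ['Denver Broncos'], 'answer_start': [177]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}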

For more details on using the library, check the quick tour page in the documentation: https://huggingface.co/docs/datasets/quicktour.html

Another introduction to 🤗 Datasets is the tutorial on Google Colab.

Add a new dataset to the Hub

We have a very detailed step-by-step guide to add a new dataset to the datasets already provided on the HuggingFace Datasets Hub.

You will find the step-by-step guide here to add a dataset to this repository.

You can also have your own repository for your dataset on the Hub under your or your organization's namespace and share it with the community. More information in the documentation section about dataset sharing.

Main differences between 🤗 Datasets and tfds

If you are familiar with the great TensorFlow Datasets, here are the main differences between 🤗 Datasets and tfds:

  • the scripts in 🤗 Datasets are not provided within the library but are queried, downloaded/cached and dynamically loaded upon request
  • 🤗 Datasets also provides evaluation metrics in a similar fashion to the datasets, i.e. as dynamically installed scripts with a unified API. This gives access to the pair of a benchmark dataset and its benchmark metric, for instance for benchmarks like SQuAD or GLUE.
  • the backend serialization of 🤗 Datasets is based on Apache Arrow instead of TFRecords and leverages Python dataclasses for info and features, with some diverging features (we mostly don't do encoding and store the raw data as much as possible in the backend serialization cache).
  • the user-facing dataset object of 🤗 Datasets is not a tf.data.Dataset but a built-in framework-agnostic dataset class with methods inspired by what we like in tf.data (like a map() method). It essentially wraps a memory-mapped Arrow table cache, as the sketch below illustrates.
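
As a small illustration of that last point, the dataset object exposes its schema and the Arrow cache files that back it (attribute values shown are indicative):

from datasets import load_dataset

squad = load_dataset('squad', split='train')
print(type(squad))        # <class 'datasets.arrow_dataset.Dataset'>, not a tf.data.Dataset
print(squad.features)     # the feature schema stored alongside the Arrow table
print(squad.cache_files)  # the memory-mapped Arrow file(s) backing the dataset
print(squad.num_rows)     # 87599 examples in the SQuAD training split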

Disclaimers

Similar to TensorFlow Datasets, 🤗 Datasets is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use them. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!

BibTeX

If you want to cite our 🤗 Datasets paper and library, you can use these:

@misc{lhoest2021datasets,
      title={Datasets: A Community Library for Natural Language Processing},
      author={Quentin Lhoest and Albert Villanova del Moral and Yacine Jernite and Abhishek Thakur and Patrick von Platen and Suraj Patil and Julien Chaumond and Mariama Drame and Julien Plu and Lewis Tunstall and Joe Davison and Mario Šaško and Gunjan Chhablani and Bhavitvya Malik and Simon Brandeis and Teven Le Scao and Victor Sanh and Canwen Xu and Nicolas Patry and Angelina McMillan-Major and Philipp Schmid and Sylvain Gugger and Clément Delangue and Théo Matussière and Lysandre Debut and Stas Bekman and Pierric Cistac and Thibault Goehringer and Victor Mustar and François Lagunas and Alexander M. Rush and Thomas Wolf},
      year={2021},
      eprint={2109.02846},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
@software{quentin_lhoest_2021_5570305,
  author       = {Quentin Lhoest and
                  Albert Villanova del Moral and
                  Patrick von Platen and
                  Thomas Wolf and
                  Yacine Jernite and
                  Abhishek Thakur and
                  Lewis Tunstall and
                  Suraj Patil and
                  Mariama Drame and
                  Julien Chaumond and
                  Julien Plu and
                  Joe Davison and
                  Simon Brandeis and
                  Victor Sanh and
                  Teven Le Scao and
                  Kevin Canwen Xu and
                  Nicolas Patry and
                  Steven Liu and
                  Angelina McMillan-Major and
                  Philipp Schmid and
                  Sylvain Gugger and
                  Nathan Raw and
                  Sylvain Lesage and
                  Anton Lozhkov and
                  Matthew Carrigan and
                  Théo Matussière and
                  Leandro von Werra and
                  Lysandre Debut and
                  Stas Bekman and
                  Clément Delangue},
  title        = {huggingface/datasets: 1.13.2},
  month        = oct,
  year         = 2021,
  publisher    = {Zenodo},
  version      = {1.13.2},
  doi          = {10.5281/zenodo.5570305},
  url          = {https://doi.org/10.5281/zenodo.5570305}
}


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

datasets-1.14.0.tar.gz (256.0 kB)

Uploaded Source

Built Distribution

datasets-1.14.0-py3-none-any.whl (290.4 kB)

Uploaded Python 3

File details

Details for the file datasets-1.14.0.tar.gz.

File metadata

  • Download URL: datasets-1.14.0.tar.gz
  • Upload date:
  • Size: 256.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.6.1 pkginfo/1.7.1 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.8.8

File hashes

Hashes for datasets-1.14.0.tar.gz

  • SHA256: 102bffbccb84b647e373bc27661720f87e05ba69b1ba526f3b42b0106eda8341
  • MD5: aeace384a056aff05e879e5d722df6a3
  • BLAKE2b-256: 6668a1c466df21ca8b674a0f24bea80f30a3c6b2d6b0b6c6c2278fbc2766c6ca

See more details on using hashes here.

File details

Details for the file datasets-1.14.0-py3-none-any.whl.

File metadata

  • Download URL: datasets-1.14.0-py3-none-any.whl
  • Upload date:
  • Size: 290.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.4.2 importlib_metadata/4.6.1 pkginfo/1.7.1 requests/2.25.1 requests-toolbelt/0.9.1 tqdm/4.62.2 CPython/3.8.8

File hashes

Hashes for datasets-1.14.0-py3-none-any.whl

  • SHA256: c16f2c164486c4b33545840a002f00b63238921b961b9aec04961b02de216564
  • MD5: 7b60a1c3bffc2025639ea17374cbeff4
  • BLAKE2b-256: 5c7a7981150365e835e8326ef915a48edac1a57ca43033f5ee4e6ea247dfac5d

See more details on using hashes here.
