
Facebook AI Research Sequence-to-Sequence Toolkit

Project description

Introduction

Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks. It provides reference implementations of various sequence-to-sequence models, including convolutional (ConvS2S), LSTM-based, and Transformer architectures.

Fairseq features:

  • multi-GPU (distributed) training on one machine or across multiple machines
  • fast generation on both CPU and GPU, with multiple search algorithms implemented, including beam search and sampling
  • large mini-batch training even on a single GPU via delayed updates (see the example after this list)
  • mixed precision training (trains faster with less GPU memory on NVIDIA tensor cores)
  • extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers
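For example, delayed updates and mixed precision are both exposed as flags to the fairseq-train command. A minimal sketch (the data directory and most hyperparameters below are placeholders, and other flags a real run would need are omitted):

# accumulate gradients over 16 mini-batches (delayed updates) and train in FP16
fairseq-train data-bin/example \
    --arch transformer --optimizer adam --lr 0.0005 \
    --max-tokens 4000 --update-freq 16 --fp16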

We also provide pre-trained models for several benchmark translation and language modeling datasets.


Requirements and Installation

  • PyTorch version >= 1.0.0
  • Python version >= 3.5
  • For training new models, you'll also need an NVIDIA GPU and NCCL
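You can quickly verify your environment with the following checks (assumes python and nvidia-smi are on your PATH):

# print the installed PyTorch version and whether CUDA is usable
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"

# confirm the NVIDIA driver sees your GPU(s)
nvidia-smi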

To install PyTorch, follow the instructions at https://github.com/pytorch/pytorch#installation.

If you use Docker, make sure to increase the shared memory size, either with --ipc=host or with --shm-size, passed as command-line options to nvidia-docker run.
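For example (a sketch; the image name is a placeholder for whatever PyTorch image you use):

nvidia-docker run --ipc=host -it my-pytorch-image bash

# or, equivalently, set an explicit shared-memory size:
nvidia-docker run --shm-size=8g -it my-pytorch-image bash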

After PyTorch is installed, you can install fairseq with pip:

pip install fairseq

Installing from source

To install fairseq from source and develop locally:

git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .

Improved training speed

Training speed can be further improved by installing NVIDIA's apex library with the --cuda_ext option. fairseq will automatically switch to the faster modules provided by apex.
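A sketch of the installation, following the --global-option mechanism apex documented at the time (check the apex README for current instructions):

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" .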

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.

Pre-trained models and examples

We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.
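As a rough sketch of what such commands look like (the directory names and checkpoint path below are placeholders, not shipped artifacts):

# binarize a tokenized parallel corpus for training
fairseq-preprocess --source-lang de --target-lang en \
    --trainpref train --validpref valid --testpref test \
    --destdir data-bin/example

# translate the binarized test set with a trained model using beam search
fairseq-generate data-bin/example --path checkpoints/model.pt --beam 5 --remove-bpe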

We also have more detailed READMEs to reproduce results from specific papers; see the examples/ directory in the fairseq repository.


License

fairseq(-py) is BSD-licensed. The license applies to the pre-trained models as well. We also provide an additional patent grant.

Citation

Please cite as:

@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

fairseq-0.7.1.tar.gz (186.3 kB)

File details

Details for the file fairseq-0.7.1.tar.gz.

File metadata

  • Download URL: fairseq-0.7.1.tar.gz
  • Upload date:
  • Size: 186.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/1.12.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.25.0 CPython/3.6.6

File hashes

Hashes for fairseq-0.7.1.tar.gz

  • SHA256: 559116342b3c11f948ea29eea0d35a82668351c83c058d77800bb88aa6151842
  • MD5: 84f1fb0a46258991bfd0cf2cbeac0bf1
  • BLAKE2b-256: 97b4b37d9ef01891ed1884f7d969d6b2ba4ac3e1adcada34dda3e39c4caac9b9

See more details on using hashes here.

