Facebook AI Research Sequence-to-Sequence Toolkit
Project description
Introduction
Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks. It provides reference implementations of various sequence-to-sequence models, including:
- Convolutional Neural Networks (CNN)
  - Dauphin et al. (2017): Language Modeling with Gated Convolutional Networks
  - Gehring et al. (2017): Convolutional Sequence to Sequence Learning
  - Edunov et al. (2018): Classical Structured Prediction Losses for Sequence to Sequence Learning
  - Fan et al. (2018): Hierarchical Neural Story Generation
  - Schneider et al. (2019): wav2vec: Unsupervised Pre-training for Speech Recognition
- LightConv and DynamicConv models
  - Wu et al. (2019): Pay Less Attention with Lightweight and Dynamic Convolutions
- Long Short-Term Memory (LSTM) networks
- Transformer (self-attention) networks
  - Vaswani et al. (2017): Attention Is All You Need
  - Ott et al. (2018): Scaling Neural Machine Translation
  - Edunov et al. (2018): Understanding Back-Translation at Scale
  - Baevski and Auli (2018): Adaptive Input Representations for Neural Language Modeling
  - Shen et al. (2019): Mixture Models for Diverse Machine Translation: Tricks of the Trade
Fairseq features:
- multi-GPU (distributed) training on one machine or across multiple machines
- fast generation on both CPU and GPU with multiple search algorithms implemented:
  - beam search
  - Diverse Beam Search (Vijayakumar et al., 2016)
  - sampling (unconstrained and top-k)
- large mini-batch training even on a single GPU via delayed updates (see the training sketch after this list)
- mixed precision training (trains faster with less GPU memory on NVIDIA tensor cores)
- extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers
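As a minimal sketch of the delayed-update and mixed-precision features above: the command below assumes a preprocessed dataset at `data-bin/wmt16_en_de` and picks `transformer_wmt_en_de` as the architecture; both are placeholders, not something shipped with the package.

```bash
# Hypothetical example: --update-freq 16 delays optimizer updates, accumulating
# gradients over 16 mini-batches to emulate large-batch training on a single GPU;
# --fp16 enables mixed precision training on NVIDIA tensor cores.
fairseq-train data-bin/wmt16_en_de \
    --arch transformer_wmt_en_de \
    --optimizer adam --lr 0.0005 --max-tokens 3584 \
    --update-freq 16 --fp16
```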
We also provide pre-trained models for several benchmark translation and language modeling datasets.
Requirements and Installation
- PyTorch version >= 1.0.0
- Python version >= 3.5
- For training new models, you'll also need an NVIDIA GPU and NCCL
Please follow the instructions at https://github.com/pytorch/pytorch#installation to install PyTorch.
If you use Docker, make sure to increase the shared memory size, either with `--ipc=host` or `--shm-size`, as command line options to `nvidia-docker run`.
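An optional sanity check (not part of the official instructions) to confirm the PyTorch install and CUDA visibility before training:

```bash
# Prints the installed PyTorch version and whether a CUDA device is available.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```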
After PyTorch is installed, you can install fairseq with `pip`:

```
pip install fairseq
```
Installing from source
To install fairseq from source and develop locally:
```
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .
```
Improved training speed
Training speed can be further improved by installing NVIDIA's apex library with the `--cuda_ext` option. fairseq will automatically switch to the faster modules provided by apex.
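A sketch of the apex source install, assuming the `--cpp_ext`/`--cuda_ext` build options documented in the apex README at the time; check https://github.com/NVIDIA/apex for the current instructions.

```bash
git clone https://github.com/NVIDIA/apex
cd apex
# Build the C++ and fused CUDA extensions so fairseq can pick up the faster kernels.
pip install -v --no-cache-dir \
    --global-option="--cpp_ext" --global-option="--cuda_ext" .
```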
Getting Started
The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
Pre-trained models and examples
We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands (see the generation sketch after the list below).
- Translation: convolutional and transformer models are available
- Language Modeling: convolutional models are available
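As a hedged sketch of evaluating one of these pre-trained translation models with the `fairseq-interactive` entry point: `wmt14.en-fr.fconv-py` is a placeholder for a model directory downloaded and unpacked by hand; the actual download links live in the translation example README.

```bash
# Hypothetical paths: MODEL_DIR points at an unpacked pre-trained model
# containing model.pt and the source/target dictionaries.
MODEL_DIR=wmt14.en-fr.fconv-py
fairseq-interactive $MODEL_DIR \
    --path $MODEL_DIR/model.pt \
    --beam 5 --source-lang en --target-lang fr
```

Type a source sentence on stdin and the translated output is printed to stdout.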
We also have more detailed READMEs to reproduce results from specific papers:
- Schneider et al. (2019): wav2vec: Unsupervised Pre-training for Speech Recognition
- Shen et al. (2019): Mixture Models for Diverse Machine Translation: Tricks of the Trade
- Wu et al. (2019): Pay Less Attention with Lightweight and Dynamic Convolutions
- Edunov et al. (2018): Understanding Back-Translation at Scale
- Edunov et al. (2018): Classical Structured Prediction Losses for Sequence to Sequence Learning
- Fan et al. (2018): Hierarchical Neural Story Generation
- Ott et al. (2018): Scaling Neural Machine Translation
- Gehring et al. (2017): Convolutional Sequence to Sequence Learning
- Dauphin et al. (2017): Language Modeling with Gated Convolutional Networks
Join the fairseq community
- Facebook page: https://www.facebook.com/groups/fairseq.users
- Google group: https://groups.google.com/forum/#!forum/fairseq-users
License
fairseq(-py) is BSD-licensed. The license applies to the pre-trained models as well. We also provide an additional patent grant.
Citation
Please cite as:
```bibtex
@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
```
Download files
Source Distribution
File details
Details for the file fairseq-0.7.1.tar.gz.
File metadata
- Download URL: fairseq-0.7.1.tar.gz
- Upload date:
- Size: 186.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/1.12.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/40.8.0 requests-toolbelt/0.9.1 tqdm/4.25.0 CPython/3.6.6
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 559116342b3c11f948ea29eea0d35a82668351c83c058d77800bb88aa6151842 |
| MD5 | 84f1fb0a46258991bfd0cf2cbeac0bf1 |
| BLAKE2b-256 | 97b4b37d9ef01891ed1884f7d969d6b2ba4ac3e1adcada34dda3e39c4caac9b9 |