The Alignment Handbook

Project description

Robust recipes to align language models with human and AI preferences.

What is this?

Just one year ago, chatbots were out of fashion and most people hadn't heard of techniques like Reinforcement Learning from Human Feedback (RLHF) to align language models with human preferences. Then, OpenAI broke the internet with ChatGPT and Meta followed suit by releasing the Llama series of language models, which enabled the ML community to build their very own capable chatbots. This has led to a rich ecosystem of datasets and models that have mostly focused on teaching language models to follow instructions through supervised fine-tuning (SFT).

However, we know from the InstructGPT and Llama2 papers that significant gains in helpfulness and safety can be had by augmenting SFT with human (or AI) preferences. At the same time, aligning language models to a set of preferences is a fairly novel idea and there are few public resources available on how to train these models, what data to collect, and what metrics to measure for best downstream performance.

The Alignment Handbook aims to fill that gap by providing the community with a series of robust training recipes that span the whole pipeline.

Contents

The initial release of the handbook will focus on the following techniques:

  • Supervised fine-tuning: teach language models to follow instructions, with tips on how to collect and curate your own training dataset.
  • Reward modeling: teach language models to distinguish model responses according to human or AI preferences.
  • Rejection sampling: a simple but powerful technique to boost the performance of your SFT model.
  • Direct preference optimisation (DPO): a powerful and promising alternative to PPO; see the loss sketch after this list.
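For intuition, here is a minimal sketch of the DPO loss in PyTorch. Everything in it is illustrative rather than the handbook's implementation: the inputs are assumed to be per-sequence log-probabilities of the chosen and rejected responses under the policy and a frozen reference model, and beta is the usual temperature hyperparameter.

import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             reference_chosen_logps, reference_rejected_logps, beta=0.1):
    # DPO rewards the policy for widening its chosen-vs-rejected log-ratio
    # relative to the reference model, without an explicit reward model.
    policy_logratios = policy_chosen_logps - policy_rejected_logps
    reference_logratios = reference_chosen_logps - reference_rejected_logps
    return -F.logsigmoid(beta * (policy_logratios - reference_logratios)).mean()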

Getting started

To run the code in this project, first create a Python virtual environment using e.g. Conda:

conda create -n handbook python=3.10 && conda activate handbook

Next, install PyTorch v2.1.0. Since this is hardware-dependent, we direct you to the PyTorch Installation Page.
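A quick sanity check from a Python shell confirms the version and whether a GPU is visible (the output naturally depends on your hardware):

import torch

print(torch.__version__)          # expect 2.1.0
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is detected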

Once PyTorch is installed, you can install the remaining package dependencies from the root of the repository as follows:

pip install .

Next, log into your Hugging Face account as follows:

huggingface-cli login
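If you prefer to stay inside Python, the huggingface_hub library provides an equivalent login helper that prompts for an access token:

from huggingface_hub import login

login()  # prompts for a token created at https://huggingface.co/settings/tokens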

Finally, install Git LFS so that you can push models to the Hugging Face Hub:

sudo apt-get install git-lfs
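With Git LFS installed and your account logged in, pushing a model from Python is then straightforward. The sketch below uses gpt2 as a stand-in model and a placeholder repository id that you should replace with your own namespace:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in model; in practice this would be your fine-tuned checkpoint.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Creates (or updates) the repository under your account on the Hub.
model.push_to_hub("your-username/your-model")      # placeholder repo id
tokenizer.push_to_hub("your-username/your-model")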

Citation

If you find the content of this repo useful in your work, please cite it as follows:

@misc{alignment_handbook2023,
  author = {Lewis Tunstall and Edward Beeching and Nathan Lambert and Nazneen Rajani and Sasha Rush and Thomas Wolf},
  title = {The Alignment Handbook},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/alignment-handbook}}
}

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

alignment-handbook-0.1.0.tar.gz (8.8 kB)

Built Distribution

alignment_handbook-0.1.0-py3-none-any.whl (7.5 kB)

File details

Details for the file alignment-handbook-0.1.0.tar.gz.

File metadata

  • Download URL: alignment-handbook-0.1.0.tar.gz
  • Upload date:
  • Size: 8.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.10.13

File hashes

Hashes for alignment-handbook-0.1.0.tar.gz:

  • SHA256: 58d10785bdd0adc5928a1f309137017c0a00253714e68e98f2affc460c1d6ce1
  • MD5: 147af783814990e4867c1d38bdf1918a
  • BLAKE2b-256: e265ebaeed528c15dbebf55ba693d742567cdb5d341c6761631de466b87010d6
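To check a downloaded file against these digests, a few lines of Python suffice; the SHA256 value below is copied from the list above, and the file is assumed to sit in the current directory:

import hashlib

# Expected SHA256 for the sdist, copied from the hash list above.
EXPECTED_SHA256 = "58d10785bdd0adc5928a1f309137017c0a00253714e68e98f2affc460c1d6ce1"

with open("alignment-handbook-0.1.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, f"hash mismatch: {digest}"
print("sdist hash verified")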


File details

Details for the file alignment_handbook-0.1.0-py3-none-any.whl.

File hashes

Hashes for alignment_handbook-0.1.0-py3-none-any.whl:

  • SHA256: 33de22f1ebf35d06fa5ddfb6f79b81adc85f1a5f48654ac1092166830201a34b
  • MD5: f4df8ba34d601072e7c1d25875e6f5ba
  • BLAKE2b-256: 6e725fed130b77349e2713356308ce4dd427d5c84c425b2a5d24a8ed9c37d5de

