alignment-handbook

Robust recipes to align language models with human and AI preferences

https://github.com/huggingface/alignment-handbook

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.8%) to scientific vocabulary

Keywords

llm rlhf transformers

Keywords from Contributors

cryptocurrency cryptography jax transformer agents application fine-tuning llamaindex multi-agents rag
Last synced: 6 months ago

Repository

Robust recipes to align language models with human and AI preferences

Basic Info
Statistics
  • Stars: 5,336
  • Watchers: 108
  • Forks: 454
  • Open Issues: 91
  • Releases: 0
Topics
llm rlhf transformers
Created over 2 years ago · Last pushed 7 months ago
Metadata Files
Readme License Citation

README.md

🤗 Models & Datasets | 📃 Technical Report

The Alignment Handbook

Robust recipes to continue pretraining and to align language models with human and AI preferences.

What is this?

Just one year ago, chatbots were out of fashion and most people hadn't heard of techniques like Reinforcement Learning from Human Feedback (RLHF) for aligning language models with human preferences. Then OpenAI broke the internet with ChatGPT, and Meta followed suit by releasing the Llama series of language models, which enabled the ML community to build their very own capable chatbots. This has led to a rich ecosystem of datasets and models that have mostly focused on teaching language models to follow instructions through supervised fine-tuning (SFT).

However, we know from the InstructGPT and Llama 2 papers that significant gains in helpfulness and safety can be had by augmenting SFT with human (or AI) preferences. At the same time, aligning language models to a set of preferences is a fairly novel idea, and there are few public resources on how to train these models, what data to collect, and what metrics to measure for the best downstream performance.

The Alignment Handbook aims to fill that gap by providing the community with a series of robust training recipes that span the whole pipeline.

News 🗞️

  • July 24, 2025: We release the full post-training recipe behind SmolLM3-3B: a state-of-the-art hybrid reasoning model 💭
  • November 21, 2024: We release the recipe for fine-tuning SmolLM2-Instruct.
  • August 18, 2024: We release SmolLM-Instruct v0.2, along with the recipe for fine-tuning small LLMs 💻
  • April 12, 2024: We release Zephyr 141B (A35B), in collaboration with Argilla and Kaist AI, along with the recipe to fine-tune Mixtral 8x22B with ORPO 🪁
  • March 12, 2024: We release StarChat2 15B, along with the recipe to train capable coding assistants 🌟
  • March 1, 2024: We release Zephyr 7B Gemma, which is a new recipe to align Gemma 7B with RLAIF 🔥
  • February 1, 2024: We release a recipe to align open LLMs with Constitutional AI 📜! See the recipe and the blog post for details.
  • January 18, 2024: We release a suite of evaluations of DPO vs KTO vs IPO, see the recipe and the blog post for details.
  • November 10, 2023: We release all the training code to replicate Zephyr-7b-β 🪁! We also release No Robots, a brand new dataset of 10,000 instructions and demonstrations written entirely by skilled human annotators.

Links 🔗

How to navigate this project 🧭

This project is simple by design and mostly consists of:

  • scripts to train and evaluate models. Four steps are included: continued pretraining, supervised fine-tuning (SFT) for chat, preference alignment with DPO, and combined supervised fine-tuning and preference alignment with ORPO. Each script supports distributed training of the full model weights with DeepSpeed ZeRO-3, or parameter-efficient fine-tuning with LoRA/QLoRA (a generic launch sketch follows this list).
  • recipes to reproduce models like Zephyr 7B. Each recipe takes the form of a YAML file containing all the parameters associated with a single training run. A gpt2-nl recipe is also provided to illustrate how this handbook can be used for language or domain adaptation, e.g. by continuing to pretrain on a different language and then applying SFT and DPO to the result.
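
Every step follows the same launch pattern: an Accelerate config selects the distributed setup (e.g. DeepSpeed ZeRO-3), a training script selects the step (SFT, DPO, and so on), and a recipe YAML supplies the hyperparameters. A minimal sketch of that pattern, assuming the illustrative file names below (check the scripts/ and recipes/ directories in your checkout for the real ones):

```shell
# General pattern: one Accelerate config + one training script + one recipe YAML.
# The file names below are illustrative placeholders, not guaranteed to match
# the configs shipped with your version of the handbook.
ACCELERATE_LOG_LEVEL=info accelerate launch \
    --config_file recipes/accelerate_configs/deepspeed_zero3.yaml \
    scripts/run_sft.py path/to/your_recipe.yaml
```

Swapping the script and the recipe YAML (e.g. a DPO script with a DPO recipe) changes the training step without changing the launch pattern.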

We are also working on a series of guides to explain how methods like direct preference optimization (DPO) work, along with lessons learned from gathering human preferences in practice. To get started, we recommend the following:

  1. Follow the installation instructions to set up your environment etc.
  2. Replicate Zephyr-7b-β by following the recipe instructions.

If you would like to train chat models on your own datasets, we recommend following the dataset formatting instructions here.
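
For orientation, chat datasets in the Hugging Face ecosystem are commonly stored as one JSON object per line, each with a messages list of role/content turns. The snippet below writes a single illustrative record; the file name, column name, and contents are assumptions made for this sketch, so follow the dataset formatting instructions for the exact schema the training scripts expect:

```shell
# Write one example record in the common "messages" chat format.
# Everything here is illustrative; the dataset formatting docs define the
# exact columns and splits the training scripts actually expect.
cat > my_chat_dataset.jsonl <<'EOF'
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is RLHF?"}, {"role": "assistant", "content": "Reinforcement learning from human feedback: a method for aligning a language model with human preferences."}]}
EOF
```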

Contents

The initial release of the handbook will focus on the following techniques:

  • Continued pretraining: adapt language models to a new language or domain, or simply improve them by continued pretraining (causal language modeling) on a new dataset.
  • Supervised fine-tuning: teach language models to follow instructions, with tips on how to collect and curate your training dataset.
  • Reward modeling: teach language models to distinguish model responses according to human or AI preferences.
  • Rejection sampling: a simple, but powerful technique to boost the performance of your SFT model.
  • Direct preference optimisation (DPO): a powerful and promising alternative to PPO (the objective is sketched after this list).
  • Odds Ratio Preference Optimisation (ORPO): a technique to fine-tune language models with human preferences, combining SFT and DPO in a single stage.
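
For reference, the DPO step optimises the standard objective from the DPO paper; the notation below is generic and not taken from this repo's code:

$$
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta; \pi_{\mathrm{ref}}) = -\,\mathbb{E}_{(x,\, y_w,\, y_l) \sim \mathcal{D}}\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right]
$$

where $y_w$ and $y_l$ are the chosen and rejected responses for prompt $x$, $\pi_{\mathrm{ref}}$ is the frozen (typically SFT) reference model, and $\beta$ controls how far the policy may drift from the reference.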

Installation instructions

To run the code in this project, first create a Python virtual environment using e.g. uv:

```shell
uv venv handbook --python 3.11 && source handbook/bin/activate && uv pip install --upgrade pip
```

> [!TIP]
> To install uv, follow the UV Installation Guide.

Next, install PyTorch v2.6.0:

```shell
uv pip install torch==2.6.0 --index-url https://download.pytorch.org/whl/cu126
```

Note that the precise version is important for reproducibility! Since this is hardware-dependent, we also direct you to the PyTorch Installation Page.

You can then install the remaining package dependencies as follows:

```shell
uv pip install .
```

You will also need Flash Attention 2 installed, which can be done by running:

shell uv pip install "flash-attn==2.7.4.post1" --no-build-isolation

Next, log into your Hugging Face account as follows:

```shell
huggingface-cli login
```

Finally, install Git LFS so that you can push models to the Hugging Face Hub:

```shell
sudo apt-get install git-lfs
```

You can now check out the scripts and recipes directories for instructions on how to train some models 🪁!
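
As a concrete example of what such a training run looks like end to end, the Zephyr-7B-β recipe chains an SFT step and a DPO step. The commands below are a sketch based on the repository's documented layout; the exact script and config paths may differ between versions, so check scripts/ and recipes/zephyr-7b-beta/ before running them:

```shell
# Step 1: supervised fine-tuning of the base model (full weights, DeepSpeed ZeRO-3).
ACCELERATE_LOG_LEVEL=info accelerate launch \
    --config_file recipes/accelerate_configs/deepspeed_zero3.yaml \
    scripts/run_sft.py recipes/zephyr-7b-beta/sft/config_full.yaml

# Step 2: align the SFT checkpoint with DPO on preference data.
ACCELERATE_LOG_LEVEL=info accelerate launch \
    --config_file recipes/accelerate_configs/deepspeed_zero3.yaml \
    scripts/run_dpo.py recipes/zephyr-7b-beta/dpo/config_full.yaml
```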

Project structure

```
├── LICENSE
├── Makefile       <- Makefile with commands like `make style`
├── README.md      <- The top-level README for developers using this project
├── recipes        <- Recipe configs, accelerate configs, slurm scripts
├── scripts        <- Scripts to train and evaluate chat models
├── setup.cfg      <- Installation config (mostly used for configuring code quality & tests)
├── setup.py       <- Makes project pip installable (pip install -e .) so `alignment` can be imported
├── src            <- Source code for use in this project
└── tests          <- Unit tests
```

Citation

If you find the content of this repo useful in your work, please cite it as follows (the @software entry type requires biblatex, i.e. \usepackage{biblatex}):

```bibtex
@software{Tunstall_The_Alignment_Handbook,
  author  = {Tunstall, Lewis and Beeching, Edward and Lambert, Nathan and Rajani, Nazneen and Huang, Shengyi and Rasul, Kashif and Bartolome, Alvaro and M. Patiño, Carlos and M. Rush, Alexander and Wolf, Thomas},
  license = {Apache-2.0},
  title   = {{The Alignment Handbook}},
  url     = {https://github.com/huggingface/alignment-handbook},
  version = {0.4.0.dev0}
}
```

Owner

  • Name: Hugging Face
  • Login: huggingface
  • Kind: organization
  • Location: NYC + Paris

The AI community building the future.

Citation (CITATION.cff)

cff-version: 1.2.0
title: The Alignment Handbook
message: >-
  Robust recipes to align language models with human and AI
  preferences.
type: software
authors:
  - given-names: Lewis
    family-names: Tunstall
  - given-names: Edward
    family-names: Beeching
  - given-names: Nathan
    family-names: Lambert
  - given-names: Nazneen
    family-names: Rajani
  - given-names: Shengyi
    family-names: Huang
  - given-names: Kashif
    family-names: Rasul
  - given-names: Alvaro
    family-names: Bartolome
  - given-names: Alexander
    name-particle: M.
    family-names: Rush
  - given-names: Thomas
    family-names: Wolf
repository-code: 'https://github.com/huggingface/alignment-handbook'
license: Apache-2.0
version: 0.4.0.dev0

GitHub Events

Total
  • Issues event: 18
  • Watch event: 707
  • Delete event: 2
  • Issue comment event: 22
  • Push event: 21
  • Pull request review comment event: 2
  • Pull request review event: 7
  • Pull request event: 18
  • Fork event: 68
  • Create event: 2
Last Year
  • Issues event: 18
  • Watch event: 707
  • Delete event: 2
  • Issue comment event: 22
  • Push event: 21
  • Pull request review comment event: 2
  • Pull request review event: 7
  • Pull request event: 18
  • Fork event: 68
  • Create event: 2

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 103
  • Total Committers: 29
  • Avg Commits per committer: 3.552
  • Development Distribution Score (DDS): 0.515
Past Year
  • Commits: 13
  • Committers: 8
  • Avg Commits per committer: 1.625
  • Development Distribution Score (DDS): 0.615
Top Committers
Name Email Commits
Lewis Tunstall l****l@g****m 50
Kashif Rasul k****l@g****m 6
edbeeching e****g@g****m 6
Alvaro Bartolome a****o@a****o 5
Bram Vanroy 2****y 4
Nathan Azrak 4****z 4
Nathan Lambert n****n@h****o 3
Dragan Milchevski D****i@d****m 2
Chansung Park d****p@g****m 2
Loubna Ben Allal 4****l 2
Costa Huang c****g@o****m 1
Evgenii Zheltonozhskii z****y@g****m 1
Girraj Jangid 3****d 1
Ikko Eltociear Ashimine e****r@g****m 1
Kirill k****n@g****m 1
Kosti k****t@g****m 1
Mikhail Poludin p****k@f****z 1
NielsRogge 4****e 1
Qingqing Cao c****n 1
Remy r****r@g****m 1
Scott Fleming s****n@g****m 1
Sergio Paniego Blanco s****o@g****m 1
Stefano Fiorucci 4****7 1
Thomas Capelle t****e@p****e 1
Traun Leyden t****n@g****m 1
Zizheng Yang 3****g 1
kykim0 k****4@g****m 1
Sebastian Schramm s****m@c****m 1
Sergei Bogdanov i****y@g****m 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 138
  • Total pull requests: 79
  • Average time to close issues: 15 days
  • Average time to close pull requests: 28 days
  • Total issue authors: 101
  • Total pull request authors: 40
  • Average comments per issue: 2.02
  • Average comments per pull request: 1.08
  • Merged pull requests: 58
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 12
  • Pull requests: 15
  • Average time to close issues: 26 days
  • Average time to close pull requests: about 1 month
  • Issue authors: 11
  • Pull request authors: 10
  • Average comments per issue: 0.92
  • Average comments per pull request: 0.47
  • Merged pull requests: 9
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • ChenDRAG (8)
  • shamanez (4)
  • ohmeow (4)
  • alvarobartt (3)
  • liutianlin0121 (3)
  • patchie (3)
  • nathan-az (3)
  • tcapelle (2)
  • iseesaw (2)
  • Michelet-Gaetan (2)
  • AlexiaJM (2)
  • ratterdull78 (2)
  • Harry-mic (2)
  • sowmaster (2)
  • tanliboy (2)
Pull Request Authors
  • lewtun (18)
  • kashif (11)
  • BramVanroy (10)
  • alvarobartt (9)
  • nathan-az (6)
  • loubnabnl (4)
  • kirill-fedyanin (4)
  • deep-diver (4)
  • Ritvik19 (3)
  • snoels (2)
  • eltociear (2)
  • peterschmidt85 (2)
  • antonpolishko (2)
  • Savannah120 (2)
  • cmpatino (2)
Top Labels
Issue Labels
bug (1)
Pull Request Labels
bug (2)

Packages

  • Total packages: 2
  • Total downloads:
    • pypi: 137 last month
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 0
    (may contain duplicates)
  • Total versions: 6
  • Total maintainers: 1
proxy.golang.org: github.com/huggingface/alignment-handbook
  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 6 months ago
pypi.org: alignment-handbook

The Alignment Handbook

  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 137 last month
Rankings
Dependent packages count: 9.4%
Average: 38.7%
Dependent repos count: 68.1%
Maintainers (1)
Last synced: 6 months ago