https://github.com/carperai/trlx

A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)

Science Score: 46.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Committers with academic emails
    3 of 57 committers (5.3%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.2%) to scientific vocabulary

Keywords

machine-learning pytorch reinforcement-learning

Keywords from Contributors

transformer audio jax vlm qwen cryptocurrency deepseek gemma cryptography glm
Last synced: 6 months ago

Repository

A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF)

Basic Info
  • Host: GitHub
  • Owner: CarperAI
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 45.6 MB
Statistics
  • Stars: 4,647
  • Watchers: 50
  • Forks: 477
  • Open Issues: 99
  • Releases: 6
Topics
machine-learning pytorch reinforcement-learning
Created over 3 years ago · Last pushed about 2 years ago
Metadata Files
Readme Contributing License Code of conduct

README.md


Transformer Reinforcement Learning X

trlX is a distributed training framework designed from the ground up to focus on fine-tuning large language models with reinforcement learning using either a provided reward function or a reward-labeled dataset.

Training support for 🤗 Hugging Face models is provided by Accelerate-backed trainers, allowing users to fine-tune causal and T5-based language models of up to 20B parameters, such as facebook/opt-6.7b, EleutherAI/gpt-neox-20b, and google/flan-t5-xxl. For models beyond 20B parameters, trlX provides NVIDIA NeMo-backed trainers that leverage efficient parallelism techniques to scale effectively.

The following RL algorithms are currently implemented:

| Algorithm | Accelerate Trainer | NeMo Trainer |
|-----------|:------------------:|:------------:|
| Proximal Policy Optimization (PPO) | ✅ | ✅ |
| Implicit Language Q-Learning (ILQL) | ✅ | ✅ |

📖 Documentation

🧀 CHEESE: Collect human annotations for your RL application with our human-in-the-loop data collection library.

Installation

```bash
git clone https://github.com/CarperAI/trlx.git
cd trlx
pip install torch --extra-index-url https://download.pytorch.org/whl/cu118
pip install -e .
```

Examples

For more usage see examples. You can also try the colab notebooks below:

| Description | Link |
|-------------|------|
| Simulacra (GPT2, ILQL) | Open In Colab |
| Sentiment (GPT2, ILQL) | Open In Colab |

Latest runs of the examples are available on our Weights & Biases project.

How to Train

You can train a model using a reward function or a reward-labeled dataset.

Using a reward function

```python
trainer = trlx.train('gpt2', reward_fn=lambda samples, **kwargs: [sample.count('cats') for sample in samples])
```
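The reward function's contract can be illustrated in plain Python, independent of trlX: it receives a batch of sampled strings and must return one scalar reward per sample.

```python
# The same reward function as in the call above, evaluated on a toy batch.
# No trlX required to see the semantics: list of strings in, list of scores out.
reward_fn = lambda samples, **kwargs: [sample.count('cats') for sample in samples]

rewards = reward_fn(['cats and more cats', 'no pets here'])
assert rewards == [2, 0]  # one scalar reward per sample
```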

For reward model training, refer to our autocrit library.

Using a reward-labeled dataset

```python
trainer = trlx.train('EleutherAI/gpt-j-6B', samples=['dolphins', 'geese'], rewards=[1.0, 100.0])
```
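A reward-labeled dataset is simply a positional pairing: the i-th reward scores the i-th sample. A tiny sketch of that pairing (plain Python, not trlX internals):

```python
# Each sample string is paired with a scalar reward by position.
samples = ['dolphins', 'geese']
rewards = [1.0, 100.0]

labeled = list(zip(samples, rewards))
assert labeled == [('dolphins', 1.0), ('geese', 100.0)]
```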

Using a prompt-completion dataset

```python
trainer = trlx.train('gpt2', samples=[['Question: 1 + 2 Answer:', '3'], ['Question: Solve this equation: ∀n>0, s=2, sum(n ** -s). Answer:', '(pi ** 2)/ 6']])
```
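The second completion is the Basel problem: the sum of n⁻² over n > 0 converges to π²/6. A quick numerical check of that answer (plain Python, unrelated to trlX's API):

```python
import math

# Partial sum of n**-2 for n = 1..99999; the tail is O(1/N), so this is
# within 1e-4 of the limit pi**2 / 6 (~1.6449).
partial = sum(n ** -2 for n in range(1, 100000))
assert abs(partial - math.pi ** 2 / 6) < 1e-4
```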

Trainers provide a wrapper over their underlying model:

```python
trainer.generate(**tokenizer('Q: Who rules the world? A:', return_tensors='pt'), do_sample=True)
```

Configure Hyperparameters

```python
from trlx.data.default_configs import default_ppo_config

config = default_ppo_config()
config.model.model_path = 'EleutherAI/gpt-neox-20b'
config.tokenizer.tokenizer_path = 'EleutherAI/gpt-neox-20b'
config.train.seq_length = 2048

trainer = trlx.train(config=config, reward_fn=lambda samples, **kwargs: [len(sample) for sample in samples])
```

To reduce memory usage (if you're experiencing CUDA Out of Memory errors), first try the lowest setting for the following hyperparameters and gradually increase them:

```python
# micro batch size per gpu
config.train.batch_size = 1

# freeze all transformer layers
config.model.num_layers_unfrozen = 0

# maximum sample length; prompts or samples longer than this will be truncated
config.train.seq_length = 128

# micro batch size for sampling (specific to PPO)
config.method.chunk_size = 1

# use an additional Q-head (specific to ILQL)
config.method.two_qs = False
```

Save the resulting model as a Hugging Face pretrained language model (ready to upload to the Hub!):

```python
trainer.save_pretrained('/path/to/output/folder/')
```

Use 🤗 Accelerate to launch distributed training

```bash
accelerate config  # choose DeepSpeed option
accelerate launch examples/simulacra.py
```

Use NeMo-Megatron to launch distributed training

Follow the setup instructions in the NeMo README.

```bash
python examples/nemo_ilql_sentiments.py
```

For more usage see the NeMo README.

Use Ray Tune to launch hyperparameter sweep

```bash
ray start --head --port=6379
python -m trlx.sweep --config configs/sweeps/ppo_sweep.yml --accelerate_config configs/accelerate/ddp.yaml --num_gpus 4 examples/ppo_sentiments.py
```

Benchmark your trlX fork against trlX's main branch

```bash
python -m trlx.reference octocat/trlx-fork:fix-branch
```

Logging

trlX uses the standard Python logging library to log training information to the console. The default logger is set to the INFO level, which means that INFO, WARNING, ERROR, and CRITICAL level messages will be printed to standard output.
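The level-filtering behaviour described above is ordinary stdlib logging: a logger set to INFO passes INFO and above and drops DEBUG. A minimal sketch (the logger name below is hypothetical, not trlX's actual logger):

```python
import logging

logger = logging.getLogger("trlx_demo")  # hypothetical name for illustration
logger.setLevel(logging.INFO)

# INFO and everything above it pass; DEBUG is filtered out.
assert logger.isEnabledFor(logging.INFO)
assert logger.isEnabledFor(logging.CRITICAL)
assert not logger.isEnabledFor(logging.DEBUG)
```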

To change the log level directly, you can use the verbosity setter. For example, to set the log level to WARNING use:

```python
import trlx

trlx.logging.set_verbosity(trlx.logging.WARNING)
```

This will suppress INFO level messages, but still print WARNING, ERROR, and CRITICAL level messages.

You can also control logging verbosity by setting the TRLX_VERBOSITY environment variable to one of the standard logging level names:

  • CRITICAL (trlx.logging.CRITICAL)
  • ERROR (trlx.logging.ERROR)
  • WARNING (trlx.logging.WARNING)
  • INFO (trlx.logging.INFO)
  • DEBUG (trlx.logging.DEBUG)

```sh
export TRLX_VERBOSITY=WARNING
```
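One plausible way such a variable maps onto a stdlib logging level is a name lookup on the `logging` module; this is a sketch of the idea, and trlX's actual handling may differ:

```python
import logging
import os

# Resolve the level name from the environment to its logging constant.
os.environ["TRLX_VERBOSITY"] = "WARNING"
level = getattr(logging, os.environ["TRLX_VERBOSITY"].upper())
assert level == logging.WARNING
```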

By default, tqdm progress bars display training progress. You can disable them with trlx.logging.disable_progress_bar() and re-enable them with trlx.logging.enable_progress_bar().

Messages can be formatted with greater detail by setting trlx.logging.enable_explicit_format(). This will inject call-site information into each log which may be helpful for debugging.

```sh
[2023-01-01 05:00:00,000] [INFO] [ppo_orchestrator.py:63:make_experience] [RANK 0] Message...
```

💡 Tip: To reduce the amount of logging output, you might find it helpful to change log levels of third-party libraries used by trlX. For example, try adding transformers.logging.set_verbosity_error() to the top of your trlX scripts to silence verbose messages from the transformers library (see their logging docs for more details).
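For any third-party library that uses stdlib logging, the generic form of this tip is to raise that library's logger level directly (transformers additionally exposes the dedicated helper mentioned above):

```python
import logging

# Quiet a chatty third-party logger without touching your own loggers.
logging.getLogger("transformers").setLevel(logging.ERROR)

assert not logging.getLogger("transformers").isEnabledFor(logging.WARNING)
assert logging.getLogger("transformers").isEnabledFor(logging.ERROR)
```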

Contributing

For development, check out these guidelines and also read our docs.

Citing trlX

```bibtex
@inproceedings{havrilla-etal-2023-trlx,
    title = "trl{X}: A Framework for Large Scale Reinforcement Learning from Human Feedback",
    author = "Havrilla, Alexander and Zhuravinskyi, Maksym and Phung, Duy and Tiwari, Aman and Tow, Jonathan and Biderman, Stella and Anthony, Quentin and Castricato, Louis",
    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.emnlp-main.530",
    doi = "10.18653/v1/2023.emnlp-main.530",
    pages = "8578--8595",
}
```

Acknowledgements

Many thanks to Leandro von Werra for contributing trl, a library that initially inspired this repo.

Owner

  • Name: CarperAI
  • Login: CarperAI
  • Kind: organization

GitHub Events

Total
  • Issues event: 6
  • Watch event: 244
  • Issue comment event: 2
  • Pull request event: 2
  • Fork event: 26
Last Year
  • Issues event: 6
  • Watch event: 244
  • Issue comment event: 2
  • Pull request event: 2
  • Fork event: 26

Committers

Last synced: 11 months ago

All Time
  • Total Commits: 337
  • Total Committers: 57
  • Avg Commits per committer: 5.912
  • Development Distribution Score (DDS): 0.84
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
reciprocated 5****d 54
Jonathan Tow 4****w 51
leandro l****a@s****o 36
shahbuland s****1@g****m 34
Dahoas 3****s 34
cat-state 9****e 22
Max 5****e 14
Duy V. Phung p****3@g****m 7
Louis Castricato w****p@g****m 7
Alex a****x@g****r 5
Alan 4****y 5
Leandro von Werra l****a 4
dependabot[bot] 4****] 4
Ayush Thakur m****k@g****m 4
Alexey Bukhtiyarov a****v@y****u 3
Jingru n****u@h****m 3
aaronrmm a****m@g****m 3
alexandremuzio a****o@g****m 3
cOng e****g@q****m 3
Qing Wang k****8@g****m 2
Fabrizio Milo M****n 2
Mikael Johansson m****n@g****m 2
Alex a****x@g****r 1
Alex a****x@i****l 1
Louis l****s@i****l 1
Alain Le Noac'h 4****g 1
Chen9154 b****n@s****n 1
Chengxi Guo m****1@g****m 1
hzwer 5****6@1****m 1
crumb 5****b 1
and 27 more...
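The DDS reported above is consistent with one common definition, 1 minus the top committer's share of all commits. With 54 of 337 commits from the top committer (assuming that formula; the site's exact method is not documented here):

```python
# Hedged check: DDS = 1 - (top committer's commits / total commits).
top_commits, total_commits = 54, 337
dds = 1 - top_commits / total_commits
assert round(dds, 2) == 0.84
```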

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 112
  • Total pull requests: 94
  • Average time to close issues: 2 months
  • Average time to close pull requests: 15 days
  • Total issue authors: 75
  • Total pull request authors: 27
  • Average comments per issue: 3.04
  • Average comments per pull request: 1.91
  • Merged pull requests: 67
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 7
  • Pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 6
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • LouisCastricato (14)
  • cat-state (10)
  • Dahoas (5)
  • paulbricman (2)
  • jon-tow (2)
  • cvetanovskaa (2)
  • boblee22 (2)
  • mbalesni (2)
  • Adaickalavan (2)
  • sayan1101 (2)
  • heraldiclily (2)
  • AfraAmini (2)
  • amrzv (2)
  • akk-123 (1)
  • ethankim00 (1)
Pull Request Authors
  • maxreciprocate (21)
  • cat-state (12)
  • jon-tow (11)
  • Dahoas (9)
  • PhungVanDuy (5)
  • Jingru (5)
  • ayulockin (4)
  • LouisCastricato (4)
  • mrm8488 (2)
  • StellaAthena (2)
  • shahbuland (2)
  • mikljohansson (2)
  • MichaelEinhorn (1)
  • sandeepchittilla (1)
  • daia99 (1)
Top Labels
Issue Labels
bug (46) feature request (22) documentation (6)
Pull Request Labels

Packages

  • Total packages: 2
  • Total downloads: unknown
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 0
    (may contain duplicates)
  • Total versions: 6
proxy.golang.org: github.com/CarperAI/trlx
  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 6 months ago
proxy.golang.org: github.com/carperai/trlx
  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.5%
Average: 6.7%
Dependent repos count: 7.0%
Last synced: 6 months ago

Dependencies

requirements.txt pypi
  • accelerate ==0.12.0
  • datasets ==2.4.0
  • deepspeed ==0.7.3
  • einops ==0.4.1
  • numpy ==1.23.2
  • tqdm ==4.64.0
  • transformers ==4.21.2
  • wandb ==0.13.2
.github/workflows/build.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/code_quality.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • pre-commit/action v2.0.3 composite
.devcontainer/Dockerfile docker
  • nvidia/cuda 11.6.2-cudnn8-devel-ubuntu20.04@sha256 build
docs/requirements.txt pypi
  • accelerate ==0.12.0
  • datasets ==2.4.0
  • deepspeed ==0.7.3
  • einops ==0.4.1
  • numpy ==1.23.2
  • sphinx ==4.0.0
  • sphinx_rtd_theme *
  • torchtyping *
  • tqdm ==4.64.0
  • transformers ==4.21.2
  • wandb ==0.13.2
examples/summarize_rlhf/requirements.txt pypi
  • evaluate >=0.4.0
  • nltk >=3.8.1
  • rouge-score >=0.1.2
pyproject.toml pypi
setup.py pypi