https://github.com/cyberagentailab/filtered-dpo
Introducing Filtered Direct Preference Optimization (fDPO), which enhances language model alignment with human preferences by discarding training samples whose quality is lower than that of responses generated by the learning model
Science Score: 23.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.4%) to scientific vocabulary
Keywords
Repository
Introducing Filtered Direct Preference Optimization (fDPO), which enhances language model alignment with human preferences by discarding training samples whose quality is lower than that of responses generated by the learning model
Basic Info
- Host: GitHub
- Owner: CyberAgentAILab
- License: mit
- Language: Jupyter Notebook
- Default Branch: main
- Homepage: https://arxiv.org/abs/2404.13846
- Size: 105 KB
Statistics
- Stars: 10
- Watchers: 0
- Forks: 1
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
Filtered Direct Preference Optimization
tl;dr
Introducing Filtered Direct Preference Optimization (fDPO), which enhances language model alignment with human preferences by discarding training samples whose quality is lower than that of responses generated by the learning model.
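As a rough, hedged illustration of this filtering idea (a sketch of the concept, not the repository's implementation), the snippet below keeps only preference pairs whose chosen response scores at least as well as a response sampled from the current model; `policy_generate` and `reward_fn` are hypothetical placeholders for the learning model's sampler and a reward model.
```python
# Conceptual sketch of fDPO-style filtering (not the repository's code).
# `policy_generate` and `reward_fn` are hypothetical placeholders for sampling
# from the learning model and scoring a (prompt, response) pair with a reward model.

def filter_preference_data(dataset, policy_generate, reward_fn):
    """Keep pairs whose chosen response is at least as good as the model's own sample."""
    kept = []
    for example in dataset:  # each example has "prompt", "chosen", "rejected"
        model_response = policy_generate(example["prompt"])
        chosen_score = reward_fn(example["prompt"], example["chosen"])
        model_score = reward_fn(example["prompt"], model_response)
        if chosen_score >= model_score:
            kept.append(example)
        # otherwise the pair is discarded: its chosen response is lower quality
        # than what the learning model already generates
    return kept
```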
Prerequisites
This project uses Python ^3.10 (per the dependency list), Poetry for dependency management, and direnv for loading environment variables; all three are used in the steps below.
Get Started
To set up your local environment, start by copying the example environment file:
```shell
cp .env.example .env
```
Next, you need to edit the .env file to include your Hugging Face API token. Replace the placeholder value with your actual token:
```shell
HF_HUB_TOKEN="your_hugging_face_token_here"
```
If you do not already have a Hugging Face account or API token, you will need to create an account on Hugging Face and then generate an API token from your account settings.
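For reference, once HF_HUB_TOKEN is exported (for example by direnv, as described below), it can be used to authenticate the Hugging Face Hub client. This is a minimal sketch assuming the token is read from that environment variable, not necessarily how the repository's scripts consume it:
```python
import os

from huggingface_hub import login

# Minimal sketch: authenticate to the Hugging Face Hub using the token from .env.
# Assumes HF_HUB_TOKEN has been exported into the environment (e.g. via direnv).
login(token=os.environ["HF_HUB_TOKEN"])
```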
Once your .env file is set up, apply the configuration to your environment using direnv:
```shell
direnv allow .
```
Installation
```shell
poetry install
```
Obtain Access to Datasets and Models
To use the datasets and models listed below, you must apply for access privileges on their respective Hugging Face repository pages. Please follow the links provided, and on each page, click the “Apply” button to submit your access request. This process is necessary to ensure compliance with the data usage policies and intellectual property rights associated with each resource.
- Dataset - Follow this link to apply for access to the dataset.
- Model - Follow this link to apply for access to the model.
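Once access has been granted, the gated resources can be loaded with an authenticated call. The snippet below is a hedged sketch: ORG/DATASET_NAME and ORG/MODEL_NAME are hypothetical placeholders for the gated dataset and model linked above, and the token is assumed to come from HF_HUB_TOKEN.
```python
import os

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

token = os.environ["HF_HUB_TOKEN"]

# Placeholder repository IDs: substitute the gated dataset and model linked above.
dataset = load_dataset("ORG/DATASET_NAME", token=token)
tokenizer = AutoTokenizer.from_pretrained("ORG/MODEL_NAME", token=token)
model = AutoModelForCausalLM.from_pretrained("ORG/MODEL_NAME", token=token)
```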
Usage
Test training
Execution time is about an hour in the notebook.
```shell
bash scripts/test.sh
```
Train 160m model
Execution time is several hours on an A100 80GB.
```shell
# $seed in {1, 2, 3}
seed=1
bash scripts/160m/fdpo_mix.sh ${seed}
```
Train 1.4b model
Execution time is about a day on an A100 80GB.
```shell
# $seed in {1, 2, 3}
seed=1
bash scripts/1.4b/fdpo_mix.sh ${seed}
```
Checking Experimental Results
Verification of experiment logs and creation of reports follow the Transformers standard.
A notebook for reproducing Figure 6 of our paper is also provided in the notebook directory.
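As one possible way to inspect the TensorBoard-style logs when building a report, the sketch below reads scalar summaries from an event directory; the logs/ path is an assumption, so point it at the output directory your run actually wrote to.
```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Assumed log location; adjust to the output directory of your run.
LOG_DIR = "logs/"

acc = EventAccumulator(LOG_DIR)
acc.Reload()  # parse the event files on disk

# List the available scalar tags and print one series (e.g. a reward or loss curve).
tags = acc.Tags()["scalars"]
print(tags)
if tags:
    for event in acc.Scalars(tags[0]):
        print(event.step, event.value)
```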
Reference
Bibtex:
```bibtex
@inproceedings{morimura-etal-2024-filtered,
    title = "Filtered Direct Preference Optimization",
    author = "Morimura, Tetsuro and
      Sakamoto, Mitsuki and
      Jinnai, Yuu and
      Abe, Kenshi and
      Ariu, Kaito",
    editor = "Al-Onaizan, Yaser and
      Bansal, Mohit and
      Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.emnlp-main.1266",
    pages = "22729--22770",
}
```
Owner
- Name: CyberAgent AI Lab
- Login: CyberAgentAILab
- Kind: organization
- Location: Japan
- Website: https://cyberagent.ai/ailab/
- Twitter: cyberagent_ai
- Repositories: 7
- Profile: https://github.com/CyberAgentAILab
GitHub Events
Total
- Watch event: 8
- Push event: 3
- Fork event: 1
Last Year
- Watch event: 8
- Push event: 3
- Fork event: 1
Dependencies
- absl-py 2.1.0
- accelerate 0.29.2
- aiohttp 3.9.4
- aiosignal 1.3.1
- async-timeout 4.0.3
- attrs 23.2.0
- black 24.3.0
- certifi 2024.2.2
- charset-normalizer 3.3.2
- click 8.1.7
- colorama 0.4.6
- datasets 2.18.0
- dill 0.3.8
- docstring-parser 0.16
- filelock 3.13.4
- frozenlist 1.4.1
- fsspec 2024.2.0
- grpcio 1.62.1
- huggingface-hub 0.22.2
- idna 3.7
- isort 5.13.2
- jinja2 3.1.3
- markdown 3.6
- markdown-it-py 3.0.0
- markupsafe 2.1.5
- mdurl 0.1.2
- mpmath 1.3.0
- multidict 6.0.5
- multiprocess 0.70.16
- mypy-extensions 1.0.0
- networkx 3.3
- numpy 1.26.4
- nvidia-cublas-cu12 12.1.3.1
- nvidia-cuda-cupti-cu12 12.1.105
- nvidia-cuda-nvrtc-cu12 12.1.105
- nvidia-cuda-runtime-cu12 12.1.105
- nvidia-cudnn-cu12 8.9.2.26
- nvidia-cufft-cu12 11.0.2.54
- nvidia-curand-cu12 10.3.2.106
- nvidia-cusolver-cu12 11.4.5.107
- nvidia-cusparse-cu12 12.1.0.106
- nvidia-nccl-cu12 2.19.3
- nvidia-nvjitlink-cu12 12.4.127
- nvidia-nvtx-cu12 12.1.105
- packaging 24.0
- pandas 2.2.2
- pathspec 0.12.1
- platformdirs 4.2.0
- protobuf 5.26.1
- psutil 5.9.8
- pyarrow 15.0.2
- pyarrow-hotfix 0.6
- pygments 2.17.2
- python-dateutil 2.9.0.post0
- pytz 2024.1
- pyyaml 6.0.1
- regex 2023.12.25
- requests 2.31.0
- rich 13.7.1
- safetensors 0.4.2
- setuptools 69.5.1
- shtab 1.7.1
- six 1.16.0
- sympy 1.12
- tensorboard 2.16.2
- tensorboard-data-server 0.7.2
- tensorboardx 2.6.2.2
- tokenizers 0.15.2
- tomli 2.0.1
- torch 2.2.2
- tqdm 4.66.2
- transformers 4.36.2
- triton 2.2.0
- trl 0.7.4
- typing-extensions 4.11.0
- tyro 0.8.3
- tzdata 2024.1
- urllib3 2.2.1
- werkzeug 3.0.2
- xxhash 3.4.1
- yarl 1.9.4
- black ^24.3.0 develop
- isort ^5.13.2 develop
- python ^3.10
- tensorboard ^2.16.2
- tensorboardx ^2.6.2.2
- transformers 4.36.2
- trl 0.7.4