adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.4%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Basic Info
- Host: GitHub
- Owner: adapter-hub
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://docs.adapterhub.ml
- Size: 96.9 MB
Statistics
- Stars: 2,760
- Watchers: 28
- Forks: 369
- Open Issues: 50
- Releases: 25
Topics
Metadata Files
README.md
Adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
Website • Documentation • Paper
Adapters is an add-on library to HuggingFace's Transformers, integrating 10+ adapter methods into 20+ state-of-the-art Transformer models with minimal coding overhead for training and inference.
Adapters provides a unified interface for efficient fine-tuning and modular transfer learning. It supports a wide range of features, such as full-precision or quantized training (e.g. Q-LoRA, Q-Bottleneck Adapters, or Q-PrefixTuning), adapter merging via task arithmetics, and the composition of multiple adapters via composition blocks, enabling advanced research in parameter-efficient transfer learning for NLP tasks.
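As a rough illustration of the adapter merging mentioned above, the sketch below uses the library's `average_adapter` API; the second adapter name (`AdapterHub/roberta-base-pf-sst2`), the adapter name `merged_sentiment`, and the mixing weights are illustrative assumptions rather than values taken from this README.

```python
# Hedged sketch of adapter merging via task arithmetics.
# Assumes the average_adapter API; adapter names and weights are illustrative.
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")

# Load two task adapters from the Hub (the second name is a hypothetical example)
imdb = model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf")
sst2 = model.load_adapter("AdapterHub/roberta-base-pf-sst2", source="hf")

# Linearly combine their weights into a new, merged adapter and activate it
model.average_adapter("merged_sentiment", [imdb, sst2], weights=[0.5, 0.5])
model.set_active_adapters("merged_sentiment")
```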
Note: The Adapters library has replaced the adapter-transformers package. All previously trained adapters are compatible with the new library. For transitioning, please read: https://docs.adapterhub.ml/transitioning.html.
Installation
adapters currently supports Python 3.9+ and PyTorch 2.0+.
After installing PyTorch, you can install adapters from PyPI ...
```bash
pip install -U adapters
```
... or from source by cloning the repository:
```bash
git clone https://github.com/adapter-hub/adapters.git
cd adapters
pip install .
```
Quick Tour
Load pre-trained adapters:
```python
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Load and activate a pre-trained adapter from the Hugging Face Hub
model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)

print(model(**tokenizer("This works great!", return_tensors="pt")).logits)
```
Adapt existing model setups:
```python
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("t5-base")

# Enable adapter support in the existing Transformers model
adapters.init(model)

model.add_adapter("my_lora_adapter", config="lora")
model.train_adapter("my_lora_adapter")

# Your regular training loop...
```
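The training loop itself is left open in the snippet above. As one possible way to fill it in, here is a minimal sketch using the library's `AdapterTrainer`; the `train_dataset` variable is a hypothetical, already-tokenized dataset not defined in this README, and the hyperparameters are illustrative.

```python
# Minimal sketch of the "regular training loop" above, assuming AdapterTrainer.
# `train_dataset` is a hypothetical tokenized dataset prepared beforehand.
from adapters import AdapterTrainer
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./training_output",
    learning_rate=1e-4,
    num_train_epochs=3,
    per_device_train_batch_size=16,
)

trainer = AdapterTrainer(
    model=model,               # model with the active, trainable adapter from above
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()

# Optionally save only the trained adapter weights (not the full model)
model.save_adapter("./my_lora_adapter", "my_lora_adapter")
```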
Flexibly configure adapters:
```python
from adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel

model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-base")

# Combine prefix tuning with a parallel bottleneck adapter in one configuration
adapter_config = ConfigUnion(
    PrefixTuningConfig(prefix_length=20),
    ParBnConfig(reduction_factor=4),
)
model.add_adapter("my_adapter", config=adapter_config, set_active=True)
```
Easily compose adapters in a single model:
```python
from adapters import AdapterSetup, AutoAdapterModel
from transformers import AutoTokenizer
import adapters.composition as ac

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

qc = model.load_adapter("AdapterHub/roberta-base-pf-trec")
sent = model.load_adapter("AdapterHub/roberta-base-pf-imdb")

# Run both adapters in parallel on the same input
with AdapterSetup(ac.Parallel(qc, sent)):
    print(model(**tokenizer("What is AdapterHub?", return_tensors="pt")))
```
Useful Resources
HuggingFace provides great documentation on getting started with Transformers. adapters is fully compatible with Transformers.
To get started with adapters, refer to these locations:
- Colab notebook tutorials, a series of notebooks providing an introduction to all the main concepts of (adapter-)transformers and AdapterHub
- https://docs.adapterhub.ml, our documentation on training and using adapters with adapters
- https://adapterhub.ml to explore available pre-trained adapter modules and share your own adapters
- Examples folder of this repository containing HuggingFace's example training scripts, many adapted for training adapters
Implemented Methods
Currently, adapters integrates all architectures and methods listed below:
| Method | Paper(s) | Quick Links |
| --- | --- | --- |
| Bottleneck adapters | Houlsby et al. (2019), Bapna and Firat (2019), Steitz and Roth (2024) | Quickstart, Notebook |
| AdapterFusion | Pfeiffer et al. (2021) | Docs: Training, Notebook |
| MAD-X, Invertible adapters | Pfeiffer et al. (2020) | Notebook |
| AdapterDrop | Rücklé et al. (2021) | Notebook |
| MAD-X 2.0, Embedding training | Pfeiffer et al. (2021) | Docs: Embeddings, Notebook |
| Prefix Tuning | Li and Liang (2021) | Docs |
| Parallel adapters, Mix-and-Match adapters | He et al. (2021) | Docs |
| Compacter | Mahabadi et al. (2021) | Docs |
| LoRA | Hu et al. (2021) | Docs |
| MTL-LoRA | Yang et al. (2024) | Docs |
| (IA)^3 | Liu et al. (2022) | Docs |
| VeRA | Kopiczko et al. (2024) | Docs |
| DoRA | Liu et al. (2024) | Docs |
| UniPELT | Mao et al. (2022) | Docs |
| Prompt Tuning | Lester et al. (2021) | Docs |
| QLoRA | Dettmers et al. (2023) | Notebook |
| ReFT | Wu et al. (2024) | Docs |
| Adapter Task Arithmetics | Chronopoulou et al. (2023), Zhang et al. (2023) | Docs, Notebook |
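Most methods in the table above are selected simply by passing the corresponding config object to `add_adapter`. The following rough sketch assumes the config classes exported by the library (e.g. `LoRAConfig`, `IA3Config`, `CompacterConfig`); the hyperparameter values and adapter names are illustrative only.

```python
# Rough sketch: mapping methods from the table to adapter config classes.
# Hyperparameters and adapter names are illustrative, not recommendations.
from adapters import AutoAdapterModel, LoRAConfig, IA3Config, CompacterConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

model.add_adapter("lora_adapter", config=LoRAConfig(r=8, alpha=16))   # LoRA
model.add_adapter("ia3_adapter", config=IA3Config())                  # (IA)^3
model.add_adapter("compacter_adapter", config=CompacterConfig())      # Compacter

# Activate one adapter and freeze all other weights for training
model.train_adapter("lora_adapter")
```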
Supported Models
We currently support the PyTorch versions of all models listed on the Model Overview page in our documentation.
Developing & Contributing
To get started with developing on Adapters yourself and learn more about ways to contribute, please see https://docs.adapterhub.ml/contributing.html.
Citation
If you use Adapters in your work, please consider citing our library paper: Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
@inproceedings{poth-etal-2023-adapters,
title = "Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning",
author = {Poth, Clifton and
Sterz, Hannah and
Paul, Indraneil and
Purkayastha, Sukannya and
Engl{\"a}nder, Leon and
Imhof, Timo and
Vuli{\'c}, Ivan and
Ruder, Sebastian and
Gurevych, Iryna and
Pfeiffer, Jonas},
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.13",
pages = "149--160",
}
Alternatively, for the predecessor adapter-transformers, the Hub infrastructure and adapters uploaded by the AdapterHub team, please consider citing our initial paper: AdapterHub: A Framework for Adapting Transformers
@inproceedings{pfeiffer2020AdapterHub,
title={AdapterHub: A Framework for Adapting Transformers},
author={Pfeiffer, Jonas and
R{\"u}ckl{\'e}, Andreas and
Poth, Clifton and
Kamath, Aishwarya and
Vuli{\'c}, Ivan and
Ruder, Sebastian and
Cho, Kyunghyun and
Gurevych, Iryna},
booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
pages={46--54},
year={2020}
}
Owner
- Name: AdapterHub
- Login: adapter-hub
- Kind: organization
- Location: Germany
- Website: https://AdapterHub.ml
- Twitter: AdapterHub
- Repositories: 5
- Profile: https://github.com/adapter-hub
Citation (CITATION.cff)
cff-version: "1.2.0"
date-released: 2023-11
message: "If you use this software, please cite it as below."
title: "Adapters: A Unified Library for Parameter-Efficient and
Modular Transfer Learning"
url: "https://github.com/Adapter-Hub/adapters"
authors:
- family-names: Poth
given-names: Clifton
- family-names: Sterz
given-names: Hannah
- family-names: Paul
given-names: Indraneil
- family-names: Purkayastha
given-names: Sukannya
- family-names: Engländer
given-names: Leon
- family-names: Imhof
given-names: Timo
- family-names: Vulić
given-names: Ivan
- family-names: Ruder
given-names: Sebastian
- family-names: Gurevych
given-names: Iryna
- family-names: Pfeiffer
given-names: Jonas
preferred-citation:
type: conference-paper
authors:
- family-names: Poth
given-names: Clifton
- family-names: Sterz
given-names: Hannah
- family-names: Paul
given-names: Indraneil
- family-names: Purkayastha
given-names: Sukannya
- family-names: Engländer
given-names: Leon
- family-names: Imhof
given-names: Timo
- family-names: Vulić
given-names: Ivan
- family-names: Ruder
given-names: Sebastian
- family-names: Gurevych
given-names: Iryna
- family-names: Pfeiffer
given-names: Jonas
booktitle: "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations"
month: 12
start: 149
end: 160
title: "Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning"
year: 2023
publisher: "Association for Computational Linguistics"
url: "https://aclanthology.org/2023.emnlp-demo.13"
address: "Singapore"
GitHub Events
Total
- Create event: 6
- Commit comment event: 2
- Release event: 4
- Issues event: 49
- Watch event: 186
- Delete event: 2
- Issue comment event: 87
- Push event: 54
- Pull request review comment event: 89
- Pull request review event: 109
- Pull request event: 83
- Fork event: 32
Last Year
- Create event: 6
- Commit comment event: 2
- Release event: 4
- Issues event: 49
- Watch event: 186
- Delete event: 2
- Issue comment event: 87
- Push event: 54
- Pull request review comment event: 89
- Pull request review event: 109
- Pull request event: 83
- Fork event: 32
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| calpt | c****t@m****e | 111 |
| Leon Engländer | l****r@g****m | 21 |
| TimoImhof | 6****f | 15 |
| Hannah Sterz | h****6@g****m | 13 |
| Julian Fong | 4****g | 7 |
| francois_ledoyen | l****s@g****m | 2 |
| Aditya Ranjan | a****5@g****m | 1 |
| Alex Yun | 3****p | 1 |
| Boris | k****y@g****m | 1 |
| FahadEbrahim | 6****m | 1 |
| Ikko Eltociear Ashimine | e****r@g****m | 1 |
| KorventennFR | 1****R | 1 |
| Stefan Schweter | s****n@s****t | 1 |
| TheoWeih | t****y@w****e | 1 |
| William Soto | 9****i | 1 |
| divyanshuaggarwal | d****l@g****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 79
- Total pull requests: 139
- Average time to close issues: 4 months
- Average time to close pull requests: 3 months
- Total issue authors: 54
- Total pull request authors: 24
- Average comments per issue: 2.46
- Average comments per pull request: 0.94
- Merged pull requests: 100
- Bot issues: 0
- Bot pull requests: 1
Past Year
- Issues: 28
- Pull requests: 73
- Average time to close issues: about 1 month
- Average time to close pull requests: 23 days
- Issue authors: 23
- Pull request authors: 10
- Average comments per issue: 1.32
- Average comments per pull request: 0.77
- Merged pull requests: 55
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- arueckle (7)
- rabeehkarimimahabadi (7)
- km5ar (5)
- calpt (5)
- FrLdy (4)
- FahadEbrahim (3)
- JoPfeiff (3)
- julian-fong (3)
- sbassam (2)
- prajjwal1 (2)
- swoldemichael (2)
- SSamDav (2)
- z-lai (2)
- mkgs210 (2)
- xplip (2)
Pull Request Authors
- calpt (104)
- TimoImhof (21)
- julian-fong (14)
- lenglaender (11)
- JoPfeiff (6)
- hSterz (6)
- FrLdy (4)
- devin-astrumu (2)
- Soham2000 (2)
- amitkumarj441 (2)
- divyanshuaggarwal (2)
- killershrimp (2)
- arueckle (2)
- xmarva (2)
- joao-alves97 (2)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 3
- Total downloads: pypi 1,414,365 last-month
- Total docker downloads: 1,612
- Total dependent packages: 2 (may contain duplicates)
- Total dependent repositories: 51 (may contain duplicates)
- Total versions: 47
- Total maintainers: 3
pypi.org: adapter-transformers
Deprecated adapter-transformers package. Use adapters package instead.
- Homepage: https://github.com/adapter-hub/adapters
- Documentation: https://adapter-transformers.readthedocs.io/
- License: Apache
- Latest release: 4.0.0 (published over 1 year ago)
Rankings
Maintainers (3)
proxy.golang.org: github.com/adapter-hub/adapters
- Documentation: https://pkg.go.dev/github.com/adapter-hub/adapters#section-documentation
- License: apache-2.0
- Latest release: v1.2.0 (published 9 months ago)
Rankings
pypi.org: adapters
A Unified Library for Parameter-Efficient and Modular Transfer Learning
- Homepage: https://github.com/adapter-hub/adapters
- Documentation: https://adapters.readthedocs.io/
- License: Apache
- Latest release: 1.2.0 (published 9 months ago)
Rankings
Maintainers (3)
Dependencies
- accelerate main test
- conllu * test
- datasets >=1.13.3 test
- elasticsearch * test
- faiss-cpu * test
- fire * test
- git-python ==1.0.3 test
- jiwer * test
- librosa * test
- matplotlib * test
- nltk * test
- pandas * test
- protobuf * test
- psutil * test
- pytest * test
- rouge-score * test
- sacrebleu >=1.4.12 test
- scikit-learn * test
- sentencepiece * test
- seqeval * test
- streamlit * test
- tensorboard * test
- tensorflow_datasets * test
- torchvision * test
- datasets >=1.14.0
- librosa *
- torch >=1.6
- torchaudio *
- datasets >=1.8.0
- torch >=1.5.0
- torchvision >=0.6.0
- conllu *
- datasets >=1.8.0
- torch >=1.3
- datasets >=1.8.0
- torch >=1.5.0
- torchvision >=0.6.0
- datasets >=1.8.0
- torch >=1.5.0
- torchvision >=0.6.0
- accelerate *
- datasets >=1.8.0
- protobuf *
- sentencepiece *
- torch >=1.3
- accelerate *
- protobuf *
- sentencepiece *
- torch >=1.3
- accelerate *
- datasets >=1.8.0
- torch >=1.3.0
- datasets >=2.0.0
- torch >=1.3
- accelerate >=0.5.0
- datasets >=1.12.0
- librosa *
- torch >=1.5
- torchaudio *
- datasets >=1.18.0
- jiwer *
- librosa *
- torch >=1.5
- torchaudio *
- accelerate *
- datasets >=1.8.0
- nltk *
- protobuf *
- py7zr *
- rouge-score *
- sentencepiece *
- torch >=1.3
- accelerate *
- datasets >=1.8.0
- protobuf *
- scikit-learn *
- scipy *
- sentencepiece *
- torch >=1.3
- protobuf *
- sentencepiece *
- torch >=1.3
- accelerate *
- datasets >=1.8.0
- seqeval *
- torch >=1.3
- accelerate *
- datasets >=1.8.0
- protobuf *
- py7zr *
- sacrebleu >=1.4.12
- sentencepiece *
- torch >=1.3
- deps *
- datasets ==1.8.0 test
- actions/checkout v3 composite
- actions/setup-python v2 composite
- peaceiris/actions-gh-pages v3 composite
- z0al/dependent-issues v1 composite
- actions/stale v6 composite
- actions/cache v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- nvidia/cuda 11.2.2-cudnn8-devel-ubuntu20.04 build
- ubuntu 18.04 build
- python 3.8 build
- nvidia/cuda 10.2-cudnn7-devel-ubuntu18.04 build
- $BASE_DOCKER_IMAGE latest build
- ubuntu 18.04 build
- nvcr.io/nvidia/pytorch 21.03-py3 build
- nvcr.io/nvidia/pytorch 21.03-py3 build
- nvidia/cuda 11.2.2-cudnn8-devel-ubuntu20.04 build
- google/cloud-sdk slim build
- ubuntu 18.04 build
- nvidia/cuda 11.2.2-cudnn8-devel-ubuntu20.04 build