transformer-heads

Toolkit for attaching, training, saving and loading of new heads for transformer models

https://github.com/center-for-humans-and-machines/transformer-heads

Science Score: 62.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
    Organization center-for-humans-and-machines has institutional domain (www.mpib-berlin.mpg.de)
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.6%) to scientific vocabulary
Last synced: 6 months ago

Repository

Toolkit for attaching, training, saving and loading of new heads for transformer models

Basic Info
Statistics
  • Stars: 285
  • Watchers: 6
  • Forks: 25
  • Open Issues: 1
  • Releases: 0
Created almost 2 years ago · Last pushed about 1 year ago
Metadata Files
Readme License Citation

README.md

Documentation | Getting Started | Reddit Post with more info

Transformer Heads

This library aims to be an all-round toolkit for attaching, training, saving and loading new heads for transformer models.

A new head could be:

  • A linear probe used to get an understanding of the information processing in a transformer architecture.
  • A head finetuned jointly with the weights of a pretrained transformer model to perform a completely different kind of task.
    • E.g. a transformer pretrained on causal language modelling could get a sequence classification head attached and be finetuned to do sentiment classification.
    • Or one could attach a regression head to turn a large language model into a value function for a reinforcement learning problem.

On top of that, attaching multiple heads at once can make multi-task learning easy, making it possible to train very general models.

Installation

Install from PyPI: pip install transformer-heads.

Or, clone this repo and run, from the root of the repository: pip install -e .

Usage

Create head configurations:

```python
head_config = HeadConfig(
    name="imdb_head_3",
    layer_hook=-3,  # Attach at the output of the third-to-last transformer block
    in_size=hidden_size,
    output_activation="linear",
    pred_for_sequence=True,
    loss_fct="cross_entropy",
    num_outputs=2,
    target="label",  # The name of the ground-truth column in the dataset
)
```

Create a model with your head from a pretrained transformer model:

```python
model = load_headed(
    LlamaForCausalLM,
    "meta-llama/Llama-2-7b-hf",
    head_configs=[head_config],
)
```

Train your model using (for example) the simple-to-use Hugging Face Trainer interface:

```python
trainer = Trainer(
    model,
    args=args,
    train_dataset=imdb_dataset["train"],
    data_collator=collator,
)
```

For a more in-depth introduction and a fully working example, check the linear probe notebook.

Explanation of approach for training a transformer value function with QLoRA

  • The Base Model
    • The value model builds on a pre-trained base large language model.
    • That is, a transformer model trained with the causal language modelling objective on a large corpus of free-flowing text.
    • To solve that task, LLMs carry a linear causal language modelling head that projects each token's hidden state from the hidden dimension to the vocabulary size.
    • The base model is not instruction-tuned or trained with RLHF.
  • Adding a value head
    • The causal language modelling head is removed.
    • It is replaced by a value head that projects from the hidden dimension for each token to a one-dimensional value prediction.
    • The value head may be linear or a small multilayer perceptron.
    • The value head is solving a regression task and is trained via the mean-squared-error loss.
  • Preparing for QLoRA training
    • QLoRA is used to reduce memory overhead and to enable DDP training.
    • All weights of the model except the value head are quantized and frozen.
    • LoRA adapter weights are trained on top of these frozen weights.
    • The value-head is still fully trained.
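The steps above can be sketched in plain PyTorch. This is a minimal illustration of the idea, not the library's actual implementation: the class names and shapes are invented for the example, quantization and LoRA are omitted, and a plain linear layer stands in for the frozen base model.

```python
import torch
import torch.nn as nn

class ValueHead(nn.Module):
    """Projects per-token hidden states down to a scalar value prediction."""

    def __init__(self, hidden_size: int, mlp: bool = False):
        super().__init__()
        if mlp:
            # Optional small-MLP variant of the head, as mentioned above.
            self.proj = nn.Sequential(
                nn.Linear(hidden_size, hidden_size),
                nn.ReLU(),
                nn.Linear(hidden_size, 1),
            )
        else:
            self.proj = nn.Linear(hidden_size, 1)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size) -> (batch, seq_len, 1)
        return self.proj(hidden_states)

hidden_size = 16
base_model = nn.Linear(8, hidden_size)  # stand-in for the (quantized) base transformer

# Freeze everything except the value head; only the head is fully trained.
for p in base_model.parameters():
    p.requires_grad = False

head = ValueHead(hidden_size)
loss_fct = nn.MSELoss()  # the head solves a regression task

x = torch.randn(2, 5, 8)
values = head(base_model(x))  # (batch=2, seq_len=5, 1) value predictions
loss = loss_fct(values, torch.zeros_like(values))
```

In the real setup the frozen weights would additionally be 4-bit quantized and wrapped with LoRA adapters; only the adapter weights and the head receive gradient updates.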

Joint training of multiple linear probes

_images/multi_linear_probe.svg

Notebooks

This repository contains multiple Jupyter notebooks as tutorials/illustrations of how to do certain things with this library. Here is an overview of which notebook to check out depending on your use case:

  • Linear probes (understanding the inner workings of transformers)
    • Basic example with one probe for causal LM: notebooks/gpt2/linear_probe.ipynb
    • Train many probes for causal LM at once: notebooks/gpt2/multi_linear_probe.ipynb
    • Train many probes for text classification at once: notebooks/gpt2/text_classification_linear_probe.ipynb
  • Finetuning on a new type of task (with a new head)
    • QLoRA: notebooks/gpt2/text_classification_qlora.ipynb
    • Full finetuning: notebooks/gpt2/text_classification_full_finetune.ipynb
  • Joint multi-task learning
    • Many heads doing completely different tasks + QLoRA, all trained at the same time: notebooks/gpt2/joint_multitask_learning.ipynb
  • Regression with pretrained transformers
    • Check the regression heads of this notebook: notebooks/gpt2/joint_multitask_learning.ipynb
  • Saving and loading
    • Notebook: notebooks/gpt2/saving_and_loading.ipynb
    • Tests: transformer_heads/tests/test_load_model.py

Joint multi-task training with different types of heads and QLoRA.

_images/example_architecture.svg

More custom loss functions and models

At the time of writing, only a subset of loss functions is supported out of the box. Check transformer_heads/constants.py for up-to-date info.

However, it is not hard to add or use different loss functions and models. You just need to add their respective information to loss_fct_map and model_type_map, both importable from transformer_heads.constants. To add a loss function, add a mapping from a string name to a torch loss instance. To add a model, add a mapping from the model type to a 2-tuple of (name of the attribute on the model class that holds the base model, base model class). That may sound confusing, but it means just the following:

```python
from transformer_heads.constants import model_type_map, loss_fct_map
import torch.nn as nn
from transformers import MistralModel

loss_fct_map["bce"] = nn.BCELoss()
model_type_map["mistral"] = ("model", MistralModel)
```

Can my transformer architecture be supported?

One of the basic assumptions of my library is that there is a transformer class, such as Hugging Face's LlamaForCausalLM, that has an attribute pointing to a base model which outputs raw hidden states. If your transformer model is built in a similar way, adding support may be as easy as adding an entry to model_type_map with the name of that attribute and the class of the base model. You can either do that by importing from constants.py or by adding it directly and creating a pull request.
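To make that assumption concrete, here is a minimal, self-contained sketch. The toy classes and the "toy" map entry are invented for illustration and are not part of the library; the point is only how a (attribute name, base class) tuple lets code locate the base model inside a wrapper class:

```python
class ToyBaseModel:
    """Stand-in for a base transformer that outputs raw hidden states."""

    def forward_hidden(self, x):
        return x  # pretend these are hidden states

class ToyForCausalLM:
    """Stand-in for a wrapper class like LlamaForCausalLM."""

    def __init__(self):
        self.model = ToyBaseModel()  # attribute pointing at the base model

# An entry of the form {model_type: (attribute_name, base_class)} is all the
# library needs to reach the base model generically:
model_type_map = {"toy": ("model", ToyBaseModel)}

wrapper = ToyForCausalLM()
attr_name, base_cls = model_type_map["toy"]
base = getattr(wrapper, attr_name)  # finds the base model by attribute name
```

If your architecture exposes its base model through a named attribute in the same way, a one-line map entry should be enough to support it.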

Q&A

  • Is Llama-3 supported? YES! Check here
  • How do I use my model for inference? Check the notebooks or this issue to get started.

Owner

  • Name: Center for Humans & Machines
  • Login: center-for-humans-and-machines
  • Kind: organization
  • Location: Germany

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Keller
    given-names: Yannik
    orcid: https://orcid.org/0000-0002-2821-4313
    affiliation: >-
      Max Planck Institute for Human Development, Center for
      Humans and Machines
title: "Transformer Heads"
version: 0.1.3
date-released: 2024-03-11

GitHub Events

Total
  • Issues event: 27
  • Watch event: 44
  • Delete event: 1
  • Issue comment event: 36
  • Push event: 11
  • Pull request event: 4
  • Fork event: 4
  • Create event: 1
Last Year
  • Issues event: 27
  • Watch event: 44
  • Delete event: 1
  • Issue comment event: 36
  • Push event: 11
  • Pull request event: 4
  • Fork event: 4
  • Create event: 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 8
  • Total pull requests: 2
  • Average time to close issues: 12 days
  • Average time to close pull requests: 9 days
  • Total issue authors: 5
  • Total pull request authors: 2
  • Average comments per issue: 1.0
  • Average comments per pull request: 0.5
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 8
  • Pull requests: 2
  • Average time to close issues: 12 days
  • Average time to close pull requests: 9 days
  • Issue authors: 5
  • Pull request authors: 2
  • Average comments per issue: 1.0
  • Average comments per pull request: 0.5
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • ArchchanaKugathasan (6)
  • smirkun (3)
  • ArchSid (2)
  • mightymai (1)
  • gururise (1)
  • pySilver (1)
  • cccsurrey (1)
  • martinzlocha (1)
  • bingwork (1)
  • rushabh31 (1)
  • zthsk (1)
  • kristosh (1)
  • t-shoemaker (1)
Pull Request Authors
  • yannikkellerde (1)
  • blackplane (1)
  • cccsurrey (1)
  • mandeep511 (1)
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 517 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 22
  • Total maintainers: 1
pypi.org: transformer-heads

Attach custom heads to transformer models.

  • Versions: 22
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 517 Last month
Rankings
Dependent packages count: 9.7%
Average: 36.7%
Dependent repos count: 63.7%
Maintainers (1)
Last synced: 7 months ago

Dependencies

.github/workflows/publish_to_pypi.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v3 composite
  • pypa/gh-action-pypi-publish release/v1 composite
pyproject.toml pypi
  • bitsandbytes *
  • pandas *
  • peft *
  • torch *
  • tqdm *
  • transformers *