linear-relational

Linear Relational Embeddings (LREs) and Linear Relational Concepts (LRCs) for LLMs in PyTorch

https://github.com/chanind/linear-relational

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.9%) to scientific vocabulary

Keywords

ai huggingface-transformers llms pytorch transformers
Last synced: 6 months ago

Repository

Linear Relational Embeddings (LREs) and Linear Relational Concepts (LRCs) for LLMs in PyTorch

Basic Info
Statistics
  • Stars: 8
  • Watchers: 2
  • Forks: 1
  • Open Issues: 0
  • Releases: 17
Topics
ai huggingface-transformers llms pytorch transformers
Created over 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme Changelog License Citation

README.md

Linear-Relational


Linear Relational Embeddings (LREs) and Linear Relational Concepts (LRCs) for LLMs using PyTorch and Huggingface Transformers.

Full docs: https://chanind.github.io/linear-relational

About

This library provides utilities and PyTorch modules for working with LREs and LRCs. LREs estimate the relation between a subject and object in a transformer language model (LM) as a linear map.

This library assumes you're working with sentences that have a subject, relation, and object. For instance, the sentence "Lyon is located in the country of France" has the subject "Lyon", the relation "located in country", and the object "France". An LRE models a relation like "located in country" as a linear map consisting of a weight matrix $W$ and a bias term $b$, so an LRE maps from the activations of the subject (Lyon) at layer $l_s$ to the activations of the object (France) at layer $l_o$. So:

$$ LRE(s) = W s + b $$

LREs can be inverted using a low-rank inverse, shown as $LRE^{\dagger}$, to estimate $s$ from $o$:

$$ LRE^{\dagger}(o) = W^{\dagger}(o - b) $$

Linear Relational Concepts (LRCs) represent a concept $(r, o)$ as a direction vector $v$ on subject tokens, and can act like a simple linear classifier. For instance, while an LRE can represent the relation "located in country", we could learn an LRC for "located in country: France", "located in country: Germany", "located in country: China", and so on. An LRC is just the result of passing an object activation into the inverse LRE equation above:

$$ v_{o} = W^{\dagger}(o - b) $$
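To make these equations concrete, here is a minimal sketch of the math above in plain PyTorch. This is illustrative only, with made-up dimensions and random tensors; it is not how this library implements LREs internally.

```python
import torch

# Hypothetical dimensions and tensors, for illustration only.
hidden_dim = 768
W = torch.randn(hidden_dim, hidden_dim)  # LRE weight matrix
b = torch.randn(hidden_dim)              # LRE bias term
s = torch.randn(hidden_dim)              # subject activation at layer l_s
o = torch.randn(hidden_dim)              # object activation at layer l_o

# Forward map: LRE(s) = W s + b
o_estimate = W @ s + b

# Low-rank pseudo-inverse of W via SVD, keeping the top-k singular values.
rank = 50
U, S, Vh = torch.linalg.svd(W)
W_pinv = Vh[:rank].T @ torch.diag(1 / S[:rank]) @ U[:, :rank].T

# Inverse map: LRE†(o) = W†(o - b), estimating s from o.
s_estimate = W_pinv @ (o - b)

# The same inverse map yields an LRC direction v_o for the concept (r, o),
# which can act like a classifier via cosine similarity with subject activations.
v_o = s_estimate / s_estimate.norm()
score = torch.nn.functional.cosine_similarity(v_o, s, dim=0)
```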

For more information on LREs and LRCs, check out the following papers:

  • Linearity of Relation Decoding in Transformer Language Models (Hernandez et al., 2023)
  • Identifying Linear Relational Concepts in Large Language Models (Chanin et al., 2023, arXiv:2311.08968)

Installation

pip install linear-relational

Usage

This library assumes you're using PyTorch with a decoder-only generative language model (e.g., GPT, Llama) and a tokenizer from Huggingface.

Training an LRE

To train an LRE for a relation, first collect prompts which elicit the relation. We provide a Prompt class to represent this data, and a Trainer class to make training an LRE easy. Below, we train an LRE to represent the "located in country" relation.

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from linear_relational import Prompt, Trainer

# We load a generative LM from huggingface. The LMHead must be included.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Prompts consist of text, an answer, and subject.
# The subject must appear in the text. The answer
# is what the model should respond with, and corresponds to the "object".
prompts = [
    Prompt("Paris is located in the country of", "France", subject="Paris"),
    Prompt("Shanghai is located in the country of", "China", subject="Shanghai"),
    Prompt("Kyoto is located in the country of", "Japan", subject="Kyoto"),
    Prompt("San Jose is located in the country of", "Costa Rica", subject="San Jose"),
]

trainer = Trainer(model, tokenizer)

lre = trainer.train_lre(
    relation="located in country",
    subject_layer=8,  # subject layer must be before the object layer
    object_layer=10,
    prompts=prompts,
)
```

Working with an LRE

An LRE is a PyTorch module, so once an LRE is trained, we can use it to predict object activations from subject activations:

```python
object_acts_estimate = lre(subject_acts)
```

We can also create a low-rank estimate of the LRE:

```python
low_rank_lre = lre.to_low_rank(50)
low_rank_obj_acts_estimate = low_rank_lre(subject_acts)
```

Finally we can invert the LRE:

```python
inv_lre = lre.invert(rank=50)
subject_acts_estimate = inv_lre(object_acts)
```
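
The snippets above assume you already have subject_acts and object_acts tensors to feed in. As a hedged sketch (using only standard Huggingface APIs rather than a helper from this library, and assuming the library's "layer 8" corresponds to hidden_states[8]), you could collect a subject activation like this:

```python
import torch

# Illustrative sketch: extract the activation of the subject's last token
# at layer 8, reusing `model` and `tokenizer` from the training example.
text = "Paris is located in the country of"
subject = "Paris"

inputs = tokenizer(text, return_tensors="pt")
# Simple heuristic for this example: the subject starts the prompt, so its
# last token index is its token count minus one.
subject_index = len(tokenizer(subject)["input_ids"]) - 1

with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# hidden_states[0] is the embedding output; we assume here that the
# library's layer numbering matches hidden_states indexing.
subject_acts = outputs.hidden_states[8][0, subject_index]

object_acts_estimate = lre(subject_acts)
```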

Training LRCs for a relation

The Trainer can also create LRCs for a relation. Internally, this first creates an LRE, inverts it, then generates LRCs from each object in the relation. Objects refer to the answers in the prompts; e.g., in the example above, "France" is an object, "Japan" is an object, and so on.

```python
from transformers import GPT2LMHeadModel, GPT2TokenizerFast
from linear_relational import Prompt, Trainer

# We load a generative LM from huggingface. The LMHead must be included.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Prompts consist of text, an answer, and subject.
# The subject must appear in the text. The answer
# is what the model should respond with, and corresponds to the "object".
prompts = [
    Prompt("Paris is located in the country of", "France", subject="Paris"),
    Prompt("Shanghai is located in the country of", "China", subject="Shanghai"),
    Prompt("Kyoto is located in the country of", "Japan", subject="Kyoto"),
    Prompt("San Jose is located in the country of", "Costa Rica", subject="San Jose"),
]

trainer = Trainer(model, tokenizer)

concepts = trainer.train_relation_concepts(
    relation="located in country",
    subject_layer=8,
    object_layer=10,
    prompts=prompts,
    max_lre_training_samples=10,
    inv_lre_rank=50,
)
```

Causal editing

Once we have LRCs trained, we can use them to perform causal edits while the model is running. For instance, we can perform a causal edit to make the model output that "Shanghai is located in the country of France" by subtracting the "located in country: China" concept from "Shanghai" and adding the "located in country: France" concept. We can use the CausalEditor class to perform these edits.

```python
from linear_relational import CausalEditor

concepts = trainer.train_relation_concepts(...)

editor = CausalEditor(model, tokenizer, concepts=concepts)

edited_answer = editor.swap_subject_concepts_and_predict_greedy(
    text="Shanghai is located in the country of",
    subject="Shanghai",
    remove_concept="located in country: China",
    add_concept="located in country: France",
    edit_single_layer=8,
    magnitude_multiplier=3.0,
    predict_num_tokens=1,
)
print(edited_answer)  # " France"
```

Single-layer vs multi-layer edits

Above we performed a single-layer edit, only modifying subject activations at layer 8. However, we may want to perform an edit at all subject layers at the same time instead. To do this, we can pass edit_single_layer=False to editor.swap_subject_concepts_and_predict_greedy(). We should also reduce the magnitude_multiplier since the edit is now applied at every layer; if the multiplier is too large, it will drown out the rest of the activations in the model. The magnitude_multiplier is a hyperparameter that requires tuning depending on the model being edited.

```python
from linear_relational import CausalEditor

concepts = trainer.train_relation_concepts(...)

editor = CausalEditor(model, tokenizer, concepts=concepts)

edited_answer = editor.swap_subject_concepts_and_predict_greedy(
    text="Shanghai is located in the country of",
    subject="Shanghai",
    remove_concept="located in country: China",
    add_concept="located in country: France",
    edit_single_layer=False,
    magnitude_multiplier=0.1,
    predict_num_tokens=1,
)
print(edited_answer)  # " France"
```

Concept matching

We can use learned concepts (LRCs) as classifiers, matching them against subject activations in sentences. The ConceptMatcher class performs this matching.

```python
from linear_relational import ConceptMatcher

concepts = trainer.train_relation_concepts(...)

matcher = ConceptMatcher(model, tokenizer, concepts=concepts)

match_info = matcher.query("Beijing is a northern city", subject="Beijing")

print(match_info.best_match.concept)  # located in country: China
print(match_info.best_match.score)  # 0.832
```
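
Since query() takes a single sentence and subject, matching a batch of sentences is just a loop over the same call. A small usage sketch with made-up sentences:

```python
# Hypothetical (sentence, subject) pairs to classify.
sentences = [
    ("Beijing is a northern city", "Beijing"),
    ("Kyoto has many temples", "Kyoto"),
]
for text, subject in sentences:
    match_info = matcher.query(text, subject=subject)
    print(subject, match_info.best_match.concept, match_info.best_match.score)
```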

Acknowledgements

This library is inspired by and uses modified code from the following excellent projects:

Contributing

Any contributions to improve this project are welcome! Please open an issue or pull request in this repo with any bugfixes / changes / improvements you have!

This project uses Black for code formatting, Flake8 for linting, and Pytest for tests. Make sure any changes you submit pass these checks in your PR. If you have trouble getting them to run, feel free to open a pull request regardless and we can discuss further in the PR.

License

This code is released under an MIT license.

Citation

If you use this library in your work, please cite the following:

```bibtex
@article{chanin2023identifying,
  title={Identifying Linear Relational Concepts in Large Language Models},
  author={David Chanin and Anthony Hunter and Oana-Maria Camburu},
  journal={arXiv preprint arXiv:2311.08968},
  year={2023}
}
```

Owner

  • Name: David Chanin
  • Login: chanind
  • Kind: user
  • Location: London, UK
  • Company: UCL

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Chanin"
    given-names: "David"
  - family-names: "Hunter"
    given-names: "Anthony"
  - family-names: "Camburu"
    given-names: "Oana-Maria"
title: "Identifying Linear Relational Concepts in Large Language Models"
doi: 10.48550/arXiv.2311.08968
date-released: 2023-11-15
url: "https://arxiv.org/abs/2311.08968"

GitHub Events

Total
  • Watch event: 4
  • Fork event: 1
Last Year
  • Watch event: 4
  • Fork event: 1

Committers

Last synced: over 1 year ago

All Time
  • Total Commits: 73
  • Total Committers: 2
  • Avg Commits per committer: 36.5
  • Development Distribution Score (DDS): 0.233
Past Year
  • Commits: 73
  • Committers: 2
  • Avg Commits per committer: 36.5
  • Development Distribution Score (DDS): 0.233
Top Committers
Name Email Commits
David Chanin c****v@g****m 56
github-actions g****s@g****m 17
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 1
  • Total pull requests: 9
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 9 minutes
  • Total issue authors: 1
  • Total pull request authors: 1
  • Average comments per issue: 1.0
  • Average comments per pull request: 0.44
  • Merged pull requests: 9
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 7
  • Average time to close issues: N/A
  • Average time to close pull requests: 10 minutes
  • Issue authors: 0
  • Pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.57
  • Merged pull requests: 7
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • sdspieg (1)
Pull Request Authors
  • chanind (17)
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 151 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 17
  • Total maintainers: 1
pypi.org: linear-relational

A Python library for working with Linear Relational Embeddings (LREs) and Linear Relational Concepts (LRCs) for LLMs

  • Versions: 17
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 151 Last month
Rankings
Dependent packages count: 10.0%
Average: 38.8%
Dependent repos count: 67.5%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/ci.yaml actions
  • actions/checkout v4 composite
  • actions/setup-python v4 composite
  • peaceiris/actions-gh-pages v3 composite
  • pypa/gh-action-pypi-publish release/v1 composite
  • python-semantic-release/python-semantic-release v8.0.7 composite
  • python-semantic-release/upload-to-gh-release main composite
  • snok/install-poetry v1 composite
pyproject.toml pypi
  • dataclasses-json ^0.6.2
  • python ^3.10
  • tqdm >=4.0.0
  • transformers ^4.35.2