046-forget-me-not-learning-to-forget-in-text-to-image-diffusion-models

https://github.com/szu-advtech-2024/046-forget-me-not-learning-to-forget-in-text-to-image-diffusion-models

Science Score: 31.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.2%) to scientific vocabulary
Last synced: 7 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: SZU-AdvTech-2024
  • Default Branch: main
  • Size: 0 Bytes
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created about 1 year ago · Last pushed about 1 year ago
Metadata Files
Citation

https://github.com/SZU-AdvTech-2024/046-Forget-Me-Not-Learning-to-Forget-in-Text-to-Image-Diffusion-Models/blob/main/

# Forget-Me-Not: Learning to Forget in Text-to-Image Diffusion Models
#### Features

- Forget-Me-Not is a plug-and-play, efficient, and effective concept-forgetting and correction method for large-scale text-to-image models.
- It provides an efficient way to forget specific concepts with as few as 35 optimization steps, which typically takes about 30 seconds.
- It can be easily adapted as lightweight patches for Stable Diffusion, allowing for multi-concept manipulation and convenient distribution.
- A novel attention re-steering loss demonstrates that pretrained models can be further fine-tuned solely with self-supervised signals, i.e. attention scores.
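
The attention re-steering idea can be sketched as a loss that pushes the cross-attention scores of the concept's text tokens toward zero. The function name, argument names, and tensor shapes below are illustrative assumptions, not the repository's actual API:

```python
# Hedged sketch of an attention re-steering loss: given softmaxed
# cross-attention probabilities from a diffusion UNet and the indices of
# the text tokens naming the concept to forget, drive those tokens'
# attention toward zero. Names and shapes are assumptions for illustration.
import torch

def attention_resteering_loss(attn_probs, concept_token_indices):
    # attn_probs: (batch * heads, image_tokens, text_tokens) attention
    # probabilities collected from the UNet's cross-attention layers.
    # Select the columns belonging to the concept's tokens ...
    concept_attn = attn_probs[..., concept_token_indices]
    # ... and penalize their magnitude, so the model stops attending
    # to (i.e. "forgets") the concept.
    return (concept_attn ** 2).mean()

# Toy usage: uniform attention over 77 text tokens; forget tokens 4 and 5.
probs = torch.full((8, 64, 77), 1.0 / 77, requires_grad=True)
loss = attention_resteering_loss(probs, [4, 5])
loss.backward()  # gradients flow back into the attention computation
```

Minimizing the mean squared attention is one simple choice; the key point is that the signal is self-supervised: no target images or labels are needed, only the model's own attention scores.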

## Setup

```
conda create -n forget-me-not python=3.8
conda activate forget-me-not

pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu116

pip install -r requirements.txt
```

## Train
We provide an example of forgetting the identity of Elon Musk.

- First, train a Textual Inversion (Ti) embedding of the concept. This step is optional and only needed if `use_ti: true` in `attn.yaml`.
```
python run.py configs/ti.yaml
```
- Second, use attention re-steering to forget the concept.
```
python run.py configs/attn.yaml
```
- Results can be found in `exps_ti` and `exps_attn`.

## Empirical Guidance

- Modify `ti.yaml` to tune Ti. In practice, the prompt templates, initializer tokens, and the number of tokens all influence the inverted tokens, and thus the forgetting results.
- Modify `attn.yaml` to tune the forgetting procedure. The concept and its type are specified under `multi_concept` as `[elon-musk, object]`. During training, `-` is replaced with a space to form the plain text of the concept. A folder of training images with the same name (`elon-musk`) is expected under the `data` folder. Set `use_ti` to use inverted tokens rather than the plain text of the concept. Set `only_optimize_ca` to tune only the cross-attention layers; otherwise the whole UNet is tuned. Set `use_pooler` to include the pooler token `<|endoftext|>` in the attention re-steering loss.
- To achieve the best results, tune hyperparameters such as `max_train_steps` and `learning_rate`; the best values vary from concept to concept.
- Using more precise attention scores can help: for example, when forgetting an identity, segment out the pixels of the face and use only their attention scores instead of those of all pixels.
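
Putting the knobs above together, a hypothetical `attn.yaml` might look like the following. The key names follow the options described in this section; the exact schema is an assumption, not a verified copy of the repository's config:

```yaml
# Hypothetical attn.yaml sketch; key names may differ from the actual schema.
multi_concept:
  - [elon-musk, object]   # "-" becomes a space; images expected in data/elon-musk
use_ti: true              # use inverted tokens instead of the plain-text concept
only_optimize_ca: true    # tune only cross-attention layers, not the whole UNet
use_pooler: false         # include <|endoftext|> in the re-steering loss
max_train_steps: 35       # tune per concept
learning_rate: 1.0e-4     # tune per concept
```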

Owner

  • Name: SZU-AdvTech-2024
  • Login: SZU-AdvTech-2024
  • Kind: organization

Citation (citation.txt)

@inproceedings{REPO046,
    author = "Zhang, Gong and Wang, Kai and Xu, Xingqian and Wang, Zhangyang and Shi, Humphrey",
    booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition",
    pages = "1755--1764",
    title = "{Forget-me-not: Learning to forget in text-to-image diffusion models}",
    year = "2024"
}

GitHub Events

Total
  • Push event: 2
  • Create event: 3
Last Year
  • Push event: 2
  • Create event: 3