mergevq
[CVPR] MergeVQ: A Unified Framework for Visual Generation and Representation with Token Merging and Quantization
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: Links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (7.5%) to scientific vocabulary
Keywords
Repository
[CVPR] MergeVQ: A Unified Framework for Visual Generation and Representation with Token Merging and Quantization
Basic Info
- Host: GitHub
- Owner: ApexGen-X
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://www.arxiv.org/abs/2504.00999v1
- Size: 9.75 MB
Statistics
- Stars: 31
- Watchers: 2
- Forks: 3
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
MergeVQ: A Unified Framework for Visual Generation and Representation with Token Merging and Quantization (CVPR 2025)
[Siyuan Li](https://lupin1998.github.io)<sup>1,3*</sup>, [Luyuan Zhang](https://openreview.net/profile?id=~Luyuan_Zhang1)<sup>2*</sup>, [Zedong Wang](https://jacky1128.github.io)<sup>4</sup>, [Juanxi Tian](https://tianshijing.github.io)<sup>3</sup>, [Cheng Tan](https://chengtan9907.github.io)<sup>1,3</sup>, [Zicheng Liu](https://pone7.github.io)<sup>1,3</sup>, [Chang Yu](https://openreview.net/profile?id=~Chang_Yu1)<sup>3</sup>, [Qingsong Xie](https://openreview.net/profile?id=~Qingsong_Xie1)<sup>5†</sup>, [Haoqian Wang](https://www.sigs.tsinghua.edu.cn/whq_en/main.htm)<sup>2</sup>, [Zhen Lei](http://www.cbsr.ia.ac.cn/users/zlei/)<sup>6,7,8†</sup>

<sup>1</sup> Zhejiang University, <sup>2</sup> Tsinghua University, <sup>3</sup> Westlake University, <sup>4</sup> HKUST, <sup>5</sup> OPPO AI Center, <sup>6</sup> CAIR, HKISI-CAS, <sup>7</sup> MAIS CASIA, <sup>8</sup> University of Chinese Academy of Sciences

<sup>*</sup> Equal Contributions; <sup>†</sup> Corresponding Authors.

Masked Image Modeling (MIM) with Vector Quantization (VQ) has achieved great success in both self-supervised pre-training and image generation. However, most existing methods struggle to address the trade-off in a shared latent space between generation quality and representation learning and efficiency. To push the limits of this paradigm, we propose MergeVQ, which incorporates token merging techniques into VQ-based autoregressive generative models to bridge the gap between visual generation and representation learning in a unified architecture. During pre-training, MergeVQ decouples top-k semantics from the latent space with a token merge module after the self-attention blocks in the encoder for subsequent Look-up Free Quantization (LFQ) and global alignment, and recovers their fine-grained details through cross-attention in the decoder for reconstruction. For second-stage generation, we introduce MergeAR, which performs KV Cache compression for efficient raster-order prediction. Experiments on ImageNet verify that MergeVQ as an AR generative model achieves competitive performance in both representation learning and image generation tasks while maintaining favorable token efficiency and inference speed.
🤗 HuggingFace Daily Papers Top-1: https://huggingface.co/papers/2504.00999
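As context for the abstract above, here is a rough, conceptual sketch of ToMe-style top-k token merging. It is our own illustration, not the official MergeVQ module: the norm-based scoring and the averaging rule are assumptions made purely for exposition.

```python
# Conceptual sketch of ToMe-style top-k token merging (NOT the official MergeVQ
# module): keep the k highest-scoring tokens and average every remaining token
# into its most similar kept token. Norm-based scoring is an illustrative choice.
import torch
import torch.nn.functional as F

def merge_tokens(x: torch.Tensor, k: int):
    """x: (B, N, C) encoder tokens -> (B, k, C) merged tokens and the token-to-slot map."""
    B, N, C = x.shape
    keep_idx = x.norm(dim=-1).topk(k, dim=1).indices                      # (B, k)
    kept = torch.gather(x, 1, keep_idx.unsqueeze(-1).expand(-1, -1, C))   # (B, k, C)
    # assign every token (kept ones included) to its most similar kept token
    sim = F.normalize(x, dim=-1) @ F.normalize(kept, dim=-1).transpose(1, 2)
    assign = sim.argmax(dim=-1)                                           # (B, N), values in [0, k)
    # average all tokens mapped to the same kept slot
    merged = torch.zeros_like(kept)
    counts = torch.zeros(B, k, 1, dtype=x.dtype, device=x.device)
    merged.scatter_add_(1, assign.unsqueeze(-1).expand(-1, -1, C), x)
    counts.scatter_add_(1, assign.unsqueeze(-1), torch.ones(B, N, 1, dtype=x.dtype, device=x.device))
    return merged / counts.clamp(min=1), assign

tokens = torch.randn(2, 256, 64)            # e.g. 16x16 patch tokens with dim 64
merged, assign = merge_tokens(tokens, k=144)
print(merged.shape, assign.shape)           # torch.Size([2, 144, 64]) torch.Size([2, 256])
```

In the paper's description, the kept top-k semantic tokens go on to Look-up Free Quantization and global alignment, while the assignment map lets the decoder recover fine-grained details via cross-attention.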
Catalog
We plan to release implementations of MergeVQ in a few months (before CVPR 2025 takes place). Please watch this repository for the latest releases, and feel free to open issues for discussion! Currently, we have released the basic implementations of the MergeVQ tokenizers.
📖 Implementations
🛠️ Installation
GPU
- Environments: We have tested with `Python 3.10.0 + torch 2.1.0 + cuda 12.1` and `Python 3.8.8 + torch 1.13.0 + cuda 11.8`; other versions may also work.
- Dependencies: `pip install -r requirements.txt`. Here is an example of installing `torch 2.4.0 + cuda 12.4` from scratch:

```sh
conda create -n mergevq python=3.10.0
conda activate mergevq
pip install torch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 --index-url https://download.pytorch.org/whl/cu124
pip install -r requirements.txt
```
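As a quick, optional sanity check (our own addition, not part of the repository), you can verify that the installed torch build sees the GPU before launching any scripts:

```python
# Environment sanity check (not part of the repository): verify the torch build,
# CUDA availability, and that torchvision imports cleanly.
import torch
import torchvision

print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```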
NPU
- Environments: `Python 3.9.16` and `CANN 8.0.T13`.
- Main Dependencies: `torch==2.1.0+cpu`, `torch-npu==2.1.0.post3-20240523`, and `Lightning`.
- Other Dependencies: see `requirements.txt`.
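Similarly, a rough check of our own (assuming the standard torch_npu package from the Ascend toolchain is installed) confirms the NPU is visible to PyTorch:

```python
# Rough NPU visibility check (not part of the repository); assumes torch_npu
# from the Ascend toolchain is installed alongside the CPU build of torch.
import torch
import torch_npu  # registers the "npu" device with torch

print("torch:", torch.__version__)
print("NPU available:", torch.npu.is_available())
if torch.npu.is_available():
    print("NPU count:", torch.npu.device_count())
```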
Datasets Preparation
We use ILSVRC2012 ImageNet with the training and validation sets at the root, which can be downloaded and untarred into the following structure:
```
.cache/imagenet
├── train/
│   ├── n01440764
│   │   ├── n01440764_10026.JPEG
│   │   ├── n01440764_10027.JPEG
│   │   ├── ...
│   ├── n01443537
│   ├── ...
└── val/
    ├── n01440764
    ├── n01443537
    ├── ...
```
When training or evaluation starts, the following meta files will be generated under `.cache/imagenet/train` and `.cache/imagenet/val`: `filelist.txt`, `imagenet_idx_to_synset.yaml`, `synset_human.txt`, and `validation_synset.txt`. If you want to use a custom dataset or ImageNet placed at another path, please specify `cachedir` for `taming.data.imagenet.ImageNetTrain` in the training config file.
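Before launching training, a small helper of our own (not shipped with the repository; it only assumes the default `.cache/imagenet` layout above) can count the synset folders so that path mistakes surface early:

```python
# Count ImageNet synset folders under the expected root (our own helper, not part
# of the repository). Expect 1000 class folders in both train/ and val/.
from pathlib import Path

root = Path(".cache/imagenet")
for split in ("train", "val"):
    classes = [d for d in (root / split).iterdir() if d.is_dir()]
    images = sum(1 for d in classes for _ in d.glob("*.JPEG"))
    print(f"{split}: {len(classes)} classes, {images} images")
```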
Pre-training Models
If you cannot access https://huggingface.co/ reliably, there are two solutions.
* Set the endpoint to the mirror website (https://hf-mirror.com) and start training directly:
```sh
export HF_ENDPOINT=https://hf-mirror.com
```
* Manually download the following pre-trained models from the official or mirror websites and copy them to the cache folder as follows, or modify the config file with the path of the local huggingface models.
```
/root/.cache/huggingface/hub
├── models--facebook--dinov2-base
├── models--laion--CLIP-ViT-B-16-laion2B-s34B-b88K
└── models--timm--vit_base_patch14_dinov2.lvd142m
```
```python
# Pre-download the teacher/distillation checkpoints into the local huggingface cache.
from timm import create_model
from transformers import AutoModel

# DINOv2 ViT-B/14 teacher weights
teacher_weights = create_model("vit_base_patch14_dinov2.lvd142m", pretrained=True).state_dict()
# CLIP ViT-B/16 (LAION-2B) teacher weights
teacher_weights = create_model("vit_base_patch16_clip_224.laion2b", pretrained=True).state_dict()
# DINOv2-base distillation model from transformers
dist_model = AutoModel.from_pretrained("facebook/dinov2-base")
```
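To double-check that the checkpoints actually landed in the local cache, a small verification of our own (not from the repository) reloads one of them with network access disallowed:

```python
# Verify the distillation model loads purely from the local cache (our own check,
# not part of the repository). local_files_only=True fails if anything is missing.
from transformers import AutoModel

dist_model = AutoModel.from_pretrained("facebook/dinov2-base", local_files_only=True)
print("facebook/dinov2-base loaded from local cache:", dist_model.config.model_type)
```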
Stage I: Training of Visual Tokenizer
🚀 Training Scripts
- $256\times 256$ MergeVQ-d64 (G+R) Tokenizer Training with multiple nodes:

  ```sh
  bash scripts/train_tokenizer/run_256_GR_d64_multi.sh MASTER_ADDR MASTER_PORT NODE_RANK
  ```

  Or you can start training and evaluation on a single node, taking 8xA100-80G with a batch size of 16 and 2 gradient accumulation steps as an example:

  ```sh
  bash scripts/train_tokenizer/run_256_GR_d64_single.sh
  ```

- $256\times 256$ MergeVQ-d96 (G+R) Tokenizer Training with multiple nodes:

  ```sh
  bash scripts/train_tokenizer/run_256_GR_d96_multi.sh MASTER_ADDR MASTER_PORT NODE_RANK
  ```

  Or you can start training and evaluation on a single node, taking 8xA100-80G with a batch size of 16 and 2 gradient accumulation steps as an example:

  ```sh
  bash scripts/train_tokenizer/run_256_GR_d96_single.sh
  ```

- $256\times 256$ MergeVQ-d64 (G) Tokenizer Training with multiple nodes:

  ```sh
  bash scripts/train_tokenizer/run_256_G_d64_multi.sh MASTER_ADDR MASTER_PORT NODE_RANK
  ```

  Or you can start training and evaluation on a single node, taking 8xA100-80G with a batch size of 8 and 4 gradient accumulation steps as an example:

  ```sh
  bash scripts/train_tokenizer/run_256_G_d64_single.sh
  ```
Evaluation Scripts
We gather the evaluation scripts for the experiments above into one bash file, which can be executed after modifying the paths to config files, results, and checkpoints:
```sh
bash scripts/evaluation/evaluation_mergevq.sh
```
Note of Errors
If errors occur during training, you may solve them with the following steps:
* The version of timm: a low version of timm such as 0.6.13 will cause errors when building the Transformer blocks, which can be solved by `pip install timm==0.9.11` (see the version check after this list).
* Errors in building up the ImageNet dataset: although the meta files of ImageNet will be generated automatically, you may copy our preprocessed meta files manually if they cannot be generated.
<!-- * The assertion error of accumulate_grad_batches from lightning. Since we manually use accumulate_grad_batches in config files to setup gradient accumulation, please replace the source file configuration_validator.py with our modified version in lightning.
sh
cp -r scripts/.modify_lightning/configuration_validator.py /root/anaconda3/envs/maskgit/lib/python3.10/site-packages/lightning/pytorch/trainer/configuration_validator
-->
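As referenced in the first item above, a quick check of our own (not part of the repository) confirms the installed timm version before debugging further:

```python
# Confirm the installed timm version is recent enough (our own check, not part
# of the repository); the README recommends 0.9.11.
import timm

print("timm:", timm.__version__)
assert tuple(int(v) for v in timm.__version__.split(".")[:2]) >= (0, 9), \
    "timm is too old; run: pip install timm==0.9.11"
```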
🍺 Performance and Models (Updating)
Tokenizer

| Method | Type | #Tokens | Train Size | Epochs | Codebook Size | rFID (Full) | rFID (Merge) | Checkpoint |
|---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Open-MAGVIT2 | 2D | $16^2$ | $256^2$ | 270 | $2^{18}$ | 1.53 (256) | - | ckpt |
| MergeVQ-d32 (G) | 1D | [256, 1024] | $256^2$ | 200 | $2^{18}$ | 0.48 (1024) | 0.80 (256) | TODO |
| MergeVQ-d64 (G) | 1D | [256, 1024] | $256^2$ | 100 | $2^{18}$ | 0.49 (1024) | 0.91 (256) | TODO |
| MergeVQ-d64 (G) | 1D | [256, 1024] | $256^2$ | 200 | $2^{18}$ | 0.43 (1024) | 0.83 (256) | TODO |
| MergeVQ-d32 (G+R) | 1D | [144, 256] | $256^2$ | 270 | $2^{18}$ | 1.27 (256) | 1.74 (144) | TODO |
| MergeVQ-d64 (G+R) | 1D | [144, 256] | $256^2$ | 270 | $2^{18}$ | 1.12 (256) | 1.48 (144) | TODO |
| MergeVQ-d96 (G+R) | 1D | [144, 256] | $256^2$ | 200 | $2^{18}$ | 1.03 (256) | 1.33 (144) | TODO |
Stage II: Training of Auto-Regressive Models
🚀 Training Scripts
Please see scripts/train_autogressive/run.sh for different model configurations.

```sh
bash scripts/train_autogressive/run.sh MASTER_ADDR MASTER_PORT NODE_RANK
```
🚀 Sample Scripts
Please see scripts/train_autogressive/run.sh for the sampling hyper-parameters of models at different scales.

```sh
bash scripts/evaluation/sample_npu.sh YourTotal_Rank
# or
bash scripts/evaluation/sample_gpu.sh YourTotal_Rank
```
License
This project is released under the Apache 2.0 license.
Acknowledgement
Our implementation is mainly based on the following codebases. We gratefully thank the authors for their wonderful works.
- VQGAN: Taming Transformers for High-Resolution Image Synthesis.
- ToMe: Token Merging: Your ViT but Faster.
- LlamaGen: Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation.
- SEED-Voken (OpenMAGVIT2): SEED-Voken: A Series of Powerful Visual Tokenizers.
- pytorch-image-models: PyTorch image models, scripts, pretrained weights.
Citation
If you find this repository helpful, please consider citing:
@inproceedings{cvpr2025mergevq,
title={MergeVQ: A Unified Framework for Visual Generation and Representation with Disentangled Token Merging and Quantization},
author={Li, Siyuan and Zhang, Luyuan and Wang, Zedong and Tian, Juanxi and Tan, Cheng and Liu, Zicheng and Yu, Chang and Xie, Qingsong and Lu, Haonan and Wang, Haoqian and Lei, Zhen},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2025}
}
Owner
- Name: ApexGen
- Login: ApexGen-X
- Kind: organization
- Repositories: 1
- Profile: https://github.com/ApexGen-X
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this codebase, please cite it as below."
authors:
  - family-names: "Luo"
    given-names: "Zhuoyan"
  - family-names: "Shi"
    given-names: "Fengyuan"
  - family-names: "Ge"
    given-names: "Yixiao"
title: "Open-MAGVIT2"
version: 1.0
date-released: 2024-06-14
url: "https://github.com/TencentARC/Open-MAGVIT2"
GitHub Events
Total
- Watch event: 34
- Member event: 2
- Public event: 1
- Push event: 29
- Fork event: 2
Last Year
- Watch event: 34
- Member event: 2
- Public event: 1
- Push event: 29
- Fork event: 2