https://github.com/924973292/awesome-multi-modal-object-re-identification

Welcome to the Awesome Multi-Modal Object Re-Identification Repository! This repository curates and shares the latest methods, datasets, and resources in the domain of multi-modal object re-identification, bringing together cutting-edge research, tools, and papers to advance both the study and application of this field.


Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, sciencedirect.com, springer.com, mdpi.com, ieee.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (5.2%) to scientific vocabulary

Keywords

awesome code-list missing-modal-retrieval multi-modal-learning multi-modal-object-re-identification paper-list person-reidentification vehicle-reidentification
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: 924973292
  • License: mit
  • Default Branch: master
  • Homepage:
  • Size: 40 KB
Statistics
  • Stars: 73
  • Watchers: 2
  • Forks: 5
  • Open Issues: 0
  • Releases: 0
Topics
awesome code-list missing-modal-retrieval multi-modal-learning multi-modal-object-re-identification paper-list person-reidentification vehicle-reidentification
Created about 2 years ago · Last pushed 7 months ago
Metadata Files
Readme License

README.md

Awesome-Multi-Modal Object Re-Identification Repository

Welcome to the Awesome-Multi-Modal Object Re-Identification Repository! This repository is dedicated to curating and sharing cutting-edge methods and resources specifically focused on multi-modal object re-identification.

My Papers

  • [CVPR25-IDEA]
    IDEA: Inverted Text with Cooperative Deformable Aggregation for Multi-modal Object Re-Identification
    Paper Code
  • [AAAI25-DeMo]
    DeMo: Decoupled Feature-Based Mixture of Experts for Multi-Modal Object Re-Identification
    Paper Code
  • [AAAI25-MambaPro]
    MambaPro: Multi-Modal Object Re-identification with Mamba Aggregation and Synergistic Prompt
    Paper Code
  • [CVPR24-EDITOR]
    Magic Tokens: Select Diverse Tokens for Multi-modal Object Re-Identification
    Paper Code
  • [AAAI24-TOP-ReID]
    TOP-ReID: Multi-spectral Object Re-Identification with Token Permutation
    Paper Code

Multi-Modal ReID

Methods

Multi-Modal Object ReID

  • [TIP25-DESANet]
    Escaping Modal Interactions: An Efficient DESANet for Multi-Modal Object Re-identification
    Paper Code
  • [CSCWD25-LMCNet]
    Lightweight Multi-Branch Feature Complementary Network for Multi-Modal Object Re-Identification
    Paper
  • [ArXiv25-UGG-ReID]
    UGG-ReID: Uncertainty-Guided Graph Model for Multi-Modal Object Re-Identification
    Paper
  • [ICML25-MFRNet]
    Multi-Modal Object Re-Identification via Sparse Mixture-of-Experts
    Paper Code
  • [ArXiv25-NEXT]
    NEXT: Multi-Grained Mixture of Experts via Text-Modulation for Multi-Modal Object Re-ID
    Paper
  • [TMM25-ICPL]
    ICPL-ReID: Identity-Conditional Prompt Learning for Multi-Spectral Object Re-Identification
    Paper Code
  • [ArXiv25-MGRNet]
    Reliable Multi-Modal Object Re-Identification via Modality-Aware Graph Reasoning
    Paper
  • [WACV25-DMPT]
    DMPT: Decoupled Modality-Aware Prompt Tuning for Multi-Modal Object Re-Identification
    Paper
  • [TIP25-PromptMA]
    Prompt-Based Modality Alignment for Effective Multi-Modal Object Re-Identification
    Paper Code
  • [CVPR25-IDEA]
    IDEA: Inverted Text with Cooperative Deformable Aggregation for Multi-modal Object Re-Identification
    Paper Code
  • [ArXiv25]
    Modality Unified Attack for Omni-Modality Person Re-Identification
    Paper
  • [AAAI25-DeMo]
    DeMo: Decoupled Feature-Based Mixture of Experts for Multi-Modal Object Re-Identification
    Paper Code
  • [AAAI25-MambaPro]
    MambaPro: Multi-Modal Object Re-identification with Mamba Aggregation and Synergistic Prompt
    Paper Code
  • [TCSVT24-RSCNet]
    Representation Selective Coupling via Token Sparsification for Multi-Spectral Object Re-Identification
    Paper
  • [ESWA25-LRMM]
    LRMM: Low rank multi-scale multi-modal fusion for person re-identification based on RGB-NI-TI
    Paper
  • [Sensors24-MambaReID]
    MambaReID: Exploiting Vision Mamba for Multi-Modal Object Re-Identification
    Paper
  • [CVPR24-EDITOR]
    Magic Tokens: Select Diverse Tokens for Multi-modal Object Re-Identification
    Paper Code
  • [AAAI24-TOP-ReID]
    TOP-ReID: Multi-spectral Object Re-Identification with Token Permutation
    Paper Code
  • [AAAI24-HTT]
Heterogeneous Test-Time Training for Multi-Modal Person Re-identification
    Paper Code
  • [NeurIPS23-UniCat]
    UniCat: Crafting a Stronger Fusion Baseline for Multimodal Re-Identification
    Paper Code
  • [arXiv23-GraFT]
    GraFT: Gradual Fusion Transformer for Multimodal Re-Identification
    Paper Code

Multi-Modal Person ReID

  • [TNNLS25-TIENet]
    TIENet: A Tri-Interaction Enhancement Network for Multimodal Person Reidentification
    Paper
  • [MLCCIM23-MMCF]
    Multimodal Consistency Co-Assisted Training for Person Re-Identification
    Paper
  • [ICSP23-LRFNet]
    Low-rank Fusion Network for Multi-modality Person Re-identification
    Paper
  • [TNNLS23-DENet]
    Dynamic Enhancement Network for Partial Multi-modality Person Re-identification
    Paper
  • [AAAI22-IEEE]
    Interact, Embed, and EnlargE: Boosting Modality-Specific Representations for Multi-Modal Person Re-identification
    Paper Code
  • [AAAI21-PFNet]
    Robust Multi-Modality Person Re-identification
    Paper

Multi-Modal Vehicle ReID

  • [IEEE Access25-SV2SAFA-V1]
    Swin Transformer With Late-Fusion Feature Aggregation for Multi-Modal Vehicle Reidentification
    Paper
  • [ArXiv25-CoEN]
    Collaborative Enhancement Network for Low-quality Multi-spectral Vehicle Re-identification
    Paper Code
  • [Applied Intelligence25]
    Generalizable Multi-spectral Vehicle Re-identification via Decoupled Subspaces
    Paper
  • [ESWA25-WTSF-ReID]
    Depth-driven Window-oriented Token Selection and Fusion for multi-modality vehicle re-identification with knowledge consistency constraint
    Paper Code
  • [Inform Fusion24-FACENet]
    Flare-aware cross-modal enhancement network for multi-spectral vehicle Re-identification
    Paper Code
  • [Sensors23-PHT]
    Progressively Hybrid Transformer for Multi-Modal Vehicle Re-Identification
    Paper
  • [TITS23-GPFNet]
    Graph-based progressive fusion network for multi-modality vehicle re-identification
    Paper
  • [Inform Fusion22-CCNet]
    Multi-spectral Vehicle Re-identification with Cross-directional Consistency Network and A High-quality Benchmark
    Paper Code
  • [ICSP22-GAFNet]
    Generative and attentive fusion for multi-spectral vehicle re-identification
    Paper
  • [AAAI20-HAMNet]
    Multi-Spectral Vehicle Re-Identification: A Challenge
    Paper Code

Datasets

Multi-Modal Person ReID

Star History

Star History Chart

Acknowledgments

I want to express my gratitude to the academic community and everyone contributing to the advancement of multi-modal object re-identification research.

Contact

Feel free to reach out if you have any questions, suggestions, or collaboration proposals:

Citation

If you find our work useful in your research, please consider citing our papers:

```bibtex
@inproceedings{wang2024top,
  title={TOP-ReID: Multi-spectral Object Re-Identification with Token Permutation},
  author={Wang, Yuhao and Liu, Xuehu and Zhang, Pingping and Lu, Hu and Tu, Zhengzheng and Lu, Huchuan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={6},
  pages={5758--5766},
  year={2024}
}

@InProceedings{Zhang2024CVPR,
  author={Zhang, Pingping and Wang, Yuhao and Liu, Yang and Tu, Zhengzheng and Lu, Huchuan},
  title={Magic Tokens: Select Diverse Tokens for Multi-modal Object Re-Identification},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month={June},
  year={2024},
  pages={17117-17126}
}

@inproceedings{wang2025decoupled,
  title={Decoupled feature-based mixture of experts for multi-modal object re-identification},
  author={Wang, Yuhao and Liu, Yang and Zheng, Aihua and Zhang, Pingping},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={8},
  pages={8141--8149},
  year={2025}
}

@inproceedings{wang2025mambapro,
  title={Mambapro: Multi-modal object re-identification with mamba aggregation and synergistic prompt},
  author={Wang, Yuhao and Liu, Xuehu and Yan, Tianyu and Liu, Yang and Zheng, Aihua and Zhang, Pingping and Lu, Huchuan},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={8},
  pages={8150--8158},
  year={2025}
}

@article{wang2025idea,
  title={IDEA: Inverted Text with Cooperative Deformable Aggregation for Multi-modal Object Re-Identification},
  author={Wang, Yuhao and Lv, Yongfeng and Zhang, Pingping and Lu, Huchuan},
  journal={arXiv preprint arXiv:2503.10324},
  year={2025}
}
```

Owner

  • Name: Yuhao Wang
  • Login: 924973292
  • Kind: user
  • Location: Dalian
  • Company: Dalian University of Technology

Born as small as a mustard seed, yet the heart holds Mount Sumeru.

GitHub Events

Total
  • Watch event: 36
  • Push event: 29
  • Fork event: 4
Last Year
  • Watch event: 36
  • Push event: 29
  • Fork event: 4