https://github.com/aim-uofa/matcher

[ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching

Science Score: 23.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, scholar.google
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.7%) to scientific vocabulary
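
The page does not say how these indicators combine into the 23.0% figure. Purely as an illustration of the shape such a score could take (boolean indicators plus the vocabulary-similarity term; every weight below is invented, and the undetected indicators are inferred from the missing detail lines above):

```python
# Illustrative only: the real scoring formula and weights are not published.
detected = {
    "CITATION.cff file": False,
    "codemeta.json file": True,           # "Found codemeta.json file"
    ".zenodo.json file": False,
    "DOI references": False,
    "academic publication links": True,   # arxiv.org, scholar.google
    "committers with academic emails": False,
    "institutional organization owner": False,
    "JOSS paper metadata": False,
}
vocab_similarity = 0.127                  # "Low similarity (12.7%)"

INDICATOR_WEIGHT = 0.08                   # invented weight per boolean signal
SIMILARITY_WEIGHT = 0.5                   # invented weight for the similarity term
score = sum(detected.values()) * INDICATOR_WEIGHT + vocab_similarity * SIMILARITY_WEIGHT
print(round(score, 3))                    # illustrative value, not the reported 23.0%
```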

Keywords

dinov2 generalist-model in-context-segmentation matcher sam
Last synced: 6 months ago

Repository

[ICLR'24] Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching

Basic Info
Statistics
  • Stars: 494
  • Watchers: 27
  • Forks: 33
  • Open Issues: 24
  • Releases: 0
Topics
dinov2 generalist-model in-context-segmentation matcher sam
Created almost 3 years ago · Last pushed about 1 year ago
Metadata Files
Readme · License

README.md

Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching

[Yang Liu](https://scholar.google.com/citations?user=9JcQ2hwAAAAJ&hl=en)¹*, [Muzhi Zhu](https://scholar.google.com/citations?user=064gBH4AAAAJ&hl=en)¹*, Hengtao Li¹*, [Hao Chen](https://stan-haochen.github.io/)¹, [Xinlong Wang](https://www.xloong.wang/)², [Chunhua Shen](https://cshen.github.io/)¹
¹[Zhejiang University](https://www.zju.edu.cn/english/) · ²[Beijing Academy of Artificial Intelligence](https://www.baai.ac.cn/english.html)
ICLR 2024

🚀 Overview

[figure]

📖 Description

Powered by large-scale pre-training, vision foundation models exhibit significant potential in open-world image understanding. However, unlike large language models that excel at directly tackling various language tasks, vision foundation models require a task-specific model structure followed by fine-tuning on specific tasks. In this work, we present Matcher, a novel perception paradigm that utilizes off-the-shelf vision foundation models to address various perception tasks. Matcher can segment anything by using an in-context example without training. Additionally, we design three effective components within the Matcher framework to collaborate with these foundation models and unleash their full potential in diverse perception tasks. Matcher demonstrates impressive generalization performance across various segmentation tasks, all without training. Our visualization results further showcase the open-world generality and flexibility of Matcher when applied to images in the wild.

Paper: arXiv:2305.13310
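
To make the paradigm concrete, here is a deliberately simplified sketch of the match-then-prompt idea. All names and signatures are hypothetical, and the paper's actual components (e.g. its prompt sampling and mask selection) add robustness that this skeleton omits:

```python
import numpy as np

def matcher_one_shot(ref_feats, tgt_feats, ref_mask, grid_hw, sam_predict):
    """Schematic one-shot segmentation in the spirit of Matcher.

    ref_feats, tgt_feats: (h*w, C) L2-normalised patch features from a
    frozen encoder such as DINOv2, laid out on an (h, w) grid.
    ref_mask: (h, w) binary mask of the in-context example, at grid size.
    sam_predict: callable mapping an array of (x, y) point prompts to a
    mask; a stand-in for a promptable segmenter such as SAM.
    """
    h, w = grid_hw
    assert ref_feats.shape[0] == h * w == tgt_feats.shape[0]
    inside = ref_mask.reshape(-1) > 0
    # Cosine similarity between masked reference patches and all target
    # patches (features are assumed L2-normalised).
    sims = ref_feats[inside] @ tgt_feats.T
    # Forward matching: best target patch for each masked reference patch.
    matched = np.unique(sims.argmax(axis=1))
    ys, xs = np.divmod(matched, w)        # grid coordinates of the matches
    points = np.stack([xs, ys], axis=1)   # (x, y) prompts for the segmenter
    return sam_predict(points)
```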

ℹ️ News

  • 2024.1 Matcher has been accepted to ICLR 2024!
  • 2024.1 Matcher supports Semantic-SAM for better part segmentation.
  • 2024.1 We provide a Gradio Demo (a minimal sketch of such a demo follows this list).
  • 2024.1 Release code of one-shot semantic segmentation and one-shot part segmentation tasks.
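
As a rough sketch of what the Gradio demo mentioned above could look like with the pinned gradio==3.32.0 (the `run_matcher` function is a hypothetical stand-in, not the repository's actual entry point):

```python
import gradio as gr
import numpy as np

def run_matcher(ref_image, ref_mask, target_image):
    # Placeholder logic: the real demo would match DINOv2 features between
    # the reference and target, then prompt SAM with the matched points.
    return np.zeros(target_image.shape[:2], dtype=np.uint8)

demo = gr.Interface(
    fn=run_matcher,
    inputs=[
        gr.Image(type="numpy", label="Reference image"),
        gr.Image(type="numpy", label="Reference mask"),
        gr.Image(type="numpy", label="Target image"),
    ],
    outputs=gr.Image(type="numpy", label="Predicted mask"),
)

if __name__ == "__main__":
    demo.launch()
```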

📖 Recommended Works

  • SINE: A Simple Image Segmentation Framework via In-Context Examples. GitHub.
  • DiffewS: Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation. GitHub.

🗓️ TODO

  • [x] Gradio Demo
  • [x] Release code of one-shot semantic segmentation and one-shot part segmentation tasks
  • [ ] Release code and models for VOS

🏗️ Installation

See installation instructions.

👻 Getting Started

See Preparing Datasets for Matcher.

See Getting Started with Matcher.

🖼️ Demo

One-Shot Semantic Segmentation

[figure]

One-Shot Object Part Segmentation

[figure]

Cross-Style Object and Object Part Segmentation

[figure]

Controllable Mask Output

[figure]

Video Object Segmentation

https://github.com/aim-uofa/Matcher/assets/119775808/9ff9502d-7d2a-43bc-a8ef-01235097d62b

🎫 License

For academic use, this project is licensed under the 2-clause BSD License. For commercial use, please contact Chunhua Shen.

🖊️ Citation

If you find this project useful in your research, please consider citing:

BibTeX:

@article{liu2023matcher,
  title={Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching},
  author={Liu, Yang and Zhu, Muzhi and Li, Hengtao and Chen, Hao and Wang, Xinlong and Shen, Chunhua},
  journal={arXiv preprint arXiv:2305.13310},
  year={2023}
}

Acknowledgement

SAM, DINOv2, SegGPT, HSNet, Semantic-SAM and detectron2.

Owner

  • Name: Advanced Intelligent Machines (AIM)
  • Login: aim-uofa
  • Kind: organization
  • Location: China

A research team at Zhejiang University, focusing on Computer Vision and broad AI research ...

GitHub Events

Total
  • Issues event: 10
  • Watch event: 75
  • Issue comment event: 2
  • Push event: 1
  • Pull request event: 1
  • Fork event: 10
Last Year
  • Issues event: 10
  • Watch event: 75
  • Issue comment event: 2
  • Push event: 1
  • Pull request event: 1
  • Fork event: 10

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 33
  • Total Committers: 3
  • Avg Commits per committer: 11.0
  • Development Distribution Score (DDS): 0.152 (see the check below)
Past Year
  • Commits: 5
  • Committers: 2
  • Avg Commits per committer: 2.5
  • Development Distribution Score (DDS): 0.2
Top Committers
  • yangliu (y****0@g****m): 28 commits
  • Chunhua Shen (1****n): 4 commits
  • Z-MU-Z (2****2@q****m): 1 commit
Committer Domains (Top 20 + Academic)
qq.com: 1
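
The Development Distribution Score is not defined on this page; it appears consistent with the common definition of one minus the top committer's share of commits. A quick check against the numbers reported here (the past-year split of 4-of-5 commits is an assumption):

```python
# Hypothesis: DDS = 1 - top_committer_commits / total_commits
all_time = 1 - 28 / 33     # yangliu authored 28 of the 33 all-time commits
print(round(all_time, 3))  # 0.152, matching the reported all-time DDS
past_year = 1 - 4 / 5      # assumes the top past-year committer made 4 of 5 commits
print(round(past_year, 3)) # 0.2, matching the reported past-year DDS
```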

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 36
  • Total pull requests: 4
  • Average time to close issues: 2 months
  • Average time to close pull requests: 1 minute
  • Total issue authors: 28
  • Total pull request authors: 3
  • Average comments per issue: 1.42
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 10
  • Pull requests: 2
  • Average time to close issues: 13 days
  • Average time to close pull requests: N/A
  • Issue authors: 8
  • Pull request authors: 1
  • Average comments per issue: 0.2
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • duoqingxiaowangzi (3)
  • dnnxl (2)
  • LIUYUANWEI98 (2)
  • Spritea (2)
  • needsee (2)
  • zzzyzh (2)
  • wzp8023391 (2)
  • NishaniKasineshan (1)
  • souxun2015 (1)
  • Skwarson96 (1)
  • miquel-espinosa (1)
  • isottongloria (1)
  • TritiumR (1)
  • paolopertino (1)
  • dbsdmlgus50 (1)
Pull Request Authors
  • fjchange (2)
  • paolopertino (2)
  • vincentme (2)

Dependencies

dinov2/eval/setup.py (pypi)
requirements.txt (pypi):
  • POT ==0.9.0
  • future ==0.18.2
  • gradio ==3.32.0
  • gradio-client ==0.2.5
  • iopath *
  • matplotlib ==3.3.4
  • numpy ==1.22.0
  • omegaconf *
  • opencv-python ==4.6.0.66
  • timm ==0.6.12
  • torch ==1.13.1
  • torchmetrics ==0.11.0
  • torchshow ==0.5.0
  • torchvision ==0.14.1
  • tqdm ==4.64.1
semantic_sam/body/encoder/ops/setup.py (pypi)
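
Most of the pins above are exact (`==`), so an installed environment can be checked against them with the standard library; a minimal sketch covering a subset of the pins:

```python
# Check installed versions against the exact pins from requirements.txt
# above (subset shown; extend the dict as needed).
from importlib.metadata import PackageNotFoundError, version

PINS = {
    "POT": "0.9.0",
    "gradio": "3.32.0",
    "numpy": "1.22.0",
    "torch": "1.13.1",
    "torchvision": "0.14.1",
}

for name, pinned in PINS.items():
    try:
        installed = version(name)
    except PackageNotFoundError:
        print(f"{name}: not installed (pinned {pinned})")
        continue
    status = "OK" if installed == pinned else f"MISMATCH (pinned {pinned})"
    print(f"{name}: {installed} {status}")
```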