commonsensevision
Open-world object detection using YOLOv8 + CLIP + LLaVA + GPT-based trait reasoning to detect objects based on intent and scene context.
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file — found CITATION.cff file
- ✓ codemeta.json file — found codemeta.json file
- ✓ .zenodo.json file — found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity — low similarity (8.6%) to scientific vocabulary
Repository
Open-world object detection using YOLOv8 + CLIP + LLaVA + GPT-based trait reasoning to detect objects based on intent and scene context.
Basic Info
- Host: GitHub
- Owner: ibrohimgets
- Language: Python
- Default Branch: main
- Size: 263 KB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
🧠 Commonsense-Guided Open-World Object Detection
This project combines YOLOv8, CLIP, LLaVA, and GPT-4-style trait reasoning to detect objects based on user intent and scene context — even if the object is unseen during training.
Example: Prompting “I need something to write with” will match the trait group “pen/pencil” and detect it in the scene, even if it wasn’t part of YOLO’s original label set.
🔍 Pipeline Overview
- YOLOv8 generates bounding boxes.
- LLaVA provides a detailed image description.
- The user prompt is mapped to object trait groups with a SentenceTransformer model.
- CLIP compares region crops with the intent-driven traits.
- Bounding boxes are filtered with CLIP + LLaVA + commonsense reasoning.
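Steps 3–4 above can be sketched roughly as follows. The embedding vectors here are toy stand-ins, hard-coded for illustration; a real run would encode the prompt and trait groups with a SentenceTransformer model (and score image crops with CLIP):

```python
# Illustrative sketch of prompt-to-trait matching. The 3-d vectors below are
# hypothetical stand-ins for real SentenceTransformer embeddings.
import math

trait_groups = {
    "pen/pencil":  [0.9, 0.1, 0.0],
    "cup/mug":     [0.1, 0.9, 0.0],
    "chair/stool": [0.0, 0.1, 0.9],
}
# Stand-in embedding for the prompt "I need something to write with".
prompt_vec = [0.85, 0.15, 0.05]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def match_trait_group(prompt_vec, trait_groups):
    """Return the trait group whose embedding is most similar to the prompt."""
    return max(trait_groups, key=lambda g: cosine(prompt_vec, trait_groups[g]))

print(match_trait_group(prompt_vec, trait_groups))  # → pen/pencil
```

With real embeddings the same `max`-over-cosine-similarity logic applies; only the vectors come from a model instead of being hard-coded.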
🖼️ Example
📝 Prompt: I need something to write with
🔍 Matched group: pen/pencil
📦 Detected in image → pen with bounding box
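The final filtering step (step 5 of the pipeline) could look roughly like this. The boxes, scores, and threshold are hypothetical stand-ins; in the actual pipeline the scores would come from CLIP comparing each YOLOv8 region crop against the matched trait group:

```python
# Illustrative sketch of box filtering: keep only candidate boxes whose
# CLIP-vs-trait similarity clears a threshold. All values are made up.
boxes = [
    {"label": "laptop", "xyxy": (10, 10, 200, 150),  "clip_score": 0.12},
    {"label": "pen",    "xyxy": (220, 40, 260, 160), "clip_score": 0.31},
    {"label": "mug",    "xyxy": (300, 90, 360, 170), "clip_score": 0.08},
]

THRESHOLD = 0.25  # assumed value, not taken from the repository

def filter_boxes(boxes, threshold=THRESHOLD):
    """Keep boxes whose similarity to the matched trait exceeds the threshold."""
    return [b for b in boxes if b["clip_score"] >= threshold]

kept = filter_boxes(boxes)
print([b["label"] for b in kept])  # → ['pen']
```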
📌 Inference & Evaluation
We provide an interactive pipeline for commonsense-driven object detection.
To run the full detection process with prompt input and image reasoning, execute:
```bash
python main.py
```
If you use this work, please cite:
```bibtex
@misc{iibrohimm2025commonsenseOD,
  title        = {Commonsense-Guided Open-World Object Detection Using LLMs and Visual-Semantic Reasoning},
  author       = {Muminov, Ibrohim and Kim, Jihie},
  howpublished = {\url{https://github.com/ibrohimgets/CommonsenseVision}},
  year         = {2025}
}
```
Owner
- Name: Ibrohim
- Login: ibrohimgets
- Kind: user
- Location: Beijing, China
- Company: Robotis
- Repositories: 1
- Profile: https://github.com/ibrohimgets
My name is Ibrohim Muminov. I'm a front-end developer who loves coding and enjoys building new projects.
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Commonsense-Guided Open-World Object Detection"
authors:
  - family-names: Muminov
    given-names: Ibrohim
  - family-names: Kim
    given-names: Jihie
date-released: 2025-04-30
url: "https://github.com/ibrohimgets/CommonsenseVision"
version: "1.0.0"
repository-code: "https://github.com/ibrohimgets/CommonsenseVision"
license: MIT
keywords:
- object detection
- commonsense reasoning
- YOLOv8
- CLIP
- LLaVA
- vision-language
- open-world AI
GitHub Events
Total
- Push event: 9
- Create event: 2
Last Year
- Push event: 9
- Create event: 2