commonsensevision

Open-world object detection using YOLOv8 + CLIP + LLaVA + GPT-based trait reasoning to detect objects based on intent and scene context.

https://github.com/ibrohimgets/commonsensevision

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.6%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: ibrohimgets
  • Language: Python
  • Default Branch: main
  • Size: 263 KB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 10 months ago · Last pushed 10 months ago
Metadata Files
  • README.md
  • CITATION.cff

README.md

🧠 Commonsense-Guided Open-World Object Detection

This project combines YOLOv8, CLIP, LLaVA, and GPT-4-style trait reasoning to detect objects based on user intent and scene context — even if the object is unseen during training.

Example: the prompt “I need something to write with” matches the trait group “pen/pencil”, and the pen is detected in the scene even though it was not part of YOLO’s original label set.


🔍 Pipeline Overview

  1. YOLOv8 generates candidate bounding boxes.
  2. LLaVA produces a detailed description of the image.
  3. The user prompt is mapped to object trait groups with a SentenceTransformer.
  4. CLIP compares each region crop against the intent-driven traits.
  5. Bounding boxes are filtered by combining CLIP similarity, the LLaVA description, and commonsense reasoning.
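Steps 3–4 above (prompt → trait group → region scoring) can be sketched as embedding similarity. The real pipeline uses SentenceTransformer and CLIP embeddings; in this minimal sketch a toy bag-of-words vector and a hypothetical `TRAIT_GROUPS` vocabulary stand in, so only the control flow (embed, compare by cosine similarity, pick the best group) reflects the actual system:

```python
# Sketch of intent-to-trait matching. Assumptions: TRAIT_GROUPS is a
# hypothetical vocabulary, and embed() is a bag-of-words stand-in for a
# SentenceTransformer / CLIP text encoder.
import math
from collections import Counter

TRAIT_GROUPS = {
    "pen/pencil": "write draw ink graphite stationery",
    "cup/mug": "drink coffee tea container handle",
    "knife/scissors": "cut sharp blade tool",
}

def embed(text: str) -> Counter:
    # Toy embedding: word-count vector (a real encoder returns dense vectors).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def match_trait_group(prompt: str) -> str:
    # Score the prompt against every trait group; return the best match.
    scores = {g: cosine(embed(prompt), embed(desc))
              for g, desc in TRAIT_GROUPS.items()}
    return max(scores, key=scores.get)

print(match_trait_group("I need something to write with"))  # → pen/pencil
```

In the full pipeline the same comparison is then repeated with CLIP between each YOLOv8 region crop and the matched trait group, and low-similarity boxes are discarded.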

Architecture Diagram


🖼️ Example

📝 Prompt: I need something to write with
🔍 Matched group: pen/pencil
📦 Detected in image → pen with bounding box


📌 Inference & Evaluation

We provide an interactive pipeline for commonsense-driven object detection.
To run the full detection process with prompt input and image reasoning, execute:

```bash
python main.py
```

📖 Citation

```bibtex
@misc{iibrohimm2025commonsenseOD,
  title={Commonsense-Guided Open-World Object Detection Using LLMs and Visual-Semantic Reasoning},
  author={Muminov, Ibrohim and Kim, Jihie},
  howpublished={\url{https://github.com/ibrohimgets/CommonsenseVision}},
  year={2025}
}
```

Owner

  • Name: Ibrohim
  • Login: ibrohimgets
  • Kind: user
  • Location: Beijing, China
  • Company: Robotis

My name is Ibrohim Muminov. I'm a front-end developer who loves coding and really enjoys building new projects.

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Commonsense-Guided Open-World Object Detection"
authors:
  - family-names: Muminov
    given-names: Ibrohim
  - family-names: Kim
    given-names: Jihie
date-released: 2025-04-30
url: "https://github.com/ibrohimgets/CommonsenseVision"
version: "1.0.0"
repository-code: "https://github.com/ibrohimgets/CommonsenseVision"
license: MIT
keywords:
  - object detection
  - commonsense reasoning
  - YOLOv8
  - CLIP
  - LLaVA
  - vision-language
  - open-world AI

GitHub Events

Total
  • Push event: 9
  • Create event: 2
Last Year
  • Push event: 9
  • Create event: 2