object-detection-using-yolo5
Detect objects in images using YOLOv5, store results in PostgreSQL, and visualize detections with Matplotlib. 🚀 🔹 Run python object_detection.py after setup 🔹 Logs & database integration included
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.5%) to scientific vocabulary
Repository
Basic Info
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Object Detection Using YOLOv5
This repository contains a Python script for performing object detection on images using the YOLOv5 model. The detected objects are saved to a PostgreSQL database, and the results can be visualized using Matplotlib. The project is designed to process images from a specified directory, detect objects using YOLOv5, and store the results in a structured database for further analysis.
Table of Contents
- Features
- Requirements
- Installation
- Usage
- Database Schema
- Visualization
- Logs and Monitoring
- Next Steps
- Contributing
- License
Features
- Object Detection: Uses YOLOv5 to detect objects in images.
- Database Integration: Saves detection results (class label, confidence, bounding box coordinates) to a PostgreSQL database.
- Visualization: Visualizes detected objects with bounding boxes and labels on the images.
- Logging: Detailed logging for monitoring and debugging.
- Modular Code: Well-structured and modular code for easy maintenance and extension.
Requirements
- Python 3.8 or higher
- PostgreSQL database
- Required Python libraries:
  - opencv-python
  - torch
  - torchvision
  - sqlalchemy
  - matplotlib
Installation
1. Clone the repository:
   ```bash
   git clone https://github.com/Azazh/Object-Detection-Using-YOLO5.git
   cd object-detection-yolo
   ```
2. Set up a virtual environment:
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```
3. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
4. Set up the PostgreSQL database:
   - Create a database named `medical_dw` (or any name you prefer).
   - Update the `DB_CONNECTION` string in the script with your database credentials:
     ```python
     DB_CONNECTION = "postgresql://username:password@localhost:5432/medical_dw"
     ```
5. Download the YOLOv5 model: the script automatically downloads the model using `torch.hub`, so ensure you have an active internet connection during the first run.
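If your database password contains special characters such as `@` or `:`, they must be URL-encoded or the connection URL will not parse. A minimal standard-library sketch (the helper name and credential values here are illustrative placeholders, not the project's actual configuration):

```python
from urllib.parse import quote_plus

def build_db_connection(user: str, password: str, host: str, port: int, dbname: str) -> str:
    """Build a PostgreSQL connection URL, URL-encoding the credentials.

    Characters such as '@' or ':' in a password would otherwise break
    parsing of the URL by SQLAlchemy.
    """
    return (
        f"postgresql://{quote_plus(user)}:{quote_plus(password)}"
        f"@{host}:{port}/{dbname}"
    )

# A password containing '@' and ':' is encoded as '%40' and '%3A'
DB_CONNECTION = build_db_connection("username", "p@ss:word", "localhost", 5432, "medical_dw")
print(DB_CONNECTION)  # postgresql://username:p%40ss%3Aword@localhost:5432/medical_dw
```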
Usage
1. Prepare images:
   - Place your images in the `../raw_data/media` directory (or update the `IMAGE_DIR` variable in the script).
2. Run the object detection script:
   ```bash
   python object_detection.py
   ```
3. View results:
   - Detection results are saved in the `detection_results` table in the PostgreSQL database.
   - A sample visualization of the first image is displayed using Matplotlib.
Database Schema
The detection results are stored in the `detection_results` table with the following schema:

| Column | Type | Description |
|--------|------|-------------|
| id | Integer | Primary key (auto-increment). |
| image_path | String | Path to the image file. |
| class_label | String | Detected object class label. |
| confidence | Float | Confidence score of the detection. |
| x_min | Integer | Bounding box top-left x-coordinate. |
| y_min | Integer | Bounding box top-left y-coordinate. |
| x_max | Integer | Bounding box bottom-right x-coordinate. |
| y_max | Integer | Bounding box bottom-right y-coordinate. |
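The schema above can be expressed as DDL. For a self-contained illustration this sketch uses the standard library's `sqlite3` module rather than PostgreSQL (in PostgreSQL the primary key would typically be `SERIAL`), but the columns mirror the table above:

```python
import sqlite3

# SQLite stand-in for the PostgreSQL table described above.
SCHEMA = """
CREATE TABLE detection_results (
    id          INTEGER PRIMARY KEY AUTOINCREMENT,
    image_path  TEXT    NOT NULL,
    class_label TEXT    NOT NULL,
    confidence  REAL    NOT NULL,
    x_min       INTEGER NOT NULL,
    y_min       INTEGER NOT NULL,
    x_max       INTEGER NOT NULL,
    y_max       INTEGER NOT NULL
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute(
    "INSERT INTO detection_results "
    "(image_path, class_label, confidence, x_min, y_min, x_max, y_max) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("../raw_data/media/image1.jpg", "person", 0.92, 10, 20, 110, 220),
)
row = conn.execute("SELECT class_label, confidence FROM detection_results").fetchone()
print(row)  # ('person', 0.92)
```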
Visualization
The script includes a function to visualize the detected objects on the images. It draws bounding boxes and labels (with confidence scores) on the images using Matplotlib.
Example:

Logs and Monitoring
The script logs all activities to `object_detection.log`. You can monitor the logs for errors, warnings, and informational messages.
Example log:
2023-10-10 12:34:56,789 - INFO - Found 100 images for processing.
2023-10-10 12:35:10,123 - INFO - Detected 3 objects in ../raw_data/media/image1.jpg.
2023-10-10 12:35:15,456 - INFO - Saved 3 detections to database.
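A logging setup that produces records in the format shown above can be sketched with the standard library; the exact handler configuration in the project's script may differ:

```python
import logging

def configure_logging(log_path: str = "object_detection.log") -> logging.Logger:
    """Set up a logger whose output matches the sample lines above:
    '<timestamp> - <LEVEL> - <message>'."""
    logger = logging.getLogger("object_detection")
    logger.setLevel(logging.INFO)
    handler = logging.FileHandler(log_path)
    handler.setFormatter(
        logging.Formatter("%(asctime)s - %(levelname)s - %(message)s")
    )
    logger.addHandler(handler)
    return logger

logger = configure_logging()
logger.info("Found 100 images for processing.")
```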
Next Steps
- Fine-tune YOLO: Train the YOLOv5 model on custom datasets for better accuracy.
- Extend Visualization: Add support for visualizing multiple images and saving visualizations to disk.
- Integrate with Data Warehouse: Combine detection results with other data sources for comprehensive analysis.
- Add Unit Tests: Write unit tests for critical functions like `detect_objects` and `save_to_db`.
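As a starting point for the unit tests suggested above, here is a sketch that exercises a small pure helper. `filter_detections` is a hypothetical function, not part of the project's code; testing the real `detect_objects` and `save_to_db` would require model weights and a test database, so a pure confidence filter is an easier first target:

```python
import unittest

def filter_detections(detections, min_confidence=0.5):
    """Hypothetical helper: keep only detections at or above a confidence
    threshold. Each detection is a (class_label, confidence) pair."""
    return [d for d in detections if d[1] >= min_confidence]

class FilterDetectionsTest(unittest.TestCase):
    def test_drops_low_confidence(self):
        dets = [("person", 0.92), ("dog", 0.31), ("car", 0.50)]
        self.assertEqual(
            filter_detections(dets), [("person", 0.92), ("car", 0.50)]
        )

    def test_empty_input(self):
        self.assertEqual(filter_detections([]), [])

# Run with: python -m unittest <this_module>
```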
Contributing
Contributions are welcome! If you'd like to contribute, please follow these steps:
1. Fork the repository.
2. Create a new branch for your feature or bugfix.
3. Commit your changes.
4. Submit a pull request.
License
This project is licensed under the MIT License. See the LICENSE file for details.
Acknowledgments
- YOLOv5 by Ultralytics for the object detection model.
- SQLAlchemy for database operations.
- OpenCV for image processing.
- Matplotlib for visualization.
Owner
- Login: Azazh
- Kind: user
- Repositories: 1
- Profile: https://github.com/Azazh
Citation (CITATION.cff)
cff-version: 1.2.0
preferred-citation:
type: software
message: If you use YOLOv5, please cite it as below.
authors:
- family-names: Jocher
given-names: Glenn
orcid: "https://orcid.org/0000-0001-5950-6979"
title: "YOLOv5 by Ultralytics"
version: 7.0
doi: 10.5281/zenodo.3908559
date-released: 2020-5-29
license: AGPL-3.0
url: "https://github.com/ultralytics/yolov5"
GitHub Events
Total
- Delete event: 1
- Issue comment event: 6
- Push event: 4
- Pull request event: 4
- Create event: 2
Last Year
- Delete event: 1
- Issue comment event: 6
- Push event: 4
- Pull request event: 4
- Create event: 2
Dependencies
- actions/checkout v4 composite
- actions/setup-python v5 composite
- slackapi/slack-github-action v2.0.0 composite
- contributor-assistant/github-action v2.6.1 composite
- actions/checkout v4 composite
- docker/build-push-action v6 composite
- docker/login-action v3 composite
- docker/setup-buildx-action v3 composite
- docker/setup-qemu-action v3 composite
- ultralytics/actions main composite
- actions/checkout v4 composite
- ultralytics/actions/retry main composite
- actions/checkout v4 composite
- actions/setup-python v5 composite
- actions/stale v9 composite
- pytorch/pytorch 2.0.0-cuda11.7-cudnn8-runtime build
- gcr.io/google-appengine/python latest build
- matplotlib >=3.3.0
- numpy >=1.22.2
- opencv-python >=4.6.0
- pandas >=1.1.4
- pillow >=7.1.2
- psutil *
- py-cpuinfo *
- pyyaml >=5.3.1
- requests >=2.23.0
- scipy >=1.4.1
- seaborn >=0.11.0
- thop >=0.1.1
- torch >=1.8.0
- torchvision >=0.9.0
- tqdm >=4.64.0
- ultralytics >=8.1.47
- PyYAML >=5.3.1
- gitpython >=3.1.30
- matplotlib >=3.3
- numpy >=1.23.5
- opencv-python >=4.1.1
- pandas >=1.1.4
- pillow >=10.3.0
- psutil *
- requests >=2.32.2
- scipy >=1.4.1
- seaborn >=0.11.0
- setuptools >=70.0.0
- thop >=0.1.1
- torchvision >=0.9.0
- tqdm >=4.66.3
- Flask ==2.3.2
- gunicorn ==22.0.0
- pip ==23.3
- werkzeug >=3.0.1
- zipp >=3.19.1