full_yolov11_tutorial

Mainly serves as guidance for learning about YOLOv11 setup. Also a backup for NAME Project

https://github.com/dannyboy849/full_yolov11_tutorial

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.3%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: dannyboy849
  • License: MIT
  • Language: Dockerfile
  • Default Branch: main
  • Size: 257 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created about 1 year ago · Last pushed 8 months ago
Metadata Files
Readme License Citation

README.md

This Repository Serves As A Start-To-Finish Guide For Convolutional Neural Network Data Training (via YOLOv11)

For the most part, the work described below happens in your bash terminal, but modifying any of the files inside YOLO is done in Python.

I, Daniel Vargas, am the sole creator of this repository, so if anything is missing or needs clarification, please feel free to reach out to me at dvargas88@ou.edu. Please note that I am a Mechanical Engineer by training, and I had to learn about programming, Docker, and CNNs in 3 months. So, have mercy if there are any mistakes. Please notify me if there are issues or if a section is misleading, as I always appreciate feedback and criticism. Anyways, hope this helps - good luck and happy data training!!

It's also very important to note that this was all performed on an NVIDIA GPU. I am unaware how this affects the following instructions for AMD, but I don't imagine it would change much beyond GPU-usage tracking and setting up a GPU-enabled Docker container.
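For the GPU-enabled container mentioned above, a minimal Dockerfile sketch is shown below. The base image matches the `pytorch/pytorch 2.6.0-cuda12.6-cudnn9-runtime` dependency listed for this repository; the `ultralytics` install is an assumption about how YOLOv11 would be pulled in, not necessarily this repo's exact setup.

```dockerfile
# Base image with PyTorch 2.6.0, CUDA 12.6, and cuDNN 9
# (matches the Dockerfile dependency listed for this repository)
FROM pytorch/pytorch:2.6.0-cuda12.6-cudnn9-runtime

# Ultralytics provides the YOLOv11 training/inference API (assumed install)
RUN pip install --no-cache-dir ultralytics

WORKDIR /workspace
```

On an NVIDIA host with the Container Toolkit installed, the container would then be started with GPU access via something like `docker run --gpus all -it <image>`.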


If you use this repository for guidance, please cite this repo

First, I would like to give credit to my co-researcher, Mathis Morales, for their repository that helped tremendously with the setup of the Docker container, as well as their advice throughout the project!

```bash
git clone https://github.com/airou-lab/Docker-GPU-Tutorial.git
```

Recommended Order Of Completion

Step 1. Getting_Started

Step 2. Docker_Installation

Step 3. Image_Conversion (Optional)

Do this to convert your video (.mp4) to images (.png) to feed into CVAT for data_annotation

Step 4. YOLOv11

Step 5. Hyperparameterization Automation (Optional)

Do this if you want to find the optimal hyperparameters for your model.

Done! Congratulations On Your Hard Work!
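One common way to implement Step 3's video-to-image conversion is with ffmpeg. The helper below is an illustrative sketch (the function name, frame-naming pattern, and default frame rate are my assumptions, not this repo's code):

```python
from pathlib import Path

def frames_command(video: str, out_dir: str, fps: int = 5) -> list[str]:
    """Build an ffmpeg command that dumps `video` as zero-padded .png frames."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    out_pattern = str(Path(out_dir) / "frame_%06d.png")
    # -vf fps=N samples N frames per second of video
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}", out_pattern]
```

You would execute the returned command with `subprocess.run(frames_command("clip.mp4", "frames"), check=True)`, then upload the resulting frames to CVAT for annotation.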
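Step 5's hyperparameter search can be sketched as random sampling over bounded ranges, with each sample fed into a training run and scored. The parameter names follow Ultralytics conventions (`lr0`, `momentum`, `weight_decay`), but the ranges and the sampler itself are illustrative assumptions, not this repo's automation code:

```python
import random

# Illustrative search space; bounds are assumptions, not tuned values
SEARCH_SPACE = {
    "lr0": (1e-4, 1e-2),          # initial learning rate
    "momentum": (0.85, 0.95),     # SGD momentum
    "weight_decay": (1e-5, 1e-3), # L2 regularization strength
}

def sample_hyperparameters(seed=None) -> dict:
    """Draw one random candidate from SEARCH_SPACE."""
    rng = random.Random(seed)
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in SEARCH_SPACE.items()}
```

In an actual search loop, each sampled dict would be passed to a training run and the candidate with the best validation mAP kept.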


Daniel's Project Summary

This is the repository I originally made for a research project called * link inserted if paper is accepted *. If the link is not inserted, the paper is still under review; I am not allowed to discuss the project details, as it is an anonymous submission. I have uploaded some of my experimental results, and I'm treating this repository as a backup for my files, but also as a free dataset for anyone who wants to work with it. If the experiment folder is also lacking data, this is due to the reason mentioned earlier. But again, if anything is missing or needs clarification, please feel free to reach out at dvargas88@ou.edu.

Summary Results

For YOLOv5

  • Our results for Camera 1 were 54% precision for tracking and classification and 23.4% accuracy in IoU (mAP-95)

  • For Camera 3, our accuracy was 64% for tracking and classification and 27.3% in IoU (mAP-95)

  • As you may notice, these results are low. On top of that, these aren't even our camera feeds combined together yet. As of now, we believe this is due to our parameters, as well as the data input. We will feed it our full 9,600 images instead of the half we have been using up until now. We will also switch to YOLOv11, as YOLOv5 is now 5 years old.

  • I will be testing on YOLOv11 soon - working on a very complicated installation.

Update!

  • Complicated installation completed! We now have an accuracy of 71.4% using simultaneous tracking-and-classification for both camera angles! BUT, our tracking-THEN-classification achieved 87.5% accuracy! Hooray for StackExchange!
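For readers unfamiliar with the IoU metric behind the mAP numbers above, the standard intersection-over-union computation for two axis-aligned boxes is shown below (a textbook formula, not code from this repository):

```python
def iou(box_a, box_b) -> float:
    """IoU of two boxes given as (x1, y1, x2, y2) corner coordinates."""
    # Overlap rectangle; width/height clamp to 0 when boxes don't intersect
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

mAP-95 style metrics average precision over a sweep of IoU thresholds, so a higher IoU between predictions and ground-truth boxes directly improves the reported score.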

Owner

  • Name: Daniel
  • Login: dannyboy849
  • Kind: user
  • Company: @airou-lab

University of Oklahoma - Graduate Robotics and Drone Researcher

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: Sparrow3D Dataset
message: >-
  If you use this dataset, please cite it using the metadata
  from this file.
type: dataset
authors:
  - given-names: Daniel
    family-names: Vargas
    email: dvargas88@ou.edu
    affiliation: Student
    orcid: 'https://orcid.org/0009-0000-2877-847X'
repository-code: 'https://github.com/dannyboy849/Full_YOLOv11_Tutorial'
abstract: >-
  Serves as guidance for learning about YOLOv11 setup, our
  annotated house sparrow dataset, and the sequence to
  recreate our application.
license: CC-BY-4.0
commit: Preliminary Conference Paper Submission Dataset
version: 1.0.1
date-released: '2025-07-13'

GitHub Events

Total
  • Push event: 126
Last Year
  • Push event: 126

Dependencies

Docker_Installation/Dockerfile docker
  • pytorch/pytorch 2.6.0-cuda12.6-cudnn9-runtime build