yolov5-pine_tree-object-detection
https://github.com/fatemeh986/yolov5-pine_tree-object-detection
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (8.0%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: fatemeh986
- License: agpl-3.0
- Language: Python
- Default Branch: master
- Size: 23.9 MB
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 1
- Releases: 0
Metadata Files
README.md
Pine Tree Detection with YOLOv5
A lightweight pipeline for detecting pine trees in RGB images using YOLOv5. This repository covers:
- Converting Mask R-CNN JSON annotations to YOLO format
- Organizing a custom dataset for YOLOv5
- Running a series of training experiments
- Comparing results (mAP, recall, precision, confusion matrix)
- Analyzing performance and outlining next steps
📂 Repository Structure
```
.
├── data
│   ├── train
│   │   ├── images
│   │   └── labels
│   └── validations
│       ├── images
│       └── labels
├── yolov5
│   ├── train.py
│   ├── val.py
│   ├── detect.py
│   ├── dataset.yml
│   └── … (YOLOv5 code & configs)
├── convert_to_yolo.py   # Script to turn Mask R-CNN JSON → YOLO TXT
└── README.md
```
🗂 Dataset
One class: `pine_tree` (class 0). Structure (root: `data/`):

```
data/
├─ train/images
├─ train/labels          ← YOLO TXT files
├─ validations/images
└─ validations/labels
```
`yolov5/dataset.yml`:

```yaml
# root: yolov5/
path: ../data
train: train/images
val: validations/images
nc: 1
names:
  0: pine_tree
```
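Before training it can help to verify that every image in a split has a matching YOLO label file, since silent mismatches just shrink the effective dataset. The helper below is a small sketch, not part of this repo; `check_split` is a hypothetical name, and it assumes the `data/<split>/images` + `data/<split>/labels` layout shown above with matching file stems.

```python
from pathlib import Path

def check_split(split_dir):
    """Return (images_without_labels, labels_without_images) for one split.

    Assumes the layout above: <split>/images/*.jpg|png and
    <split>/labels/*.txt, where an image and its label share a stem.
    """
    split_dir = Path(split_dir)
    images = {p.stem for p in (split_dir / "images").glob("*")
              if p.suffix.lower() in {".jpg", ".jpeg", ".png"}}
    labels = {p.stem for p in (split_dir / "labels").glob("*.txt")}
    return sorted(images - labels), sorted(labels - images)

# Example:
# missing, orphans = check_split("data/train")
```

Running this over both `data/train` and `data/validations` before launching `train.py` catches renamed or forgotten annotation files early.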
🔄 Annotation Conversion
Use convert_to_yolo.py to turn your Mask R-CNN JSONs into YOLO .txt label files:
```bash
python convert_to_yolo.py \
  --json_dir data/train/annots \
  --labels_dir data/train/labels

python convert_to_yolo.py \
  --json_dir data/validations/annots \
  --labels_dir data/validations/labels
```
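The exact JSON schema depends on how the Mask R-CNN annotations were exported, but the core of any such conversion is the same: reduce each object's pixel-space polygon to an axis-aligned box and normalize it to YOLO's `class x_center y_center width height` line format. A minimal sketch (the function name and polygon input are assumptions, not the repo's actual script):

```python
def polygon_to_yolo(points, img_w, img_h, cls=0):
    """Convert a pixel-space polygon [(x, y), ...] to one YOLO label line.

    YOLO format: "<class> <x_center> <y_center> <width> <height>",
    with all four coordinates normalized to [0, 1] by image size.
    """
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    x_c = (x_min + x_max) / 2 / img_w   # box center, normalized
    y_c = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w         # box size, normalized
    h = (y_max - y_min) / img_h
    return f"{cls} {x_c:.6f} {y_c:.6f} {w:.6f} {h:.6f}"
```

For example, a 100×200 px box at the top-left of a 200×400 image becomes `0 0.3 0.3 0.5 0.5`; one such line is written per object into the image's `.txt` file.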
⚙️ Training Experiments
We ran six distinct experiments, each saved under runs/train/<name>/:
| Name | Model | Key Trick |
| --------------- | -------- | ---------------------------------------- |
| pine_tree_run | YOLOv5-s | default training (50 epochs) |
| exp_freeze | YOLOv5-s | first 30 ep freezing backbone → +20 ep |
| exp_unfreeze | YOLOv5-s | fine-tune all layers |
| exp_anchor | YOLOv5-s | (auto)recomputed anchors |
| exp_evolve | YOLOv5-s | hyperparameter evolution (--evolve) |
| exp_yolov5m2 | YOLOv5-m | medium model + best exp_evolve hparams |
Each was launched with a one-liner, e.g.:
```bash
python yolov5/train.py \
  --weights yolov5s.pt \
  --data dataset.yml \
  --img 640 \
  --batch 16 \
  --epochs 50 \
  --name exp_anchor
```
📊 Results
Quantitative Metrics
| Experiment | Precision | Recall | mAP@50 | mAP@50-95 | “Accuracy” (TP/(TP+FP+FN)) |
| --------------- | :-------: | :----: | :-----: | :--------: | :------------------------: |
| pine_tree_run | 0.664 | 0.600 | 0.646 | 0.417 | 46.2 % |
| exp_yolov5m2 | — | 0.830 | — | — | 41.5 % |
(Detailed logs are in each run’s results.txt.)
Best Confusion Matrix

- True Positive Rate (recall): 83 %
- False Negative Rate: 17 %
- False Positive Rate (background→pine): 100 %
- Implicit background class yields a 2×2 matrix for single-class detection.
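With one foreground class plus the implicit background class, the column-normalized 2×2 matrix behaves in a way worth spelling out: there are no ground-truth "background" boxes, so every prediction falling in the background column is a false positive, and that column always reads 100 % whenever any false positive exists. A small sketch with hypothetical counts (the function name is ours, not YOLOv5's):

```python
def two_class_rates(tp, fn, fp):
    """Rates for a single-class detector's 2x2 confusion matrix.

    tp: detections matched to a ground-truth pine_tree box
    fn: ground-truth boxes with no matching detection
    fp: detections with no ground-truth match (fired on background)
    """
    recall = tp / (tp + fn)   # true positive rate (pine_tree column)
    fnr = fn / (tp + fn)      # false negative rate
    # No labeled background boxes exist, so the background column is
    # entirely false positives -> its rate is 1.0 whenever fp > 0.
    bg_fp_rate = 1.0 if fp > 0 else 0.0
    return recall, fnr, bg_fp_rate
```

With, say, 83 matched boxes, 17 misses, and any nonzero number of background detections, this reproduces the 83 % / 17 % / 100 % pattern above.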
🖼️ Example Detections
Here are two sample detection outputs:


🔍 Analysis
- Dataset Challenges
  - Size & diversity: only ~150 training images with limited backgrounds
  - Image quality: some images are low-contrast or blurred
  - Single-class, implicit negatives: no explicit “background” boxes
- Model Capacity & Augmentation
  - Upgrading to YOLOv5-m and applying evolved hyperparameters boosted recall from 60 % → 83 %.
  - Precision remains limited by high false positives on complex backgrounds.
- Detection “Accuracy” vs. Classification
  - In object detection we measure precision/recall/mAP; the “accuracy” here is TP/(TP+FP+FN) ≈ 41 %.
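This "accuracy" can be recovered directly from precision and recall: since TP+FP = TP/P and TP+FN = TP/R, we get TP/(TP+FP+FN) = 1/(1/P + 1/R - 1). A quick check against the pine_tree_run numbers in the table above:

```python
def detection_accuracy(precision, recall):
    """TP / (TP + FP + FN), expressed via precision and recall.

    TP + FP = TP / precision and TP + FN = TP / recall, so
    TP / (TP + FP + FN) = 1 / (1/precision + 1/recall - 1).
    """
    return 1.0 / (1.0 / precision + 1.0 / recall - 1.0)

# pine_tree_run: precision 0.664, recall 0.600 -> about 0.46,
# consistent with the ~46 % reported in the results table.
```

The same identity explains why exp_yolov5m2's higher recall does not translate into a higher "accuracy": its extra false positives pull the denominator up.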
🚀 Next Steps
- Expand & diversify data: collect more scenes (different lighting, angles, seasons).
- Improve annotations: tighter, more consistent bounding boxes; consider adding a few explicit “background” crops for negative sampling.
- Multi-spectral inputs: include NIR or depth channels if available.
- Advanced augmentation: copy-paste, stronger color/scale/perspective variants.
- Ensemble & TTA: combine multiple runs or test-time augmentations for gain.
▶️ Usage
1. Install dependencies

```bash
pip install -r yolov5/requirements.txt
```

2. Convert annotations

```bash
python convert_to_yolo.py ...
```

3. Train

```bash
python yolov5/train.py --data yolov5/dataset.yml --name your_experiment
```

4. Validate

```bash
python yolov5/val.py --weights runs/train/your_experiment/weights/best.pt \
  --data yolov5/dataset.yml \
  --save-conf --verbose --name val_your_experiment
```

5. Detect

```bash
python yolov5/detect.py --weights runs/train/your_experiment/weights/best.pt \
  --source path/to/images \
  --save-txt --name detect_your_experiment
```
Acknowledgments
- Ultralytics YOLOv5
- Early experiments with Mask R-CNN annotations
Owner
- Name: Fatemeh Karamian
- Login: fatemeh986
- Kind: user
- Repositories: 1
- Profile: https://github.com/fatemeh986
Citation (CITATION.cff)
cff-version: 1.2.0
preferred-citation:
  type: software
  message: If you use YOLOv5, please cite it as below.
  authors:
    - family-names: Jocher
      given-names: Glenn
      orcid: "https://orcid.org/0000-0001-5950-6979"
  title: "YOLOv5 by Ultralytics"
  version: 7.0
  doi: 10.5281/zenodo.3908559
  date-released: 2020-5-29
  license: AGPL-3.0
  url: "https://github.com/ultralytics/yolov5"
GitHub Events
Total
- Issue comment event: 2
- Push event: 6
- Create event: 3
Last Year
- Issue comment event: 2
- Push event: 6
- Create event: 3
Dependencies
- actions/checkout v4 composite
- actions/setup-python v5 composite
- astral-sh/setup-uv v6 composite
- slackapi/slack-github-action v2.1.0 composite
- contributor-assistant/github-action v2.6.1 composite
- actions/checkout v4 composite
- docker/build-push-action v6 composite
- docker/login-action v3 composite
- docker/setup-buildx-action v3 composite
- docker/setup-qemu-action v3 composite
- ultralytics/actions main composite
- actions/checkout v4 composite
- ultralytics/actions/retry main composite
- actions/checkout v4 composite
- actions/setup-python v5 composite
- actions/stale v9 composite
- pytorch/pytorch 2.0.0-cuda11.7-cudnn8-runtime build
- gcr.io/google-appengine/python latest build
- matplotlib >=3.3.0
- numpy >=1.22.2
- opencv-python >=4.6.0
- pandas >=1.1.4
- pillow >=7.1.2
- psutil *
- py-cpuinfo *
- pyyaml >=5.3.1
- requests >=2.23.0
- scipy >=1.4.1
- seaborn >=0.11.0
- thop >=0.1.1
- torch >=1.8.0
- torchvision >=0.9.0
- tqdm >=4.64.0
- ultralytics >=8.2.64
- PyYAML >=5.3.1
- gitpython >=3.1.30
- matplotlib >=3.3
- numpy >=1.23.5
- opencv-python >=4.1.1
- pandas >=1.1.4
- pillow >=10.3.0
- psutil *
- requests >=2.32.2
- scipy >=1.4.1
- seaborn >=0.11.0
- setuptools >=70.0.0
- thop >=0.1.1
- torchvision >=0.9.0
- tqdm >=4.66.3
- Flask ==2.3.2
- gunicorn ==23.0.0
- pip ==23.3
- werkzeug >=3.0.1
- zipp >=3.19.1