https://github.com/ammarlodhi255/yolov10-fracture-detection
This repository contains the official code for the paper "Pediatric Wrist Fracture Detection in X-rays via YOLOv10 Algorithm and Dual Label Assignment System"
Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ✓ .zenodo.json file (found .zenodo.json file)
- ✓ DOI references (found 3 DOI reference(s) in README)
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity (8.4%) to scientific vocabulary)
Keywords
Repository
This repository contains the official code for the paper "Pediatric Wrist Fracture Detection in X-rays via YOLOv10 Algorithm and Dual Label Assignment System"
Basic Info
- Host: GitHub
- Owner: ammarlodhi255
- License: MIT
- Language: Python
- Default Branch: main
- Homepage: https://arxiv.org/abs/2407.15689
- Size: 7.05 MB
Statistics
- Stars: 6
- Watchers: 1
- Forks: 2
- Open Issues: 1
- Releases: 0
Topics
Metadata Files
README.md
Pediatric Wrist Fracture Detection in X-rays via YOLOv10 Algorithm and Dual Label Assignment System
Paper URL: https://arxiv.org/abs/2407.15689
Wrist fractures are highly prevalent among children and can significantly impact their daily activities, such as attending school, participating in sports, and performing basic self-care tasks. If not treated properly, these fractures can result in chronic pain, reduced wrist functionality, and other long-term complications. Recently, advancements in object detection have shown promise in enhancing fracture detection, with systems achieving accuracy comparable to, or even surpassing, that of human radiologists. The YOLO series, in particular, has demonstrated notable success in this domain. This study is the first to provide a thorough evaluation of various YOLOv10 variants to assess their performance in detecting pediatric wrist fractures using the GRAZPEDWRI-DX dataset. It investigates how changes in model complexity, scaling the architecture, and implementing a dual-label assignment strategy can enhance detection performance. Experimental results indicate that our trained model achieved a mean average precision (mAP@50-95) of 51.9%, surpassing the current YOLOv9 benchmark of 43.3% on this dataset. This represents an improvement of 8.6 percentage points.
Overall Model Architecture
Performance Comparison YOLOv9 vs YOLOv10
| Variant | mAP@50 (%) | mAP@50-95 (%) | F1 (%) | Params (M) | FLOPs (G) |
| :-------: | :--------: | :-----------: | :----: | :--------: | :-------: |
| YOLOv9-C | 65.3 | 42.7 | 64.0 | 51.0 | 239.0 |
| YOLOv9-E | 65.5 | 43.3 | 64.0 | 69.4 | 244.9 |
| YOLOv9-C' | 66.2 | 45.2 | 66.7 | 25.3 | 102.4 |
| YOLOv9-E' | 67.0 | 44.9 | 70.9 | 57.4 | 189.2 |
| YOLOv10-N | 59.5 | 39.1 | 63.0 | 2.7 | 8.2 |
| YOLOv10-S | 76.1 | 51.7 | 67.5 | 8.0 | 24.5 |
| YOLOv10-M | 75.9 | 51.9 | 67.8 | 16.5 | 63.5 |
| YOLOv10-L | 70.9 | 46.6 | 68.7 | 25.7 | 126.4 |
| YOLOv10-X | 76.2 | 48.2 | 69.8 | 31.6 | 169.9 |
Requirements
- Linux (Ubuntu)
- Python = 3.12
- PyTorch = 2.3
- NVIDIA GPU + CUDA CuDNN
Environment
pip install -r requirements.txt
Dataset Split
GRAZPEDWRI-DX Dataset (Download Link)
Download the dataset and place the images and annotations into ./GRAZPEDWRI-DX_dataset/data/images and ./GRAZPEDWRI-DX_dataset/data/labels. Since the authors of the dataset did not provide a split, we randomly partitioned the dataset into a training set of 15,245 images (75%), a validation set of 4,066 images (20%), and a testing set of 1,016 images (5%).
python split.py
- The dataset is divided into training, validation, and testing sets (75-20-5%).
The script will then move the files into the corresponding folders, as represented below.
GRAZPEDWRI-DX_dataset
└── data
    ├── images
    │   ├── train
    │   │   ├── train_img1.png
    │   │   ├── train_img2.png
    │   │   └── ...
    │   ├── val
    │   │   ├── val_img1.png
    │   │   ├── val_img2.png
    │   │   └── ...
    │   └── test
    │       ├── test_img1.png
    │       ├── test_img2.png
    │       └── ...
    └── labels
        ├── train
        │   ├── train_annotation1.txt
        │   ├── train_annotation2.txt
        │   └── ...
        ├── val
        │   ├── val_annotation1.txt
        │   ├── val_annotation2.txt
        │   └── ...
        └── test
            ├── test_annotation1.txt
            ├── test_annotation2.txt
            └── ...
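The split step can be sketched roughly as follows. This is a minimal illustration, not the repository's actual `split.py`; it assumes the images and labels start out flat in `images/` and `labels/` and share base filenames:

```python
import random
import shutil
from pathlib import Path

def split_dataset(root, ratios=(0.75, 0.20, 0.05), seed=42):
    """Randomly partition image/label pairs into train/val/test subfolders.

    Illustrative sketch: assumes `root/images/*.png` and matching
    `root/labels/*.txt` files before splitting.
    """
    root = Path(root)
    images = sorted((root / "images").glob("*.png"))
    random.Random(seed).shuffle(images)

    n_train = int(len(images) * ratios[0])
    n_val = int(len(images) * ratios[1])
    splits = {
        "train": images[:n_train],
        "val": images[n_train:n_train + n_val],
        "test": images[n_train + n_val:],
    }

    for split, files in splits.items():
        (root / "images" / split).mkdir(parents=True, exist_ok=True)
        (root / "labels" / split).mkdir(parents=True, exist_ok=True)
        for img in files:
            # Move the image, then its annotation file if one exists
            shutil.move(str(img), str(root / "images" / split / img.name))
            label = root / "labels" / (img.stem + ".txt")
            if label.exists():
                shutil.move(str(label), str(root / "labels" / split / label.name))
    return {k: len(v) for k, v in splits.items()}
```

Fixing the random seed makes the partition reproducible, which matters when comparing model variants on the same split.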
Weights
You can download the weights of YOLOv10 and YOLOv9 trained on the GRAZPEDWRI-DX dataset from the following link and use them directly in your applications.
- Weights (Download Link)
Train & Validate
Before training the model, make sure the path to the data in the ./data/meta.yaml file is correct.
- meta.yaml
names:
- boneanomaly
- bonelesion
- foreignbody
- fracture
- metal
- periostealreaction
- pronatorsign
- softtissue
- text
nc: 9
path: data/GRAZPEDWRI-DX/data/images
train: data/GRAZPEDWRI-DX/data/images/train
val: data/GRAZPEDWRI-DX/data/images/valid
test: data/GRAZPEDWRI-DX/data/images/test
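Each annotation `.txt` file follows the standard YOLO format: one object per line, consisting of a class index (into the `names` list above) followed by a normalized center x, center y, width, and height. As a minimal sketch (the function name is illustrative, not part of the repository), such a line can be decoded back to pixel coordinates like this:

```python
# GRAZPEDWRI-DX class names, in the order given in meta.yaml
NAMES = [
    "boneanomaly", "bonelesion", "foreignbody", "fracture", "metal",
    "periostealreaction", "pronatorsign", "softtissue", "text",
]

def parse_yolo_label(line, img_w, img_h):
    """Convert one YOLO-format label line to (class_name, (x1, y1, x2, y2)) in pixels."""
    fields = line.split()
    cls = int(fields[0])
    cx, cy, w, h = (float(v) for v in fields[1:])
    # Normalized center/size -> absolute corner coordinates
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return NAMES[cls], (x1, y1, x2, y2)
```

Because the coordinates are normalized, the same label file is valid at any training resolution (e.g. the `imgsz=640` used below).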
- Arguments
| Key | Value | Description |
| :-----: | :-------: | :---------------------------------------------------------: |
| workers | 8 | number of worker threads for data loading (per RANK if DDP) |
| device | 0 | device to run on, i.e. device=0,1,2,3 or device=cpu |
| model | None | path to model file, i.e. yolov10n.pt, yolov10n.yaml |
| batch | 32 | number of images per batch (-1 for AutoBatch) |
| data | data.yaml | path to data file, i.e. coco128.yaml |
| img | 640 | size of input images as integer, i.e. 640, 1024 |
| cfg | yolo.yaml | path to model.yaml, i.e. yolov10n.yaml |
| weights | None | initial weights path |
| name | exp | save to project/name |
| epochs | 100 | number of epochs to train for |
- Example
```
from ultralytics import YOLO

model = YOLO("yolov10x.pt")
results = model.train(data='dataset/meta.yaml', epochs=100, imgsz=640, batch=32, name='x')
```
Citation
If you find our paper useful in your research, please consider citing:
```
@article{ahmed2024pediatric,
  title   = {Pediatric Wrist Fracture Detection in X-rays via YOLOv10 Algorithm and Dual Label Assignment System},
  author  = {Ahmed, Ammar and Manaf, Abdul},
  year    = {2024},
  journal = {arXiv},
  eprint  = {2407.15689},
  note    = {arXiv:2407.15689},
  url     = {https://doi.org/10.48550/arXiv.2407.15689},
  doi     = {10.48550/arXiv.2407.15689}
}
```
Owner
- Name: Ammar Ahmed
- Login: ammarlodhi255
- Kind: user
- Location: Sukkur, Pakistan
- Website: https://www.youtube.com/channel/UCAh8QVO85NLQGj_RhYoTU1w/videos
- Repositories: 9
- Profile: https://github.com/ammarlodhi255
A computer scientist at heart, interested in AI, software development, and space.
GitHub Events
Total
- Watch event: 2
Last Year
- Watch event: 2
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 1
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 1
- Pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- hwjung92 (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- pytorch/pytorch 2.2.0-cuda12.1-cudnn8-runtime build
- PyYAML ==6.0.1
- gradio ==4.31.5
- huggingface-hub ==0.23.2
- onnx ==1.14.0
- onnxruntime ==1.15.1
- onnxruntime-gpu ==1.18.0
- onnxsim ==0.4.36
- opencv-python ==4.9.0.80
- psutil ==5.9.8
- py-cpuinfo ==9.0.0
- pycocotools ==2.0.7
- safetensors ==0.4.3
- scipy ==1.13.0
- torch ==2.0.1
- torchvision ==0.15.2
- matplotlib >=3.3.0
- opencv-python >=4.6.0
- pandas >=1.1.4
- pillow >=7.1.2
- psutil *
- py-cpuinfo *
- pyyaml >=5.3.1
- requests >=2.23.0
- scipy >=1.4.1
- seaborn >=0.11.0
- thop >=0.1.1
- torch >=1.8.0
- torchvision >=0.9.0
- tqdm >=4.64.0