computervision
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (10.0%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: huuphanjr
- License: agpl-3.0
- Language: Python
- Default Branch: main
- Size: 357 KB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
YOLOv5s-based Self-Checkout Fruit Detection System
GUI demonstration
Installation guide
Step 1: Set Up the YOLOv5s environment
Install the necessary dependencies as outlined in the repository's README, then clone the YOLOv5 repository to your local machine following the provided instructions.
Step 2: Prepare annotations in YOLOv5s-compatible format
Convert your annotations, whatever their original format (e.g., Pascal VOC, COCO), into the YOLOv5s-compatible format. Use the labelimg2yolo.py script available in the repository under the relevant directory; if your annotations are in a different format, adapt the script accordingly.
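The core of such a conversion is mapping absolute corner coordinates to the normalized center/size values YOLO expects. A minimal sketch (the function name is illustrative, not the actual labelimg2yolo.py API):

```python
def voc_to_yolo(box, img_w, img_h):
    """Convert a Pascal VOC box (xmin, ymin, xmax, ymax) in pixels
    to YOLO format: normalized (x_center, y_center, width, height)."""
    xmin, ymin, xmax, ymax = box
    x_c = (xmin + xmax) / 2.0 / img_w
    y_c = (ymin + ymax) / 2.0 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return x_c, y_c, w, h

# Example: a 100x200-pixel box in the top-left corner of a 640x480 image.
print(voc_to_yolo((0, 0, 100, 200), 640, 480))
```

Each line of a YOLO label file is then `class_id x_center y_center width height`, one object per line.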
Step 3: Define data configuration
Create a YAML file (e.g., data.yaml) in the repository's data folder. In it, specify the paths to the training and validation image folders, and include the total number of fruit classes present in your dataset.
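Such a data.yaml might look like the following (the paths and class names here are illustrative placeholders, not taken from the repository):

```yaml
# data/data.yaml: dataset configuration for YOLOv5 training
train: ../datasets/fruits/images/train  # path to training images
val: ../datasets/fruits/images/val      # path to validation images
nc: 3                                   # total number of fruit classes
names: ["apple", "banana", "orange"]    # class names, one per index
```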
Step 4: Initiate training of the fruit detection model
Open the train.py script located in the root directory of the YOLOv5s repository. Customize training settings such as model architecture, batch size, and learning rate, either by editing the script's arguments or by passing them on the command line. Start training by executing the following command:
python train.py --data data/data.yaml --cfg models/yolov5s.yaml --weights '' --batch-size 8
Step 5: Monitor and Evaluate Training
During training, monitor the progress and metrics printed to the console. To evaluate the trained model's performance on the validation dataset, use the val.py script:
python val.py --data data/data.yaml --weights path/to/best/weights.pt
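val.py reports detection metrics such as precision, recall, and mAP@0.5, all of which are built on the intersection-over-union (IoU) between predicted and ground-truth boxes. A minimal sketch of that underlying computation:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Identical boxes score 1.0; disjoint boxes score 0.0.
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))
```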
Owner
- Login: huuphanjr
- Kind: user
- Repositories: 1
- Profile: https://github.com/huuphanjr
Citation (CITATION.cff)
cff-version: 1.2.0
preferred-citation:
type: software
message: If you use YOLOv5, please cite it as below.
authors:
- family-names: Jocher
given-names: Glenn
orcid: "https://orcid.org/0000-0001-5950-6979"
title: "YOLOv5 by Ultralytics"
version: 7.0
doi: 10.5281/zenodo.3908559
date-released: 2020-5-29
license: AGPL-3.0
url: "https://github.com/ultralytics/yolov5"
Dependencies
- pytorch/pytorch 2.0.0-cuda11.7-cudnn8-runtime build
- gcr.io/google-appengine/python latest build
- matplotlib >=3.3.0
- numpy >=1.22.2
- opencv-python >=4.6.0
- pandas >=1.1.4
- pillow >=7.1.2
- psutil *
- py-cpuinfo *
- pyyaml >=5.3.1
- requests >=2.23.0
- scipy >=1.4.1
- seaborn >=0.11.0
- thop >=0.1.1
- torch >=1.8.0
- torchvision >=0.9.0
- tqdm >=4.64.0
- ultralytics >=8.0.232
- Flask *
- Pillow *
- PyQt5 *
- PySide6 >=6.6.1
- PyYAML >=5.3.1
- matplotlib >=3.2.2
- numpy >=1.18.5
- onnx *
- onnxsim *
- opencv-python >=4.1.2
- pandas *
- psutil *
- pycocotools >=2.0
- requests *
- scipy >=1.4.1
- seaborn >=0.11.0
- tensorboard >=2.4.1
- tensorflow *
- thop *
- torch >=1.7.0
- torchvision >=0.8.1
- tqdm >=4.41.0
- ultralytics *
- utils *
- Flask ==2.3.2
- gunicorn ==19.10.0
- pip ==23.3
- werkzeug >=3.0.1