Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (11.2%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: FITI-HCITA
- License: agpl-3.0
- Language: Python
- Default Branch: human_detect_VA8801
- Size: 14.5 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 3
- Releases: 0
Metadata Files
README.md
AI PIPELINE🚀
See the YOLOv5 Docs for full documentation on training, testing and deployment. See below for quickstart examples.
Install
1. Create a Python environment.
   - It is recommended to use **Anaconda** to set up the Python environment. Here is the [Miniconda Install Tutorial](https://medium.com/@hmchang/%E7%B5%A6%E5%88%9D%E5%AD%B8%E8%80%85%E7%9A%84-python-%E5%AE%89%E8%A3%9D%E6%95%99%E5%AD%B8-578bf0de9cf8).
   - TFLite conversion is supported with **Python 3.9.0** and **TensorFlow 2.13.0**.
   ```bash
   conda create --name yolov5 python=3.9.0
   conda activate yolov5
   pip install tensorflow==2.13.0
   pip install Pillow==9.5
   ```
2. Clone the repo and install [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) in a [**Python>=3.7.0**](https://www.python.org/) environment, including [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/).
   ```bash
   git clone -b human_detect_VA8801 https://github.com/FITI-HCITA/yolov5.git  # clone
   cd yolov5
   pip install -r requirements.txt  # install
   ```
3. Clone VA8801_Model_Zoo (download the VA8801 pretrained models).
   ```bash
   git clone https://github.com/FITI-HCITA/VA8801_Model_Zoo.git
   ```

How to Generate a YOLOv5 Model for VA8801?
- Prepare Dataset: use the example data at `data/dataset`, or use your own custom dataset.
1. Inference: run the testing data through a TFLite pretrained model, which can be downloaded from the model zoo for the Human model (input=96x96x1).
   - Check your local model path: `-w "pretrained tflite model path"`.
   - Example local model folder path: `VA8801_Model_Zoo/ObjectDetection/Human_Detection/Yolo`
   ```bash
   python tflite_runtime.py -s data/dataset/test/human_001.jpg -w path/HUMAN_DET_6_001_001.tflite --img_ch 1
   ```
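A YOLO-style TFLite runner such as `tflite_runtime.py` typically post-processes raw model outputs from normalized center-format boxes into pixel corner coordinates. The sketch below illustrates that conversion; the function and variable names are illustrative, not taken from this repo's code.

```python
# Convert a normalized YOLO box (cx, cy, w, h in [0, 1]) to pixel corners.
# Illustrative only; the exact post-processing in tflite_runtime.py may differ.

def xywhn_to_xyxy(box, img_w, img_h):
    """Map a normalized (cx, cy, w, h) box to pixel (x1, y1, x2, y2)."""
    cx, cy, w, h = box
    x1 = (cx - w / 2) * img_w
    y1 = (cy - h / 2) * img_h
    x2 = (cx + w / 2) * img_w
    y2 = (cy + h / 2) * img_h
    return (x1, y1, x2, y2)

# A detection centered in a 96x96 frame, covering half the image:
print(xywhn_to_xyxy((0.5, 0.5, 0.5, 0.5), 96, 96))  # (24.0, 24.0, 72.0, 72.0)
```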
2. Inference: run the testing data through a TFLite pretrained model, which can be downloaded from the model zoo for the Human model (input=320x320x3).
   ```bash
   python tflite_runtime.py -s data/dataset/test/human_002.jpg -w path/HUMAN_DET_7_002_002.tflite --img_ch 3
   ```
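The `--img_ch` flag selects between the 1-channel (96x96x1) and 3-channel (320x320x3) models. For the 1-channel case, RGB input must be reduced to a single luminance channel; a common choice is the ITU-R BT.601 weighting sketched below (whether `tflite_runtime.py` uses exactly this formula is an assumption).

```python
def rgb_to_gray(r, g, b):
    """ITU-R BT.601 luma: one common RGB-to-grayscale reduction."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Pure white stays at full intensity; pure green contributes most of the luma.
print(round(rgb_to_gray(255, 255, 255)))  # 255
print(round(rgb_to_gray(0, 255, 0)))      # 150
```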
- Train model: transfer learning with a PyTorch pretrained model, which can be downloaded from the model zoo for the Human model (input=96x96x1).
  - Check your local model path: `--weights "pretrained pytorch model path"`.
  - Example local model folder path: `VA8801_Model_Zoo/ObjectDetection/Human_Detection/Yolo`
  - Check your PC device: `--device "cuda device, i.e. 0 or 0,1,2,3 or cpu"`.
  ```bash
  python train.py --device 0 --data data/training_cfg/data_config.yaml --weights path/HUMAN_DET_6_001_001.pt --imgsz 96 --imgch 1 --cfg models/yolov5n_WM005_DM033.yaml
  ```
- Train model: transfer learning with a PyTorch pretrained model, which can be downloaded from the model zoo for the Human model (input=320x320x3).
  ```bash
  python train.py --device 0 --data data/training_cfg/data_config.yaml --weights path/HUMAN_DET_7_002_002.pt --imgsz 320 --imgch 3 --cfg models/2_head_yolov5n_WM022.yaml
  ```
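The training commands above point `--data` at `data/training_cfg/data_config.yaml`. A hypothetical sketch of such a file, following the standard YOLOv5 data-YAML layout (the actual keys, paths, and class list in this repo's file are assumptions):

```yaml
# Hypothetical data_config.yaml in the standard YOLOv5 data-YAML layout;
# the real file in data/training_cfg/ may differ.
path: data/dataset      # dataset root directory
train: images/train     # training images, relative to path
val: images/val         # validation images, relative to path
nc: 1                   # number of classes
names: ["human"]        # class names, one per class index
```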
- Export int8 TFLite model
  - Check your local model path: `--weights "your pytorch model path"`.
  - After training, your trained model is saved at `results/yyyy_mm_dd/trialx/weights/best.pt`.
  - Check the image size for TFLite export: `--imgsz_tflite "image size"`.
  - Check your PC device: `--device "cuda device, i.e. 0 or 0,1,2,3 or cpu"`.
  - If model input=96x96x1:
    ```bash
    python ai_pipeline.py --data data/training_cfg/data_config.yaml --weights path/HUMAN_DET_6_001_001.pt --batch-size 1 --imgch 1 --imgsz 96 --imgsz_tflite 96 --device 0 --include tflite --int8 --run export
    ```
  - If model input=320x320x3:
    ```bash
    python ai_pipeline.py --data data/training_cfg/data_config.yaml --weights path/HUMAN_DET_7_002_002.pt --batch-size 1 --imgch 3 --imgsz 320 --imgsz_tflite 320 --device 0 --include tflite --int8 --run export
    ```
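The `--int8` export quantizes float weights and activations to 8-bit integers using an affine mapping, `q = round(x / scale) + zero_point`, clamped to [-128, 127]; the scale and zero-point come from calibration data. A minimal sketch of that arithmetic (the illustrative scale/zero-point values below are not from this repo):

```python
# Affine int8 quantization: q = clamp(round(x / scale) + zero_point, -128, 127).
# scale and zero_point here are illustrative; real values come from calibration.

def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

scale, zp = 1.0 / 255, -128
q = quantize(0.25, scale, zp)
print(q, round(dequantize(q, scale, zp), 3))  # -64 0.251
```

Round-tripping a value through quantize/dequantize shows the quantization error the calibration step tries to minimize.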
Example: train from scratch
- Run training only:
  ```bash
  python ai_pipeline.py --data --cfg
  ```

Owner
- Login: FITI-HCITA
- Kind: user
- Repositories: 1
- Profile: https://github.com/FITI-HCITA
Citation (CITATION.cff)
cff-version: 1.2.0
preferred-citation:
type: software
message: If you use YOLOv5, please cite it as below.
authors:
- family-names: Jocher
given-names: Glenn
orcid: "https://orcid.org/0000-0001-5950-6979"
title: "YOLOv5 by Ultralytics"
version: 7.0
doi: 10.5281/zenodo.3908559
date-released: 2020-5-29
license: AGPL-3.0
url: "https://github.com/ultralytics/yolov5"
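Tooling that consumes the CITATION.cff above can extract its flat `key: value` fields to build a citation string. A stdlib-only sketch (a real pipeline would use a YAML parser such as PyYAML, and the `cff` snippet below repeats only a subset of the file's fields):

```python
# Parse simple `key: value` lines from a CFF fragment using only the stdlib.
cff = """\
title: "YOLOv5 by Ultralytics"
version: 7.0
doi: 10.5281/zenodo.3908559
url: "https://github.com/ultralytics/yolov5"
"""

def parse_flat_cff(text):
    """Split each line on its first colon and strip surrounding quotes."""
    fields = {}
    for line in text.splitlines():
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip().strip('"')
    return fields

meta = parse_flat_cff(cff)
citation = f"{meta['title']} (v{meta['version']}). doi:{meta['doi']}"
print(citation)  # YOLOv5 by Ultralytics (v7.0). doi:10.5281/zenodo.3908559
```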
Dependencies
- actions/checkout v3 composite
- github/codeql-action/analyze v2 composite
- github/codeql-action/autobuild v2 composite
- github/codeql-action/init v2 composite
- actions/checkout v3 composite
- docker/build-push-action v4 composite
- docker/login-action v2 composite
- docker/setup-buildx-action v2 composite
- docker/setup-qemu-action v2 composite
- actions/first-interaction v1 composite
- actions/checkout v3 composite
- nick-invision/retry v2 composite
- actions/stale v8 composite
- actions/checkout v3 composite
- actions/setup-node v3 composite
- dephraiim/translate-readme main composite
- pytorch/pytorch 2.0.0-cuda11.7-cudnn8-runtime build
- gcr.io/google-appengine/python latest build
- Pillow >=7.1.2
- PyYAML >=5.3.1
- gitpython >=3.1.30
- matplotlib >=3.3
- numpy >=1.18.5
- opencv-python >=4.1.1
- pandas >=1.1.4
- psutil *
- requests >=2.23.0
- scipy >=1.4.1
- seaborn >=0.11.0
- setuptools >=65.5.1
- thop >=0.1.1
- torchvision >=0.8.1
- tqdm >=4.64.0
- Flask ==2.3.2
- gunicorn ==19.10.0
- pip ==21.1
- werkzeug >=2.2.3