musong
Packaged version of ultralytics/yolov5 + many extra features
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 2 of 15 committers (13.3%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.1%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
Packaged version of ultralytics/yolov5 + many extra features
Basic Info
- Host: GitHub
- Owner: fcakyon
- License: gpl-3.0
- Language: Python
- Default Branch: main
- Homepage: https://pypi.org/project/yolov5/
- Size: 1.56 MB
Statistics
- Stars: 294
- Watchers: 4
- Forks: 70
- Open Issues: 3
- Releases: 54
Topics
Metadata Files
README.md
packaged ultralytics/yolov5
pip install yolov5
Overview
This yolov5 package contains everything from ultralytics/yolov5 at this commit plus:
1. Easy installation via pip: pip install yolov5
2. Full CLI integration with fire package
3. COCO dataset format support (for training)
4. Full 🤗 Hub integration
5. S3 support (model and dataset upload)
6. NeptuneAI logger support (metric, model and dataset logging)
7. Classwise AP logging during experiments
Install
Install yolov5 using pip (for Python >=3.7)
```console
pip install yolov5
```
Model Zoo
Use from Python
```python
import yolov5

# load pretrained model
model = yolov5.load('yolov5s.pt')

# or load custom model
model = yolov5.load('train/best.pt')

# set model parameters
model.conf = 0.25  # NMS confidence threshold
model.iou = 0.45  # NMS IoU threshold
model.agnostic = False  # NMS class-agnostic
model.multi_label = False  # NMS multiple labels per box
model.max_det = 1000  # maximum number of detections per image

# set image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

# perform inference
results = model(img)

# inference with larger input size
results = model(img, size=1280)

# inference with test time augmentation
results = model(img, augment=True)

# parse results
predictions = results.pred[0]
boxes = predictions[:, :4]  # x1, y1, x2, y2
scores = predictions[:, 4]
categories = predictions[:, 5]

# show detection bounding boxes on image
results.show()

# save results into "results/" folder
results.save(save_dir='results/')
```
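As a further illustration of the 🤗 Hub integration listed in the overview, the sketch below loads weights by Hub model id and reads the detections back as pandas DataFrames. The model id `username/modelname` is a placeholder, and the snippet assumes that `yolov5.load` accepts Hub ids in addition to local `.pt` paths and that the returned results object exposes the upstream `.pandas()` helper.

```python
import yolov5

# load weights published to the Hugging Face Hub (placeholder model id;
# assumes yolov5.load resolves Hub ids as well as local .pt paths)
model = yolov5.load('username/modelname')

# run inference on a sample image
img = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'
results = model(img, size=640)

# convert detections to a pandas DataFrame (xmin, ymin, xmax, ymax,
# confidence, class, name), via the upstream Detections.pandas() helper
df = results.pandas().xyxy[0]
print(df.head())
```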
Train/Detect/Test/Export
- You can directly use these functions by importing them:

```python
from yolov5 import train, val, detect, export
# from yolov5.classify import train, val, predict
# from yolov5.segment import train, val, predict

train.run(imgsz=640, data='coco128.yaml')
val.run(imgsz=640, data='coco128.yaml', weights='yolov5s.pt')
detect.run(imgsz=640)
export.run(imgsz=640, weights='yolov5s.pt')
```

- You can pass any argument as input:

```python
from yolov5 import detect

img_url = 'https://github.com/ultralytics/yolov5/raw/master/data/images/zidane.jpg'

detect.run(source=img_url, weights="yolov5s6.pt", conf_thres=0.25, imgsz=640)
```

Use from CLI
You can call the `yolov5 train`, `yolov5 detect`, `yolov5 val` and `yolov5 export` commands after installing the package via pip:
Training
- Finetune one of the pretrained YOLOv5 models using your custom `data.yaml`:

```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt --batch-size 16 --img 640
                                          yolov5m.pt              8
                                          yolov5l.pt              4
                                          yolov5x.pt              2
```

- Start a training using a COCO formatted dataset:

```yaml
# data.yml
train_json_path: "train.json"
train_image_dir: "train_image_dir/"
val_json_path: "val.json"
val_image_dir: "val_image_dir/"
```

```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt
```

- Train your model using [Roboflow Universe](https://universe.roboflow.com/) datasets (roboflow>=0.2.29 required):

```bash
$ yolov5 train --data DATASET_UNIVERSE_URL --weights yolov5s.pt --roboflow_token YOUR_ROBOFLOW_TOKEN
```

Where `DATASET_UNIVERSE_URL` must be in `https://universe.roboflow.com/workspace_name/project_name/project_version` format.

- Visualize your experiments via [Neptune.AI](https://neptune.ai/) (neptune-client>=0.10.10 required):

```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt --neptune_project NAMESPACE/PROJECT_NAME --neptune_token YOUR_NEPTUNE_TOKEN
```

- Automatically upload weights to [Huggingface Hub](https://huggingface.co/models?other=yolov5):

```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt --hf_model_id username/modelname --hf_token YOUR-HF-WRITE-TOKEN
```

- Automatically upload weights and datasets to AWS S3 (with Neptune.AI artifact tracking integration):

```bash
export AWS_ACCESS_KEY_ID=YOUR_KEY
export AWS_SECRET_ACCESS_KEY=YOUR_KEY
```

```bash
$ yolov5 train --data data.yaml --weights yolov5s.pt --s3_upload_dir YOUR_S3_FOLDER_DIRECTORY --upload_dataset
```

- Add `yolo_s3_data_dir` into `data.yaml` to match the Neptune dataset with a dataset already present in S3.

```yaml
# data.yml
train_json_path: "train.json"
train_image_dir: "train_image_dir/"
val_json_path: "val.json"
val_image_dir: "val_image_dir/"
yolo_s3_data_dir: s3://bucket_name/data_dir/
```
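Since the CLI is built with the fire package on top of the same training module, the flags above should map to keyword arguments of `train.run`. The sketch below is a hypothetical programmatic equivalent of the COCO-format and Hugging Face upload calls; the paths, model id, and token are placeholders, and the `hf_*` keyword names are an assumption mirrored from the CLI flags.

```python
from yolov5 import train

# Hypothetical programmatic equivalent of the CLI training calls above,
# assuming the fire-based CLI forwards these flags as keyword arguments
# of train.run. Paths, model id, and token are placeholders.
train.run(
    data='data.yaml',                  # COCO-formatted data.yaml as shown above
    weights='yolov5s.pt',
    imgsz=640,
    hf_model_id='username/modelname',  # assumption: mirrors --hf_model_id
    hf_token='YOUR-HF-WRITE-TOKEN',    # assumption: mirrors --hf_token
)
```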
Inference

The `yolov5 detect` command runs inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases) and saving results to `runs/detect`.

```bash
$ yolov5 detect --source 0  # webcam
                         file.jpg  # image
                         file.mp4  # video
                         path/  # directory
                         path/*.jpg  # glob
                         rtsp://170.93.143.139/rtplive/470011e600ef003a004ee33696235daa  # rtsp stream
                         rtmp://192.168.1.105/live/test  # rtmp stream
                         http://112.50.243.8/PLTV/88888888/224/3221225900/1.m3u8  # http stream
```
Export

You can export your fine-tuned YOLOv5 weights to any format such as `torchscript`, `onnx`, `coreml`, `pb`, `tflite`, `tfjs`:

```bash
$ yolov5 export --weights yolov5s.pt --include torchscript,onnx,coreml,pb,tfjs
```
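For completeness, here is a minimal sketch of consuming the TorchScript export with plain PyTorch, assuming the export step wrote a `yolov5s.torchscript` file next to the weights; real inference would still need YOLOv5-style preprocessing (letterbox resize, 0-1 normalization) and NMS on the raw output, which are omitted here.

```python
import torch

# load the TorchScript file produced by `yolov5 export` (assumed filename)
model = torch.jit.load('yolov5s.torchscript')
model.eval()

# dummy batch of one 640x640 RGB image, just to exercise the forward pass
dummy = torch.zeros(1, 3, 640, 640)
with torch.no_grad():
    pred = model(dummy)
print(type(pred))
```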
Classify

Train/Val/Predict with the YOLOv5 image classifier:

```bash
$ yolov5 classify train --img 640 --data mnist2560 --weights yolov5s-cls.pt --epochs 1
```

```bash
$ yolov5 classify predict --img 640 --weights yolov5s-cls.pt --source images/
```
Segment

Train/Val/Predict with the YOLOv5 instance segmentation model:

```bash
$ yolov5 segment train --img 640 --weights yolov5s-seg.pt --epochs 1
```

```bash
$ yolov5 segment predict --img 640 --weights yolov5s-seg.pt --source images/
```

Owner
- Name: fatih akyon
- Login: fcakyon
- Kind: user
- Location: Ankara, Turkiye
- Company: @viddexa @ultralytics
- Twitter: fcakyon
- Repositories: 139
- Profile: https://github.com/fcakyon
helping AI's to understand videos at @ultralytics & @viddexa
GitHub Events
Total
- Watch event: 6
- Delete event: 1
- Issue comment event: 4
- Push event: 2
- Pull request review comment event: 5
- Pull request review event: 8
- Pull request event: 5
- Fork event: 3
- Create event: 1
Last Year
- Watch event: 6
- Delete event: 1
- Issue comment event: 4
- Push event: 2
- Pull request review comment event: 5
- Pull request review event: 8
- Pull request event: 5
- Fork event: 3
- Create event: 1
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| fatih | 3****n | 234 |
| Piotr Skalski | p****2@g****m | 2 |
| Kadir Nar | k****r@h****m | 2 |
| ngxingyu | n****u@u****u | 1 |
| merdini | m****n@g****m | 1 |
| Zacchaeus Scheffer | S****t | 1 |
| Petros626 | 6****6 | 1 |
| Muhammad Salman Kabir | 5****n | 1 |
| Muhammad Arif Faizin | 4****n | 1 |
| Lai Quang Huy | 6****h | 1 |
| Kazybek Askarbek | k****k@i****y | 1 |
| Juan Carlos Roman | j****r@g****m | 1 |
| Ihsan Soydemir | s****n@g****m | 1 |
| Hasan Emir Akın | 9****n | 1 |
| ABYZOU | 8****e | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 0
- Total pull requests: 116
- Average time to close issues: N/A
- Average time to close pull requests: 6 days
- Total issue authors: 0
- Total pull request authors: 19
- Average comments per issue: 0
- Average comments per pull request: 0.45
- Merged pull requests: 98
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 6
- Average time to close issues: N/A
- Average time to close pull requests: 29 days
- Issue authors: 0
- Pull request authors: 3
- Average comments per issue: 0
- Average comments per pull request: 1.0
- Merged pull requests: 2
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
- fcakyon (86)
- SIR-unit (3)
- SkalskiP (3)
- kadirnar (3)
- lachiewalker (2)
- muhammadariffaizin (2)
- topherbuckley (2)
- jc-roman (2)
- CanKorkut (2)
- 1qh (2)
- Isydmr (1)
- keremberke (1)
- dongfz (1)
- severin-lemaignan (1)
- 5a7man (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: pypi: 71,496 last-month
- Total docker downloads: 819
- Total dependent packages: 17 (may contain duplicates)
- Total dependent repositories: 72 (may contain duplicates)
- Total versions: 59
- Total maintainers: 2
pypi.org: yolov5
Packaged version of the Yolov5 object detector
- Homepage: https://github.com/fcakyon/yolov5-pip
- Documentation: https://yolov5.readthedocs.io/
- License: GPL
- Latest release: 7.0.14 (published over 1 year ago)
Rankings
Maintainers (1)
pypi.org: musong
Packaged version of the Yolov5 object detector
- Homepage: https://github.com/fcakyon/yolov5-pip
- Documentation: https://musong.readthedocs.io/
- License: GPL
- Latest release: 0.0.6 (published about 3 years ago)
Rankings
Maintainers (1)
Dependencies
- Pillow >=7.1.2
- PyYAML >=5.3.1
- boto3 >=1.19.1
- fire *
- matplotlib >=3.2.2
- numpy >=1.18.5
- opencv-python >=4.1.2
- pandas >=1.1.4
- protobuf <=3.20.1
- requests >=2.23.0
- sahi >=0.9.1
- scipy >=1.4.1
- seaborn >=0.11.0
- tensorboard >=2.4.1
- thop *
- torch >=1.7.0
- torchvision >=0.8.1
- tqdm >=4.41.0
- actions/cache v3 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/cache v3 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite