Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.7%) to scientific vocabulary
Last synced: 7 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: lck981202
  • License: agpl-3.0
  • Language: Python
  • Default Branch: master
  • Size: 48.9 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created about 2 years ago · Last pushed about 2 years ago
Metadata Files
Readme License Citation

README.md

Honeybee in-and-out activity counting based on YOLOv8 and boxmot



Introduction

This repo contains a collection of state-of-the-art multi-object trackers. Some are based on motion only, others on motion plus appearance description. For the latter, state-of-the-art ReID models are downloaded automatically as well. Currently supported trackers are DeepOCSORT (LightMBN), BoTSORT (LightMBN), StrongSORT (LightMBN), OCSORT and ByteTrack. They can be found in Boxmot; our work builds on Boxmot.

We provide examples of how to use this package together with popular object detection models; right now Yolov8, Yolo-NAS and YOLOX are available. On top of YOLOv8 and these MOT models, we implement two counting approaches: the single-line method and the box method.

Tutorials

* [Yolov8 training (link to external repository)](https://docs.ultralytics.com/modes/train/)
* [Deep appearance descriptor training (link to external repository)](https://kaiyangzhou.github.io/deep-person-reid/user_guide.html)
* [ReID model export to ONNX, OpenVINO, TensorRT and TorchScript](https://github.com/mikel-brostrom/yolov8_tracking/wiki/ReID-multi-framework-model-export)
* [Evaluation on custom tracking dataset](https://github.com/mikel-brostrom/yolov8_tracking/wiki/How-to-evaluate-on-custom-tracking-dataset)
* [ReID inference acceleration with Nebullvm](https://colab.research.google.com/drive/1APUZ1ijCiQFBR9xD0gUvFUOC8yOJIvHm?usp=sharing)
Experiments

In inverse chronological order:

* [Evaluation of the params evolved for first half of MOT17 on the complete MOT17](https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/wiki/Evaluation-of-the-params-evolved-for-first-half-of-MOT17-on-the-complete-MOT17)
* [MOT metrics](https://github.com/cheind/py-motmetrics)
* [Darklabel for MOT](https://github.com/darkpgmr/DarkLabel)
Datasets

In inverse chronological order:

* [honeybee detection training set](https://zenodo.org/records/10827406)
* [test set for MOT](https://zenodo.org/records/10828561)
Why use this counting method?

Depending on the relative position of the camera and the hive entrance, and on how the bees move, some honey bees enter the beehive from below or from the side. A detection line placed only above the entrance cannot handle these cases. We therefore replaced the detection line with a detection box, and named this approach the box method; it handles bees that enter the hive from other directions. The box method introduces its own problem: some bees merely pass through the detection box without actually entering the hive.

Installation

Start with a [**Python>=3.8**](https://www.python.org/) environment. If you want to run the YOLOv8, YOLO-NAS or YOLOX examples:

```
git clone https://github.com/mikel-brostrom/yolo_tracking.git
pip install -v -e .
```

but if you only want to import the tracking modules you can simply:

```
pip install boxmot
```

Tracking and Counting examples
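The box method described above can be sketched in a few lines. This is an illustrative helper, not part of this repo: it assumes each track is a sequence of bounding-box centre points and the detection box is axis-aligned, and counts a bee as "in" when its centre moves from outside the box to inside (and "out" for the reverse).

```python
def inside(point, box):
    """Return True if point (x, y) lies within box = (x1, y1, x2, y2)."""
    x, y = point
    x1, y1, x2, y2 = box
    return x1 <= x <= x2 and y1 <= y <= y2

def count_crossings(track, box):
    """Count in/out transitions for one track (a list of centre points)."""
    ins = outs = 0
    for prev, cur in zip(track, track[1:]):
        if not inside(prev, box) and inside(cur, box):
            ins += 1   # centre moved from outside the detection box to inside
        elif inside(prev, box) and not inside(cur, box):
            outs += 1  # centre moved from inside the detection box to outside
    return ins, outs
```

A real implementation would additionally have to discount tracks that only pass through the box, which is exactly the limitation the text mentions.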
Tracking
Yolo models

```bash
$ python examples/track.py --yolo-model yolov8n       # bboxes only
  python examples/track.py --yolo-model yolo_nas_s    # bboxes only
  python examples/track.py --yolo-model yolox_n       # bboxes only
                                        yolov8n-seg   # bboxes + segmentation masks
                                        yolov8n-pose  # bboxes + pose estimation
```
Tracking methods

```bash
$ python examples/track.py --tracking-method deepocsort
                                             strongsort
                                             ocsort
                                             bytetrack
                                             botsort
```
Tracking sources

Tracking can be run on most video formats

```bash
$ python examples/track.py --source 0                               # webcam
                                    img.jpg                         # image
                                    vid.mp4                         # video
                                    path/                           # directory
                                    path/*.jpg                      # glob
                                    'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                                    'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
```
Select Yolov8 model

There is a clear trade-off between model inference speed and overall performance. To meet your inference speed/accuracy needs you can select a Yolov8 family model for automatic download. These models can be further optimized for your needs by the [export.py](https://github.com/ultralytics/yolov5/blob/master/export.py) script

```bash
$ python examples/track.py --source 0 --yolo-model yolov8n.pt --img 640
                                                   yolov8s.tflite
                                                   yolov8m.pt
                                                   yolov8l.onnx
                                                   yolov8x.pt --img 1280
                                                   ...
```
Select ReID model

Some tracking methods combine appearance description and motion in the process of tracking. For those which use appearance, you can choose a ReID model based on your needs from this [ReID model zoo](https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO). These models can be further optimized for your needs by the [reid_export.py](https://github.com/mikel-brostrom/Yolov5_StrongSORT_OSNet/blob/master/reid_export.py) script

```bash
$ python examples/track.py --source 0 --reid-model lmbn_n_cuhk03_d.pt
                                                   osnet_x0_25_market1501.pt
                                                   mobilenetv2_x1_4_msmt17.engine
                                                   resnet50_msmt17.onnx
                                                   osnet_x1_0_msmt17.pt
                                                   ...
```
Filter tracked classes

By default the tracker tracks all MS COCO classes. If you want to track a subset of the classes that your model predicts, add their corresponding indices after the classes flag,

```bash
python examples/track.py --source 0 --yolo-model yolov8s.pt --classes 16 17  # COCO yolov8 model. Track cats and dogs, only
```

[Here](https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/) is a list of all the possible objects that a Yolov8 model trained on MS COCO can detect. Notice that the indexing for the classes in this repo starts at zero.
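When calling the tracker from your own code rather than the CLI, the same class filtering can be applied to the Nx6 detection array before it reaches `tracker.update`. A minimal sketch; the helper name is ours, not part of boxmot:

```python
import numpy as np

def filter_classes(dets, keep):
    """Keep the rows of an Nx6 (x1, y1, x2, y2, conf, cls) detection
    array whose class index (last column) is in `keep`."""
    return dets[np.isin(dets[:, 5], keep)]
```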
MOT compliant results

Can be saved to your experiment folder `runs/track/_/` by

```bash
python examples/track.py --source ... --save-txt
```
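MOT compliant results follow the MOTChallenge text convention: one detection per line, with comma-separated columns `frame, id, bb_left, bb_top, bb_width, bb_height, conf, ...`. A hedged sketch for grouping such lines per track, assuming that column layout:

```python
def parse_mot(lines):
    """Group MOTChallenge-style result lines by track id.

    Each line: frame, id, bb_left, bb_top, bb_width, bb_height, conf, ...
    Returns {track_id: [(frame, x, y, w, h), ...]}.
    """
    tracks = {}
    for line in lines:
        f = line.strip().split(',')
        frame, tid = int(f[0]), int(float(f[1]))
        x, y, w, h = (float(v) for v in f[2:6])
        tracks.setdefault(tid, []).append((frame, x, y, w, h))
    return tracks
```

Per-track point sequences in this shape are what a downstream counting step (single-line or box method) would consume.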

Counting method

Demo1 is the single-line method with visualization, Demo1hide is the single-line method for headless runs; Demo2 is the box method with visualization, Demo2hide is the box method for headless runs.

```bash
$ python examples/demo_1.py --tracking-method deepocsort
                                              strongsort
                                              ocsort
                                              bytetrack
                                              botsort
```

Counting sources

Counting can be run on most video formats

```bash
$ python examples/demo_1.py --source 0                               # webcam
                                     img.jpg                         # image
                                     vid.mp4                         # video
                                     path/                           # directory
                                     path/*.jpg                      # glob
                                     'https://youtu.be/Zgi9g1ksQHc'  # YouTube
                                     'rtsp://example.com/media.mp4'  # RTSP, RTMP, HTTP stream
```
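For comparison with the box method, the single-line method reduces to a per-step crossing test against the detection line. An illustrative sketch, not taken from the demo scripts, assuming a horizontal line and image coordinates with y growing downwards:

```python
def line_crossing(prev_y, cur_y, line_y):
    """Classify one track step against a horizontal detection line.

    Returns 'in' when the centre crosses the line downwards,
    'out' when it crosses upwards, and None when it does not cross.
    """
    if prev_y < line_y <= cur_y:
        return 'in'
    if cur_y < line_y <= prev_y:
        return 'out'
    return None
```

Which direction counts as "in" depends on where the hive entrance sits relative to the line in the actual camera setup.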

Custom object detection model example

Click to expand!

```python
import cv2 as cv
from pathlib import Path

from boxmot import DeepOCSORT

tracker = DeepOCSORT(
    model_weights=Path('osnet_x0_25_msmt17.pt'),  # which ReID model to use
    device='cuda:0',  # 'cpu', 'cuda:0', 'cuda:1', ... 'cuda:N'
    fp16=True,  # whether to run the ReID model with half precision or not
)

cap = cv.VideoCapture(0)
while True:
    ret, im = cap.read()
    ...
    # dets (numpy.ndarray):
    #  - your model's NMS-ed outputs of shape Nx6 (x, y, x, y, conf, cls)
    # im (numpy.ndarray):
    #  - the original HxWx3 image (for better ReID results)
    #  - or the downscaled HxWx3 image fed to your model (faster)
    tracker_outputs = tracker.update(dets, im)  # --> (x, y, x, y, id, conf, cls)
    ...
```
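The `dets` argument above can come from any detector. A minimal sketch (the helper name is ours, for illustration only) assembling per-detection boxes, confidences and class ids into the Nx6 array that the tracker expects:

```python
import numpy as np

def to_dets(boxes, scores, classes):
    """Stack (x1, y1, x2, y2) boxes, confidence scores and class ids
    into an Nx6 (x, y, x, y, conf, cls) array for tracker.update()."""
    boxes = np.asarray(boxes, dtype=float).reshape(-1, 4)
    scores = np.asarray(scores, dtype=float).reshape(-1, 1)
    classes = np.asarray(classes, dtype=float).reshape(-1, 1)
    return np.hstack([boxes, scores, classes])
```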

Owner

  • Login: lck981202
  • Kind: user

GitHub Events

Total
Last Year

Dependencies

.github/workflows/ci.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/publish.yml actions
  • actions/checkout v3 composite
  • actions/create-release v1 composite
  • actions/setup-python v4 composite
  • pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite
.github/workflows/stale.yml actions
  • actions/stale v3 composite
Dockerfile docker
  • nvcr.io/nvidia/pytorch 22.11-py3 build
requirements.txt pypi
  • GitPython >=3.1.0
  • PyYAML >=5.3.1
  • filterpy >=1.4.5
  • gdown >=4.7.1
  • loguru >=0.7.0
  • numpy ==1.23.1
  • opencv-python >=4.6.0
  • pandas >=1.1.4
  • torch >=1.7.0
  • torchvision >=0.8.1
setup.py pypi