Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.6%) to scientific vocabulary

Repository

Basic Info
  • Host: GitHub
  • Owner: kamrankhan361k
  • License: mit
  • Language: Python
  • Default Branch: main
  • Size: 5.46 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed 11 months ago
Metadata Files
Readme Contributing License Citation

README.md


[notebooks](https://github.com/roboflow/notebooks) | [inference](https://github.com/roboflow/inference) | [autodistill](https://github.com/autodistill/autodistill) | [collect](https://github.com/roboflow/roboflow-collect)
[![version](https://badge.fury.io/py/supervision.svg)](https://badge.fury.io/py/supervision) [![downloads](https://img.shields.io/pypi/dm/supervision)](https://pypistats.org/packages/supervision) [![license](https://img.shields.io/pypi/l/supervision)](https://github.com/roboflow/supervision/blob/main/LICENSE.md) [![python-version](https://img.shields.io/pypi/pyversions/supervision)](https://badge.fury.io/py/supervision) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow/supervision/blob/main/demo.ipynb)

👋 hello

We write reusable computer vision tools for you. Whether you need to load a dataset from your hard drive, draw detections on an image or video, or count how many detections are in a zone, you can count on us! 🤝
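The zone-counting idea mentioned above can be sketched without the library: count how many detection box centers fall inside a polygonal zone using a ray-casting point-in-polygon test. This is a minimal, library-free illustration of the concept; `point_in_polygon` and `count_in_zone` are hypothetical helpers, not part of the supervision API (supervision ships its own zone utilities).

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as (px, py) pairs?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray at height y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def count_in_zone(boxes_xyxy, polygon):
    """Count boxes (x_min, y_min, x_max, y_max) whose center lies inside the zone."""
    count = 0
    for x_min, y_min, x_max, y_max in boxes_xyxy:
        cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
        if point_in_polygon(cx, cy, polygon):
            count += 1
    return count

zone = [(0, 0), (100, 0), (100, 100), (0, 100)]
boxes = [(10, 10, 30, 30), (40, 40, 80, 80), (200, 200, 220, 220)]
print(count_in_zone(boxes, zone))  # 2 (centers (20, 20) and (60, 60) are inside)
```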

💻 install

Pip install the supervision package in a Python>=3.8,<3.12 environment.

```bash
pip install supervision[desktop]
```

Read more about desktop, headless, and local installation in our guide.

🔥 quickstart

detections processing

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO('yolov8s.pt')
result = model(IMAGE)[0]
detections = sv.Detections.from_ultralytics(result)

len(detections)
# 5
```

👉 more detections utils

- Easily switch inference pipeline between supported object detection/instance segmentation models

  ```python
  >>> import supervision as sv
  >>> from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

  >>> sam = sam_model_registry[MODEL_TYPE](checkpoint=CHECKPOINT_PATH).to(device=DEVICE)
  >>> mask_generator = SamAutomaticMaskGenerator(sam)
  >>> sam_result = mask_generator.generate(IMAGE)
  >>> detections = sv.Detections.from_sam(sam_result=sam_result)
  ```

- [Advanced filtering](https://roboflow.github.io/supervision/quickstart/detections/)

  ```python
  >>> detections = detections[detections.class_id == 0]
  >>> detections = detections[detections.confidence > 0.5]
  >>> detections = detections[detections.area > 1000]
  ```

- Image annotation

  ```python
  >>> import supervision as sv

  >>> box_annotator = sv.BoxAnnotator()
  >>> annotated_frame = box_annotator.annotate(
  ...     scene=IMAGE,
  ...     detections=detections
  ... )
  ```

datasets processing

```python
import supervision as sv

dataset = sv.DetectionDataset.from_yolo(
    images_directory_path='...',
    annotations_directory_path='...',
    data_yaml_path='...'
)

dataset.classes
# ['dog', 'person']

len(dataset)
# 1000
```

👉 more dataset utils

- Load object detection/instance segmentation datasets in one of the supported formats

  ```python
  >>> dataset = sv.DetectionDataset.from_yolo(
  ...     images_directory_path='...',
  ...     annotations_directory_path='...',
  ...     data_yaml_path='...'
  ... )

  >>> dataset = sv.DetectionDataset.from_pascal_voc(
  ...     images_directory_path='...',
  ...     annotations_directory_path='...'
  ... )

  >>> dataset = sv.DetectionDataset.from_coco(
  ...     images_directory_path='...',
  ...     annotations_path='...'
  ... )
  ```

- Loop over dataset entries

  ```python
  >>> for name, image, labels in dataset:
  ...     print(labels.xyxy)
  array([[404.      , 719.      , 538.      , 884.5     ],
         [155.      , 497.      , 404.      , 833.5     ],
         [ 20.154999, 347.825   , 416.125   , 915.895   ]], dtype=float32)
  ```

- Split dataset for training, testing, and validation

  ```python
  >>> train_dataset, test_dataset = dataset.split(split_ratio=0.7)
  >>> test_dataset, valid_dataset = test_dataset.split(split_ratio=0.5)

  >>> len(train_dataset), len(test_dataset), len(valid_dataset)
  (700, 150, 150)
  ```

- Merge multiple datasets

  ```python
  >>> ds_1 = sv.DetectionDataset(...)
  >>> len(ds_1)
  100
  >>> ds_1.classes
  ['dog', 'person']

  >>> ds_2 = sv.DetectionDataset(...)
  >>> len(ds_2)
  200
  >>> ds_2.classes
  ['cat']

  >>> ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
  >>> len(ds_merged)
  300
  >>> ds_merged.classes
  ['cat', 'dog', 'person']
  ```

- Save object detection/instance segmentation datasets in one of the supported formats

  ```python
  >>> dataset.as_yolo(
  ...     images_directory_path='...',
  ...     annotations_directory_path='...',
  ...     data_yaml_path='...'
  ... )

  >>> dataset.as_pascal_voc(
  ...     images_directory_path='...',
  ...     annotations_directory_path='...'
  ... )

  >>> dataset.as_coco(
  ...     images_directory_path='...',
  ...     annotations_path='...'
  ... )
  ```

- Convert labels between supported formats

  ```python
  >>> sv.DetectionDataset.from_yolo(
  ...     images_directory_path='...',
  ...     annotations_directory_path='...',
  ...     data_yaml_path='...'
  ... ).as_pascal_voc(
  ...     images_directory_path='...',
  ...     annotations_directory_path='...'
  ... )
  ```

- Load classification datasets in one of the supported formats

  ```python
  >>> cs = sv.ClassificationDataset.from_folder_structure(
  ...     root_directory_path='...'
  ... )
  ```

- Save classification datasets in one of the supported formats

  ```python
  >>> cs.as_folder_structure(
  ...     root_directory_path='...'
  ... )
  ```
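The split arithmetic shown above (1000 entries → 700/150/150) follows from applying the ratio twice: 70% for training, then half of the remainder each for testing and validation. A minimal sketch of ratio-based splitting on a plain list; `split_list` is an illustrative helper, not the supervision API.

```python
import random

def split_list(items, split_ratio=0.7, shuffle=True, seed=0):
    """Split items into two parts; the first receives round(len * split_ratio)."""
    items = list(items)
    if shuffle:
        random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    cut = round(len(items) * split_ratio)
    return items[:cut], items[cut:]

entries = list(range(1000))                     # stand-in for dataset entries
train, rest = split_list(entries, split_ratio=0.7)
test, valid = split_list(rest, split_ratio=0.5)
print(len(train), len(test), len(valid))        # 700 150 150
```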

model evaluation

```python
import numpy as np
import supervision as sv

dataset = sv.DetectionDataset.from_yolo(...)

def callback(image: np.ndarray) -> sv.Detections:
    ...

confusion_matrix = sv.ConfusionMatrix.benchmark(
    dataset=dataset,
    callback=callback
)

confusion_matrix.matrix
# array([
#     [0., 0., 0., 0.],
#     [0., 1., 0., 1.],
#     [0., 1., 1., 0.],
#     [1., 1., 0., 0.]
# ])
```

👉 more metrics

- Mean average precision (mAP) for object detection tasks

  ```python
  >>> import supervision as sv

  >>> dataset = sv.DetectionDataset.from_yolo(...)

  >>> def callback(image: np.ndarray) -> sv.Detections:
  ...     ...

  >>> mean_average_precision = sv.MeanAveragePrecision.benchmark(
  ...     dataset=dataset,
  ...     callback=callback
  ... )

  >>> mean_average_precision.map50_95
  0.433
  ```
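Both metrics above hinge on matching predictions to ground truth by box IoU (mAP@50:95 averages AP over IoU thresholds from 0.5 to 0.95). A minimal, library-free sketch of that core computation; `box_iou` is an illustrative helper, not the supervision API.

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    # Intersection rectangle (clamped to zero when boxes do not overlap)
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(box_iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333... (overlap 50, union 150)
```

A prediction typically counts as a true positive when its IoU with a ground-truth box of the same class clears the threshold; everything else feeds the off-diagonal confusion-matrix cells.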

🛠️ built with supervision

Did you build something cool using supervision? Let us know!

https://user-images.githubusercontent.com/26109316/207858600-ee862b22-0353-440b-ad85-caa0c4777904.mp4

🎬 tutorials

Accelerate Image Annotation with SAM and Grounding DINO

Created: 20 Apr 2023 | Updated: 20 Apr 2023

Discover how to speed up your image annotation process using Grounding DINO and Segment Anything Model (SAM). Learn how to convert object detection datasets into instance segmentation datasets, and see the potential of using these models to automatically annotate your datasets for real-time detectors like YOLOv8...


SAM - Segment Anything Model by Meta AI: Complete Guide

Created: 11 Apr 2023 | Updated: 11 Apr 2023

Discover the incredible potential of Meta AI's Segment Anything Model (SAM)! We dive into SAM, an efficient and promptable model for image segmentation, which has revolutionized computer vision tasks. With over 1 billion masks on 11M licensed and privacy-respecting images, SAM's zero-shot performance is often competitive with or even superior to prior fully supervised results...

📚 documentation

Visit our documentation page to learn how supervision can help you build computer vision applications faster and more reliably.

🏆 contribution

We love your input! Please see our contributing guide to get started. Thank you 🙏 to all our contributors!


Owner

  • Name: Kamran Khan
  • Login: kamrankhan361k
  • Kind: user
  • Location: Peshawar, Pakistan
  • Company: Khyber coded

Hey there! 👋 I'm Kamran, a passionate software engineer focused on creating exceptional web and mobile applications powered by cutting-edge AI technologies.

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: Supervision
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Roboflow
    email: support@roboflow.com
repository-code: 'https://github.com/roboflow/supervision'
url: 'https://roboflow.github.io/supervision/'
abstract: >-
  supervision features a range of utilities for use in
  computer vision projects, from detections processing and
  filtering to confusion matrix calculation.
keywords:
  - computer vision
  - image processing
  - video processing
license: MIT

GitHub Events

Total
  • Push event: 1
Last Year
  • Push event: 1

Dependencies

.github/workflows/docs.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/publish-test.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • pypa/gh-action-pypi-publish release/v1 composite
.github/workflows/publish.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • pypa/gh-action-pypi-publish release/v1 composite
.github/workflows/test.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/welcome.yml actions
  • actions/first-interaction v1.1.1 composite
examples/tracking/requirements.txt pypi
  • supervision *
  • tqdm *
  • ultralytics *
poetry.lock pypi
  • 160 dependencies
pyproject.toml pypi
  • black ^23.7.0 develop
  • build ^0.10.0 develop
  • flake8 * develop
  • isort ^5.12.0 develop
  • mypy ^1.4.1 develop
  • notebook ^6.5.3 develop
  • pre-commit ^3.3.3 develop
  • pytest ^7.2.2 develop
  • ruff ^0.0.280 develop
  • twine ^4.0.2 develop
  • wheel ^0.40.0 develop
  • mkdocs-material ^9.1.4 docs
  • mkdocstrings ^0.20.0 docs
  • matplotlib ^3.7.1
  • numpy ^1.20.0
  • opencv-python ^4.8.0.74
  • opencv-python-headless ^4.8.0.74
  • pillow ^9.4.0
  • python >=3.8,<3.12.0
  • pyyaml ^6.0
  • scipy ^1.9.0