Recent Releases of supervision

supervision - supervision-0.26.1

🔧 Fixed

  • Fixed error in sv.MeanAveragePrecision where the area used for size-specific evaluation (small / medium / large) was always zero unless explicitly provided in sv.Detections.data. (https://github.com/roboflow/supervision/pull/1894)
  • Fixed ID=0 bug in sv.MeanAveragePrecision where objects were getting 0.0 mAP despite perfect IoU matches due to a bug in annotation ID assignment. (https://github.com/roboflow/supervision/pull/1895)
  • Fixed issue where sv.MeanAveragePrecision could return negative values when certain object size categories have no data. (https://github.com/roboflow/supervision/pull/1898)
  • Fixed match_metric support for sv.Detections.with_nms. (https://github.com/roboflow/supervision/pull/1901)
  • Fixed border_thickness parameter usage for sv.PercentageBarAnnotator. (https://github.com/roboflow/supervision/pull/1906)
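
For context, the small / medium / large buckets used in size-specific evaluation follow the COCO convention of splitting objects by pixel area at 32² and 96². An illustrative helper (not part of supervision's API) shows the bucketing:

```python
def coco_size_bucket(area: float) -> str:
    """Bucket an object by pixel area, following the COCO convention."""
    if area < 32 ** 2:    # under 1024 px^2
        return "small"
    if area < 96 ** 2:    # 1024 to 9216 px^2
        return "medium"
    return "large"

# The area of an xyxy box is simply width * height:
x_min, y_min, x_max, y_max = 100, 100, 140, 160
print(coco_size_bucket((x_max - x_min) * (y_max - y_min)))  # -> medium (2400 px^2)
```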

๐Ÿ† Contributors

@balthazur (Balthasar Huber), @onuralpszr (Onuralp SEZER), @rafaelpadilla (Rafael Padilla), @soumik12345 (Soumik Rakshit), @SkalskiP (Piotr Skalski)

- Python
Published by soumik12345 7 months ago

supervision - supervision-0.26.0

[!WARNING]
supervision-0.26.0 drops Python 3.8 support and upgrades the codebase to Python 3.9 syntax.

[!TIP] Our docs page now has a fresh look that is consistent with the documentation of all Roboflow open-source projects. (#1858)

🚀 Added

  • Added support for creating sv.KeyPoints objects from ViTPose and ViTPose++ inference results via sv.KeyPoints.from_transformers. (#1788)

    https://github.com/user-attachments/assets/f1917032-29d8-4b88-b871-65c2e28a756e

  • Added support for the IOS (Intersection over Smallest) overlap metric that measures how much of the smaller object is covered by the larger one in sv.Detections.with_nms, sv.Detections.with_nmm, sv.box_iou_batch, and sv.mask_iou_batch. (#1774)

    ```python
    import numpy as np
    import supervision as sv

    boxes_true = np.array([
        [100, 100, 200, 200],
        [300, 300, 400, 400]
    ])
    boxes_detection = np.array([
        [150, 150, 250, 250],
        [320, 320, 420, 420]
    ])

    sv.box_iou_batch(
        boxes_true=boxes_true,
        boxes_detection=boxes_detection,
        overlap_metric=sv.OverlapMetric.IOU
    )
    # array([[0.14285714, 0.        ],
    #        [0.        , 0.47058824]])

    sv.box_iou_batch(
        boxes_true=boxes_true,
        boxes_detection=boxes_detection,
        overlap_metric=sv.OverlapMetric.IOS
    )
    # array([[0.25, 0.  ],
    #        [0.  , 0.64]])
    ```

  • Added sv.box_iou that efficiently computes the Intersection over Union (IoU) between two individual bounding boxes. (#1874)

  • Added support for limiting the number of processed frames and displaying a progress bar in sv.process_video. (#1816)

  • Added sv.xyxy_to_xcycarh function to convert bounding box coordinates from (x_min, y_min, x_max, y_max) format to the measurement-space format (center x, center y, aspect ratio, height), where the aspect ratio is width / height. (#1823)

  • Added sv.xyxy_to_xywh function to convert bounding box coordinates from (x_min, y_min, x_max, y_max) format to (x, y, width, height) format. (#1788)
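
The two coordinate conversions above are plain arithmetic; an illustrative pure-Python version of the formulas (not the library implementation) looks like this:

```python
def xyxy_to_xywh(box):
    """(x_min, y_min, x_max, y_max) -> (x, y, width, height)."""
    x_min, y_min, x_max, y_max = box
    return (x_min, y_min, x_max - x_min, y_max - y_min)

def xyxy_to_xcycarh(box):
    """(x_min, y_min, x_max, y_max) -> (center x, center y, aspect ratio, height),
    with aspect ratio = width / height."""
    x_min, y_min, x_max, y_max = box
    width, height = x_max - x_min, y_max - y_min
    return ((x_min + x_max) / 2, (y_min + y_max) / 2, width / height, height)

print(xyxy_to_xywh((100, 100, 200, 250)))     # (100, 100, 100, 150)
print(xyxy_to_xcycarh((100, 100, 200, 250)))  # (150.0, 175.0, 0.666..., 150)
```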

🌱 Changed

  • sv.LabelAnnotator now supports the smart_position parameter to automatically keep labels within frame boundaries, and the max_line_length parameter to control text wrapping for long or multi-line labels. (#1820)

    https://github.com/user-attachments/assets/361c17c7-0810-466d-907d-c752e91bc6f7


  • sv.LabelAnnotator now supports non-string labels. (#1825)

  • sv.Detections.from_vlm now supports parsing bounding boxes and segmentation masks from responses generated by Google Gemini models. You can test Gemini prompting, result parsing, and visualization with Supervision using this example notebook. (#1792)

```python
import supervision as sv

gemini_response_text = """```json
    [
        {"box_2d": [543, 40, 728, 200], "label": "cat", "id": 1},
        {"box_2d": [653, 352, 820, 522], "label": "dog", "id": 2}
    ]
```"""

detections = sv.Detections.from_vlm(
    sv.VLM.GOOGLE_GEMINI_2_5,
    gemini_response_text,
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog'],
)

detections.xyxy
# array([[543., 40., 728., 200.], [653., 352., 820., 522.]])

detections.data
# {'class_name': array(['cat', 'dog'], dtype='<U26')}

detections.class_id
# array([0, 1])
```


  • sv.Detections.from_vlm now supports parsing bounding boxes from responses generated by Moondream. (#1878)

    ```python
    import supervision as sv

    moondream_result = {
        'objects': [
            {'x_min': 0.5704046934843063, 'y_min': 0.20069346576929092,
             'x_max': 0.7049859315156937, 'y_max': 0.3012596592307091},
            {'x_min': 0.6210969910025597, 'y_min': 0.3300672620534897,
             'x_max': 0.8417936339974403, 'y_max': 0.4961046129465103}
        ]
    }

    detections = sv.Detections.from_vlm(
        sv.VLM.MOONDREAM,
        moondream_result,
        resolution_wh=(3072, 4080),
    )

    detections.xyxy
    # array([[1752.28,  818.82, 2165.72, 1229.14],
    #        [1908.01, 1346.67, 2585.99, 2024.11]])
    ```

  • sv.Detections.from_vlm now supports parsing bounding boxes from responses generated by Qwen-2.5 VL. You can test Qwen2.5-VL prompting, result parsing, and visualization with Supervision using this example notebook. (#1709)

    ```python
    import supervision as sv

    qwen_2_5_vl_result = """json
    [
        {"bbox_2d": [139, 768, 315, 954], "label": "cat"},
        {"bbox_2d": [366, 679, 536, 849], "label": "dog"}
    ]
    """

    detections = sv.Detections.from_vlm(
        sv.VLM.QWEN_2_5_VL,
        qwen_2_5_vl_result,
        input_wh=(1000, 1000),
        resolution_wh=(1000, 1000),
        classes=['cat', 'dog'],
    )

    detections.xyxy
    # array([[139., 768., 315., 954.],
    #        [366., 679., 536., 849.]])

    detections.class_id
    # array([0, 1])

    detections.data
    # {'class_name': array(['cat', 'dog'], dtype='<U10')}
    ```

  • Significantly improved the speed of HSV color mapping in sv.HeatMapAnnotator, achieving approximately 28x faster performance on 1920x1080 frames. (#1786)

🔧 Fixed

  • Supervision's sv.MeanAveragePrecision is now fully aligned with pycocotools, the official COCO evaluation tool, ensuring accurate and standardized metrics. (#1834)

    ```python
    import supervision as sv
    from supervision.metrics import MeanAveragePrecision

    predictions = sv.Detections(...)
    targets = sv.Detections(...)

    map_metric = MeanAveragePrecision()
    map_metric.update(predictions, targets).compute()
    # Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.464
    # Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.637
    # Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.203
    # Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.284
    # Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.497
    # Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.629
    ```

[!TIP] The updated mAP implementation enabled us to build an updated version of the Computer Vision Model Leaderboard.


  • Fixed #1767: sv.Detections.data is no longer lost when filtering detections.

⚠️ Deprecated

❌ Removed

  • The sv.DetectionDataset.images property has been removed in supervision-0.26.0. Please loop over images with for path, image, annotation in dataset:, as that does not require loading all images into memory.
  • Constructing sv.DetectionDataset with parameter images as Dict[str, np.ndarray] is deprecated and has been removed in supervision-0.26.0. Please pass a list of paths List[str] instead.
  • The name sv.BoundingBoxAnnotator is deprecated and has been removed in supervision-0.26.0. It has been renamed to sv.BoxAnnotator.

๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @SkalskiP (Piotr Skalski), @SunHao-AI (Hao Sun), @rafaelpadilla (Rafael Padilla), @Ashp116 (Ashp116), @capjamesg (James Gallagher), @blakeburch (Blake Burch), @hidara2000 (hidara2000), @Armaggheddon (Alessandro Brunello), @soumik12345 (Soumik Rakshit).

- Python
Published by soumik12345 7 months ago

supervision - supervision-0.25.0

Supervision 0.25.0 is here! Featuring a more robust LineZone crossing counter, support for tracking KeyPoints, Python 3.13 compatibility, and 3 new metrics: Precision, Recall and Mean Average Recall. The update also includes smart label positioning, improved Oriented Bounding Box support, and refined error handling. Thank you to all contributors - especially those who answered the call of Hacktoberfest!

Changelog

🚀 Added

  • Essential update to the LineZone: when computing line crossings, detections that jitter might be counted twice (or more!). This can now be solved with the minimum_crossing_threshold argument. If you set it to 2 or more, extra frames will be used to confirm the crossing, improving the accuracy significantly. (#1540)

https://github.com/user-attachments/assets/89ca2ee6-93c9-41e6-a432-e16c4c69c695

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8m-pose.pt")
tracker = sv.ByteTrack()
trace_annotator = sv.TraceAnnotator()

def callback(frame: np.ndarray, _: int) -> np.ndarray:
    results = model(frame)[0]
    key_points = sv.KeyPoints.from_ultralytics(results)

    detections = key_points.as_detections()
    detections = tracker.update_with_detections(detections)

    annotated_image = trace_annotator.annotate(frame.copy(), detections)
    return annotated_image

sv.process_video(
    source_path="input_video.mp4",
    target_path="output_video.mp4",
    callback=callback
)
```

https://github.com/user-attachments/assets/4c3bdf54-391e-4633-9164-f15878ddfb33

See the guide for the full code used to make the video
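
Conceptually, minimum_crossing_threshold acts as a debounce: a change of side only counts once it has persisted for that many consecutive frames. A toy sketch of the idea (illustrative only, not the actual LineZone implementation):

```python
def count_crossings(sides, minimum_crossing_threshold=1):
    """Count side changes that persist for `minimum_crossing_threshold` frames.

    `sides` is a per-frame sequence of which side of the line the object is on
    ("A" or "B"). Jitter shorter than the threshold is ignored.
    """
    crossings = 0
    confirmed = sides[0]
    candidate, streak = None, 0
    for side in sides[1:]:
        if side != confirmed:
            if side == candidate:
                streak += 1
            else:
                candidate, streak = side, 1
            if streak >= minimum_crossing_threshold:
                confirmed, candidate, streak = side, None, 0
                crossings += 1
        else:
            candidate, streak = None, 0
    return crossings

jittery = ["A", "B", "A", "B", "B", "B"]
print(count_crossings(jittery, minimum_crossing_threshold=1))  # 3: every jitter counted
print(count_crossings(jittery, minimum_crossing_threshold=2))  # 1: jitter ignored
```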

  • Added is_empty method to KeyPoints to check if there are any keypoints in the object. (#1658)

  • Added as_detections method to KeyPoints that converts KeyPoints to Detections. (#1658)

  • Added a new video to supervision[assets]. (#1657)

```python
from supervision.assets import download_assets, VideoAssets

path_to_video = download_assets(VideoAssets.SKIING)
```

  • Supervision can now be used with Python 3.13. The most renowned update is the ability to run Python without Global Interpreter Lock (GIL). We expect support for this among our dependencies to be inconsistent, but if you do attempt it - let us know the results! (#1595)


  • Added the Mean Average Recall (mAR) metric, which returns a recall score averaged over IoU thresholds, detected object classes, and limits imposed on the maximum number of considered detections. (#1661)

```python
import supervision as sv
from supervision.metrics import MeanAverageRecall

predictions = sv.Detections(...)
targets = sv.Detections(...)

map_metric = MeanAverageRecall()
map_result = map_metric.update(predictions, targets).compute()

map_result.plot()
```


  • Added Precision and Recall metrics, providing a baseline for comparing model outputs to ground truth or another model (#1609)

```python
import supervision as sv
from supervision.metrics import Recall

predictions = sv.Detections(...)
targets = sv.Detections(...)

recall_metric = Recall()
recall_result = recall_metric.update(predictions, targets).compute()

recall_result.plot()
```


  • All Metrics now support Oriented Bounding Boxes (OBB) (#1593)

```python
import supervision as sv
from supervision.metrics import F1Score

predictions = sv.Detections(...)
targets = sv.Detections(...)

f1_metric = F1Score(metric_target=sv.MetricTarget.ORIENTED_BOUNDING_BOXES)
f1_result = f1_metric.update(predictions, targets).compute()
```

OBB example

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread("image.jpg")

label_annotator = sv.LabelAnnotator(smart_position=True)

model = YOLO("yolo11m.pt")
results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)

annotated_frame = label_annotator.annotate(image.copy(), detections)
sv.plot_image(annotated_frame)
```

https://github.com/user-attachments/assets/ef768db4-867d-4305-b905-80e690bb1ea7

  • Added the metadata variable to Detections. It allows you to store custom data per-image, rather than per-detected-object as was possible with data variable. For example, metadata could be used to store the source video path, camera model or camera parameters. (#1589)

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8m")

result = model("image.png")[0]
detections = sv.Detections.from_ultralytics(result)

# Items in data must match the length of detections.
object_ids = [num for num in range(len(detections))]
detections.data["object_number"] = object_ids

# Items in metadata can be of any length.
detections.metadata["camera_model"] = "Luxonis OAK-D"
```

  • Added a py.typed type hints metafile. It should provide a stronger signal to type annotators and IDEs that type support is available. (#1586)

🌱 Changed

  • ByteTrack no longer requires detections to have a class_id (#1637)
  • draw_line, draw_rectangle, draw_filled_rectangle, draw_polygon, draw_filled_polygon and PolygonZoneAnnotator now come with a default color. (#1591)
  • Dataset classes are treated as case-sensitive when merging multiple datasets. (#1643)
  • Expanded metrics documentation with example plots and printed results (#1660)
  • Added usage example for polygon zone (#1608)
  • Small improvements to error handling in polygons. (#1602)

🔧 Fixed

  • Updated ByteTrack, removing shared variables. Previously, multiple instances of ByteTrack would share some data, requiring liberal use of tracker.reset(). (#1603), (#1528)
  • Fixed a bug where the class_agnostic setting in MeanAveragePrecision would not work. (#1577)
  • Removed welcome workflow from our CI system. (#1596)

✅ No removals or deprecations this time!

⚙️ Internal Changes

  • Large refactor of ByteTrack (#1603)
    • STrack moved to separate class
    • Remove superfluous BaseTrack class
    • Removed unused variables
  • Large refactor of RichLabelAnnotator, matching its contents with LabelAnnotator. (#1625)

๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @kshitijaucharmal (KshitijAucharmal), @grzegorz-roboflow (Grzegorz Klimaszewski), @Kadermiyanyedi (Kader Miyanyedi), @PrakharJain1509 (Prakhar Jain), @DivyaVijay1234 (Divya Vijay), @souhhmm (Soham Kalburgi), @joaomarcoscrs (Joรฃo Marcos Cardoso Ramos da Silva), @AHuzail (Ahmad Huzail Khan), @DemyCode (DemyCode), @ablazejuk (Andrey Blazejuk), @LinasKo (Linas Kondrackis)

A special thanks goes out to everyone who joined us for Hacktoberfest! We hope it was a rewarding experience and look forward to seeing you continue contributing and growing with our community. Keep building, keep innovating; your efforts make a difference! 🚀

- Python
Published by LinasKo over 1 year ago

supervision - supervision-0.24.0

Supervision 0.24.0 is here! We've added many new changes, including the F1 score, enhancements to LineZone, EasyOCR support, NCNN support, and the best Cookbook to date! You can also try out our annotators directly in the browser. Check out the release notes to find out more!

📢 Announcements


  • Supervision is celebrating Hacktoberfest! Whether you're a newcomer to open source or a veteran contributor, we welcome you to join us in improving supervision. You can grab any issue without an assigned contributor: Hacktoberfest Issues Board. We'll be adding many more issues next week! 🎉

  • We recently launched the Model Leaderboard. Come check how the latest models perform! It is also open-source, so you can contribute to it as well! 🚀

Changelog

🚀 Added

  • Added F1 score as a new metric for detection and segmentation. The F1 score balances precision and recall, providing a single metric for model evaluation. #1521

```python
import supervision as sv
from supervision.metrics import F1Score

predictions = sv.Detections(...)
targets = sv.Detections(...)

f1_metric = F1Score()
f1_result = f1_metric.update(predictions, targets).compute()

print(f1_result)
print(f1_result.f1_50)
print(f1_result.small_objects.f1_50)
```

SAHI principle Inference Slicer in action

  • You can now try supervision annotators on your own images. Check out the annotator docs. The preview is powered by an Embedded Workflow. Thank you @joaomarcoscrs! #1533

Embedded workflow example

  • Enhanced LineZoneAnnotator, allowing the labels to align with the line, even when it's not horizontal. Also, you can now disable text background, and choose to draw labels off-center which minimizes overlaps for multiple LineZone labels. Thank you @jcruz-ferreyra! #854

```python
import cv2
import supervision as sv

image = cv2.imread("")

line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(50, 200)
)
line_zone_annotator = sv.LineZoneAnnotator(
    text_orient_to_line=True,
    display_text_box=False,
    text_centered=False
)

annotated_frame = line_zone_annotator.annotate(
    frame=image.copy(), line_counter=line_zone
)

sv.plot_image(annotated_frame)
```

https://github.com/user-attachments/assets/d7694b81-26ca-4236-bc66-af3d9e79d367

  • Added per-class counting capabilities to LineZone and introduced LineZoneAnnotatorMulticlass for visualizing the counts per class. This feature allows tracking of individual classes crossing a line, enhancing the flexibility of use cases like traffic monitoring or crowd analysis. #1555

```python
import cv2
import supervision as sv

image = cv2.imread("")

line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(50, 200)
)
line_zone_annotator = sv.LineZoneAnnotatorMulticlass()

annotated_frame = line_zone_annotator.annotate(
    frame=image.copy(), line_zones=[line_zone]
)

sv.plot_image(annotated_frame)
```

https://github.com/user-attachments/assets/b109f5bd-6ae7-473b-b4e8-910a869736b4

  • Added from_easyocr, allowing integration of OCR results into the supervision framework. EasyOCR is an open-source optical character recognition (OCR) library that can read text from images. Thank you @onuralpszr! #1515

```python
import cv2
import easyocr
import supervision as sv

image = cv2.imread("")

reader = easyocr.Reader(["en"])
result = reader.readtext("", paragraph=True)
detections = sv.Detections.from_easyocr(result)

box_annotator = sv.BoxAnnotator(color_lookup=sv.ColorLookup.INDEX)
label_annotator = sv.LabelAnnotator(color_lookup=sv.ColorLookup.INDEX)

annotated_image = image.copy()
annotated_image = box_annotator.annotate(scene=annotated_image, detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)

sv.plot_image(annotated_image)
```

EasyOCR example

  • Added oriented_box_iou_batch function to detection.utils. This function computes Intersection over Union (IoU) for oriented or rotated bounding boxes (OBB), making it easier to evaluate detections with non-axis-aligned boxes. Thank you @patel-zeel! #1502

```python
import numpy as np
import supervision as sv

boxes_true = np.array([[[1, 0], [0, 1], [3, 4], [4, 3]]])
boxes_detection = np.array([[[1, 1], [2, 0], [4, 2], [3, 3]]])
ious = sv.oriented_box_iou_batch(boxes_true, boxes_detection)
print("IoU between true and detected boxes:", ious)
```

Note: the IoU is approximated as mask IoU.
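
That mask-based approximation can be illustrated by rasterizing both quadrilaterals onto a sampling grid and comparing the resulting masks (a toy sketch of the idea, far slower than supervision's vectorized implementation):

```python
def point_in_convex_quad(px, py, quad):
    """True if (px, py) lies inside a convex quadrilateral of four (x, y) vertices."""
    sign = 0
    for i in range(4):
        x1, y1 = quad[i]
        x2, y2 = quad[(i + 1) % 4]
        cross = (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)
        if cross != 0:
            if sign == 0:
                sign = 1 if cross > 0 else -1
            elif (cross > 0) != (sign > 0):
                return False
    return True

def approx_obb_iou(quad_a, quad_b, step=0.05):
    """Approximate IoU of two convex quads via grid-sampled masks."""
    xs = [x for x, _ in quad_a + quad_b]
    ys = [y for _, y in quad_a + quad_b]
    inter = union = 0
    x = min(xs)
    while x < max(xs):
        y = min(ys)
        while y < max(ys):
            in_a = point_in_convex_quad(x, y, quad_a)
            in_b = point_in_convex_quad(x, y, quad_b)
            inter += in_a and in_b
            union += in_a or in_b
            y += step
        x += step
    return inter / union if union else 0.0

# Two axis-aligned squares given as quads, overlapping in a 2x4 region:
sq_a = [(0, 0), (4, 0), (4, 4), (0, 4)]
sq_b = [(2, 0), (6, 0), (6, 4), (2, 4)]
print(approx_obb_iou(sq_a, sq_b))  # close to 8 / 24 = 0.333...
```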

  • Extended PolygonZoneAnnotator to allow setting opacity when drawing zones, providing enhanced visualization by filling the zone with adjustable transparency. Thank you @grzegorz-roboflow! #1527

  • Added from_ncnn, a connector for NCNN, a powerful object detection framework from Tencent, written from the ground up in C++ with no third-party dependencies. Thank you @onuralpszr! #1524

```python
import cv2
from ncnn.model_zoo import get_model
import supervision as sv

image = cv2.imread("")
model = get_model(
    "yolov8s",
    target_size=640,
    prob_threshold=0.5,
    nms_threshold=0.45,
    num_threads=4,
    use_gpu=True,
)
result = model(image)
detections = sv.Detections.from_ncnn(result)
```

🌱 Changed

  • Supervision now depends on opencv-python rather than opencv-python-headless. #1530

  • Fixed broken or outdated links in documentation and notebooks, improving navigation and ensuring accuracy of references. Thanks to @capjamesg for identifying these issues. #1523

  • Enabled and fixed Ruff rules for code formatting, including changes like avoiding unnecessary iterable allocations and using Optional for default mutable arguments. #1526

🔧 Fixed

  • Updated the COCO 101 point Average Precision algorithm to correctly interpolate precision, providing a more precise calculation of average precision without averaging out intermediate values. #1500

  • Resolved miscellaneous issues highlighted when building documentation. This mostly includes whitespace adjustments and type inconsistencies. Updated documentation for clarity and fixed formatting issues. Added explicit version for mkdocstrings-python. #1549

  • Clarified documentation around the overlap_ratio_wh argument deprecation in InferenceSlicer. #1547

✅ No deprecations this time!

❌ Removed

  • The frame_resolution_wh parameter in PolygonZone has been removed due to deprecation.
  • The "headless" and "desktop" installation extras were removed, as they are no longer needed. pip install supervision[headless] will install the base library and warn of the non-existent extra.

๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @joaomarcoscrs (Joรฃo Marcos Cardoso Ramos da Silva), @jcruz-ferreyra (Juan Cruz), @patel-zeel (Zeel B Patel), @grzegorz-roboflow (Grzegorz Klimaszewski), @Kadermiyanyedi (Kader Miyanyedi), @ediardo (Eddie Ramirez), @CharlesCNorton, @ethanwhite (Ethan White), @josephofiowa (Joseph Nelson), @tibeoh (Thibault Itart-Longueville), @SkalskiP (Piotr Skalski), @LinasKo (Linas Kondrackis)

Thank you to Pexels for providing fantastic images and videos!

- Python
Published by LinasKo over 1 year ago

supervision - supervision-0.23.0

🚀 Added

https://github.com/user-attachments/assets/c1f3ce11-08c1-4648-9176-4e7920b91a8a

(video by Pexels)

  • We're introducing metrics, which currently support xyxy boxes and masks. Over the next few releases, supervision will focus on adding more metrics, allowing you to evaluate your model performance. We plan to support not just boxes and masks, but oriented bounding boxes as well! #1442

[!TIP] Help in implementing metrics is very welcome! Keep an eye on our issue board if you'd like to contribute!

```python
import supervision as sv
from supervision.metrics import MeanAveragePrecision

predictions = sv.Detections(...)
targets = sv.Detections(...)

map_metric = MeanAveragePrecision()
map_result = map_metric.update(predictions, targets).compute()

print(map_result)
print(map_result.map50_95)
print(map_result.large_objects.map50_95)
map_result.plot()
```

Here's a very basic way to compare model results:

📊 Example code

```python
import supervision as sv
from supervision.metrics import MeanAveragePrecision
from inference import get_model
import matplotlib.pyplot as plt

# !wget https://media.roboflow.com/notebooks/examples/dog.jpeg
image = "dog.jpeg"

model_1 = get_model("yolov8n-640")
model_2 = get_model("yolov8s-640")
model_3 = get_model("yolov8m-640")
model_4 = get_model("yolov8l-640")

results_1 = model_1.infer(image)[0]
results_2 = model_2.infer(image)[0]
results_3 = model_3.infer(image)[0]
results_4 = model_4.infer(image)[0]

detections_1 = sv.Detections.from_inference(results_1)
detections_2 = sv.Detections.from_inference(results_2)
detections_3 = sv.Detections.from_inference(results_3)
detections_4 = sv.Detections.from_inference(results_4)

map_n_metric = MeanAveragePrecision().update([detections_1], [detections_4]).compute()
map_s_metric = MeanAveragePrecision().update([detections_2], [detections_4]).compute()
map_m_metric = MeanAveragePrecision().update([detections_3], [detections_4]).compute()

labels = ["YOLOv8n", "YOLOv8s", "YOLOv8m"]
map_values = [map_n_metric.map50_95, map_s_metric.map50_95, map_m_metric.map50_95]

plt.title("YOLOv8 Model Comparison")
plt.bar(labels, map_values)
ax = plt.gca()
ax.set_ylim([0, 1])
plt.show()
```


  • Added the IconAnnotator, which allows you to place icons on your images. #930

https://github.com/user-attachments/assets/ff80acf5-67f2-4c20-a3fe-b63cac07ae31

(Video by Pexels, icons by Icons8)

```python
import supervision as sv
from inference import get_model

image = ...
icon_dog = "<DOG_PNG_PATH>"
icon_cat = ...

model = get_model(model_id="yolov8n-640")
results = model.infer(image)[0]
detections = sv.Detections.from_inference(results)

icon_paths = []
for class_name in detections.data["class_name"]:
    if class_name == "dog":
        icon_paths.append(icon_dog)
    elif class_name == "cat":
        icon_paths.append(icon_cat)
    else:
        icon_paths.append("")

icon_annotator = sv.IconAnnotator()
annotated_frame = icon_annotator.annotate(
    scene=image.copy(), detections=detections, icon_path=icon_paths
)
```

  • Segment Anything 2 was released this month. And while you can load its results via from_sam, we've added support to from_ultralytics for loading the results if you ran it with Ultralytics. #1354

```python
import cv2
import supervision as sv
from ultralytics import SAM

image = cv2.imread("...")

model = SAM("mobile_sam.pt")
results = model(image, bboxes=[[588, 163, 643, 220]])
detections = sv.Detections.from_ultralytics(results[0])

polygon_annotator = sv.PolygonAnnotator()
mask_annotator = sv.MaskAnnotator()

annotated_image = mask_annotator.annotate(image.copy(), detections)
annotated_image = polygon_annotator.annotate(annotated_image, detections)

sv.plot_image(annotated_image, (12, 12))
```

SAM2 with our annotators:

https://github.com/user-attachments/assets/6a98d651-2596-43e9-b485-ea6f0de4fffa

🌱 Changed

  • Updated sv.Detections.from_transformers to support the transformers v5 functions. This includes the DetrImageProcessor methods post_process_object_detection, post_process_panoptic_segmentation, post_process_semantic_segmentation, and post_process_instance_segmentation. #1386
  • InferenceSlicer now features an overlap_ratio_wh parameter, making it easier to compute slice sizes when handling overlapping slices. #1434

```python
import cv2
import numpy as np
import supervision as sv
from inference import get_model

image_with_small_objects = cv2.imread("...")
model = get_model("yolov8n-640")

def callback(image_slice: np.ndarray) -> sv.Detections:
    print("image_slice.shape:", image_slice.shape)
    result = model.infer(image_slice)[0]
    return sv.Detections.from_inference(result)

slicer = sv.InferenceSlicer(
    callback=callback,
    slice_wh=(128, 128),
    overlap_ratio_wh=(0.2, 0.2),
)

detections = slicer(image_with_small_objects)
```

🛠️ Fixed

  • Annotator type fixes #1448
  • New way of seeking to a specific video frame, where other methods don't work #1348
  • plot_image now clearly states the size is in inches. #1424

⚠️ Deprecated

  • overlap_filter_strategy in InferenceSlicer.__init__ is deprecated and will be removed in supervision-0.27.0. Use overlap_strategy instead.
  • overlap_ratio_wh in InferenceSlicer.__init__ is deprecated and will be removed in supervision-0.27.0. Use overlap_wh instead.
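
Migrating from overlap_ratio_wh to overlap_wh is a matter of scaling the ratio by the slice size (assuming, as the name suggests, that overlap_wh is expressed in pixels); a small illustrative helper:

```python
def overlap_ratio_to_wh(slice_wh, overlap_ratio_wh):
    """Convert a fractional slice overlap into the pixel overlap
    expected by the newer overlap_wh argument."""
    slice_w, slice_h = slice_wh
    ratio_w, ratio_h = overlap_ratio_wh
    return (int(slice_w * ratio_w), int(slice_h * ratio_h))

# overlap_ratio_wh=(0.2, 0.2) on 640x640 slices becomes overlap_wh=(128, 128):
print(overlap_ratio_to_wh((640, 640), (0.2, 0.2)))  # (128, 128)
```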

โŒ Removed

  • The track_buffer, track_thresh, and match_thresh parameters in ByteTrack are deprecated and were removed as of supervision-0.23.0. Use lost_track_buffer, track_activation_threshold, and minimum_matching_threshold instead.
  • The triggering_position parameter in sv.PolygonZone was removed as of supervision-0.23.0. Use triggering_anchors instead.

๐Ÿ† Contributors

@shaddu, @onuralpszr (Onuralp SEZER), @Kadermiyanyedi (Kader Miyanyedi), @xaristeidou (Christoforos Aristeidou), @Gk-rohan (Rohan Gupta), @Bhavay-2001 (Bhavay Malhotra), @arthurcerveira (Arthur Cerveira), @J4BEZ (Ju Hoon Park), @venkatram-dev, @eric220, @capjamesg (James), @yeldarby (Brad Dwyer), @SkalskiP (Piotr Skalski), @LinasKo (LinasKo)

- Python
Published by LinasKo over 1 year ago

supervision - supervision-0.22.0

🚀 Added

supervision cheatsheet

```python
import numpy as np
import mediapipe as mp
import supervision as sv
from PIL import Image

model = mp.solutions.face_mesh.FaceMesh()

edge_annotator = sv.EdgeAnnotator(color=sv.Color.BLACK, thickness=2)

image = Image.open("...").convert('RGB')
results = model.process(np.array(image))
key_points = sv.KeyPoints.from_mediapipe(results, resolution_wh=image.size)

annotated_image = edge_annotator.annotate(scene=image, key_points=key_points)
```

https://github.com/user-attachments/assets/883a6bcc-5e39-41b0-9b6d-0348b5b2fe0e

https://github.com/user-attachments/assets/de60eeb4-1259-421b-af66-f622a15988ea

🌱 Changed

```python
import roboflow
from roboflow import Roboflow
import supervision as sv

roboflow.login()
rf = Roboflow()

project = rf.workspace().project()
dataset = project.version().download("coco")

ds_train = sv.DetectionDataset.from_coco(
    images_directory_path=f"{dataset.location}/train",
    annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)

path, image, annotation = ds_train[0]  # loads image on demand

for path, image, annotation in ds_train:  # loads image on demand
    ...
```

florence-2-result

🛠️ Fixed

🧑‍🍳 Cookbooks

This release, @onuralpszr added two new Cookbooks to our collection. Check them out to learn how to save Detections to a file and convert it back to Detections!

๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @David-rn (David Redรณ), @jeslinpjames (Jeslin P James), @Bhavay-2001 (Bhavay Malhotra), @hardikdava (Hardik Dava), @kirilman, @dsaha21 (Dripto Saha), @cdragos (Dragos Catarahia), @mqasim41 (Muhammad Qasim), @SkalskiP (Piotr Skalski), @LinasKo (Linas Kondrackis)

Special thanks to @rolson24 (Raif Olson) for helping the community with ByteTrack!

- Python
Published by LinasKo over 1 year ago

supervision - supervision-0.21.0

📅 Timeline

The supervision-0.21.0 release is around the corner. Here is the timeline:

  • 5 Jun 2024 08:00 PM CEST (UTC +2) / 5 Jun 2024 11:00 AM PDT (UTC -7) - merge develop into main - closing list supervision-0.21.0 features
  • 6 Jun 2024 11:00 AM CEST (UTC +2) / 6 Jun 2024 02:00 AM PDT (UTC -7) - release supervision-0.21.0

🪵 Changelog

🚀 Added

non-max-merging

```python
import supervision as sv

paligemma_result = "<loc0256><loc0256><loc0768><loc0768> cat"
detections = sv.Detections.from_lmm(
    sv.LMM.PALIGEMMA,
    paligemma_result,
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog']
)
detections.xyxy
# array([[250., 250., 750., 750.]])

detections.class_id
# array([0])
```

```python
import supervision as sv

image = ...
key_points = sv.KeyPoints(...)

LABELS = [
    "nose", "left eye", "right eye", "left ear", "right ear",
    "left shoulder", "right shoulder", "left elbow", "right elbow",
    "left wrist", "right wrist", "left hip", "right hip",
    "left knee", "right knee", "left ankle", "right ankle"
]

COLORS = [
    "#FF6347", "#FF6347", "#FF6347", "#FF6347", "#FF6347",
    "#FF1493", "#00FF00", "#FF1493", "#00FF00", "#FF1493",
    "#00FF00", "#FFD700", "#00BFFF", "#FFD700", "#00BFFF",
    "#FFD700", "#00BFFF"
]
COLORS = [sv.Color.from_hex(color_hex=c) for c in COLORS]

vertex_label_annotator = sv.VertexLabelAnnotator(
    color=COLORS,
    text_color=sv.Color.BLACK,
    border_radius=5
)
annotated_frame = vertex_label_annotator.annotate(
    scene=image.copy(),
    key_points=key_points,
    labels=LABELS
)
```


mask-to-rle (1)

🌱 Changed

```python
import cv2
import numpy as np
import supervision as sv
from inference import get_model

model = get_model(model_id="yolov8x-seg-640")
image = cv2.imread("...")

def callback(image_slice: np.ndarray) -> sv.Detections:
    results = model.infer(image_slice)[0]
    return sv.Detections.from_inference(results)

slicer = sv.InferenceSlicer(callback=callback)
detections = slicer(image)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = mask_annotator.annotate(scene=image, detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)
```

inference-slicer-segmentation-example


๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @LinasKo (Linas Kondrackis), @rolson24 (Raif Olson), @mario-dg (Mario da Graca), @xaristeidou (Christoforos Aristeidou), @ManzarIMalik (Manzar Iqbal Malik), @tc360950 (Tomasz Cฤ…kaล‚a), @emSko, @SkalskiP (Piotr Skalski)

- Python
Published by SkalskiP over 1 year ago

supervision - supervision-0.20.0

🚀 Added

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread("...")
model = YOLO('yolov8l-pose')

result = model(image, verbose=False)[0]
key_points = sv.KeyPoints.from_ultralytics(result)

edge_annotator = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
annotated_image = edge_annotator.annotate(image.copy(), key_points)
```

edge-annotator-example

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO('yolov8l-pose')

result = model(image, verbose=False)[0]
key_points = sv.KeyPoints.from_ultralytics(result)

vertex_annotator = sv.VertexAnnotator(color=sv.Color.GREEN, radius=10)
annotated_image = vertex_annotator.annotate(image.copy(), key_points)
```

vertex-annotator-example

๐ŸŒฑ Changed

  • sv.LabelAnnotator by adding an additional corner_radius argument that allows for rounding the corners of the bounding box. (#1037)

  • sv.PolygonZone such that the frame_resolution_wh argument is no longer required to initialize sv.PolygonZone. (#1109)

[!WARNING]
The frame_resolution_wh parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.24.0.

```python
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

image = Image.open(<SOURCE_IMAGE_PATH>)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
    outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)
```

๐Ÿ› ๏ธ Fixed

๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @rolson24 (Raif Olson), @xaristeidou (Christoforos Aristeidou), @jeslinpjames (Jeslin P James), @Griffin-Sullivan (Griffin Sullivan), @PawelPeczek-Roboflow (Paweล‚ Pฤ™czek), @pirnerjonas (Jonas Pirner), @sharingan000, @macc-n, @LinasKo (Linas Kondrackis), @SkalskiP (Piotr Skalski)

- Python
Published by SkalskiP almost 2 years ago

supervision - supervision-0.19.0

๐Ÿง‘โ€๐Ÿณ Cookbooks

Supervision Cookbooks - A curated open-source collection crafted by the community, offering practical examples, comprehensive guides, and walkthroughs for leveraging Supervision alongside diverse Computer Vision models. (#860)

๐Ÿš€ Added

  • sv.CSVSink allowing for the straightforward saving of image, video, or stream inference results in a .csv file. (#818)

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
csv_sink = sv.CSVSink(<RESULT_CSV_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with csv_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        csv_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```

https://github.com/roboflow/supervision/assets/26109316/621588f9-69a0-44fe-8aab-ab4b0ef2ea1b

  • sv.JSONSink allowing for the straightforward saving of image, video, or stream inference results in a .json file. (#819)

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(<SOURCE_MODEL_PATH>)
json_sink = sv.JSONSink(<RESULT_JSON_FILE_PATH>)
frames_generator = sv.get_video_frames_generator(<SOURCE_VIDEO_PATH>)

with json_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        json_sink.append(detections, custom_data={<CUSTOM_LABEL>: <CUSTOM_DATA>})
```

```python
import cv2
import supervision as sv
from inference import get_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_model(model_id="yolov8n-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

https://github.com/roboflow/supervision/assets/26109316/0a5b67ce-55e7-4e26-9495-a68f9ad97ec7

๐ŸŒฑ Changed

  • sv.ByteTrack.reset allowing users to clear trackers state, enabling the processing of multiple video files in sequence. (#827)
  • sv.LineZoneAnnotator allowing to hide in/out count using display_in_count and display_out_count properties. (#802)
  • sv.ByteTrack input arguments and docstrings updated to improve readability and ease of use. (#787)

[!WARNING]
The track_buffer, track_thresh, and match_thresh parameters in sv.ByteTrack are deprecated and will be removed in supervision-0.23.0. Use lost_track_buffer, track_activation_threshold, and minimum_matching_threshold instead.

  • sv.PolygonZone to now accept a list of specific box anchors that must be in zone for a detection to be counted. (#910)

[!WARNING]
The triggering_position parameter in sv.PolygonZone is deprecated and will be removed in supervision-0.23.0. Use triggering_anchors instead.

  • Annotators adding support for Pillow images. All supervision Annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input. (#875)

๐Ÿ› ๏ธ Fixed

๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @LinasKo (Linas Kondrackis), @LeviVasconcelos (Levi Vasconcelos), @AdonaiVera (Adonai Vera), @xaristeidou (Christoforos Aristeidou), @Kadermiyanyedi (Kader Miyanyedi), @NickHerrig (Nick Herrig), @PacificDou (Shuyang Dou), @iamhatesz (Tomasz Wrona), @capjamesg (James Gallagher), @sansyo, @SkalskiP (Piotr Skalski)

- Python
Published by SkalskiP almost 2 years ago

supervision - supervision-0.18.0

๐Ÿš€ Added

  • sv.PercentageBarAnnotator allowing to annotate images and videos with percentage values representing confidence or other custom property. (#720)

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

percentage_bar_annotator = sv.PercentageBarAnnotator()
annotated_frame = percentage_bar_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

percentage-bar-annotator-example-purple

https://github.com/roboflow/supervision/assets/26109316/4dd703ad-ffba-492b-97ff-1be84e237e83

```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = YOLO("yolov8n-obb.pt")

result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

oriented-box-annotator

```python
import supervision as sv

sv.ColorPalette.from_matplotlib('viridis', 5)
# ColorPalette(colors=[Color(r=68, g=1, b=84), Color(r=59, g=82, b=139), ...])
```

visualized_color_palette

๐ŸŒฑ Changed

  • sv.Detections.from_ultralytics adding support for OBB (Oriented Bounding Boxes). (#770)
  • sv.LineZone to now accept a list of specific box anchors that must cross the line for a detection to be counted. This update marks a significant improvement from the previous requirement, where all four box corners were necessary. Users can now specify a single anchor, such as sv.Position.BOTTOM_CENTER, or any other combination of anchors defined as List[sv.Position]. (#735)
  • sv.Detections to support custom payload. (#700)
  • sv.Color's and sv.ColorPalette's method of accessing predefined colors, transitioning from a function-based approach (sv.Color.red()) to a more intuitive and conventional property-based method (sv.Color.RED). (#756) (#769)

[!WARNING]
sv.ColorPalette.default() is deprecated and will be removed in supervision-0.21.0. Use sv.ColorPalette.DEFAULT instead.

default-color-palette

[!WARNING]
Detections.from_roboflow() is deprecated and will be removed in supervision-0.21.0. Use Detections.from_inference instead.

```python
import cv2
import supervision as sv
from inference.models.utils import get_roboflow_model

image = cv2.imread(<SOURCE_IMAGE_PATH>)
model = get_roboflow_model(model_id="yolov8s-640")

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
```

๐Ÿ› ๏ธ Fixed

  • sv.LineZone functionality to accurately update the counter when an object crosses a line from any direction, including from the side. This enhancement enables more precise tracking and analytics, such as calculating individual in/out counts for each lane on the road. (#735)

https://github.com/roboflow/supervision/assets/26109316/412c4d9c-b228-4bcc-a4c7-e6a0c8f2da6e

๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @HinePo (Rafael Levy), @xaristeidou (Christoforos Aristeidou), @revtheundead (Utku ร–zbek), @paulguerrie (Paul Guerrie), @yeldarby (Brad Dwyer), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

- Python
Published by SkalskiP about 2 years ago

supervision - supervision-0.17.1

๐Ÿš€ Added

  • Support for Python 3.12.

๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @SkalskiP (Piotr Skalski)

- Python
Published by SkalskiP about 2 years ago

supervision - supervision-0.17.0

๐Ÿš€ Added

https://github.com/roboflow/supervision/assets/26109316/c2d4b3b1-fd19-44bb-94ec-f21b28dfd05f

  • sv.TriangleAnnotator allowing to annotate images and videos with triangle markers. (#652)

  • sv.PolygonAnnotator allowing to annotate images and videos with segmentation mask outline. (#602)

    ```python
    import supervision as sv

    image = ...
    detections = sv.Detections(...)

    polygon_annotator = sv.PolygonAnnotator()
    annotated_frame = polygon_annotator.annotate(
        scene=image.copy(),
        detections=detections
    )
    ```

https://github.com/roboflow/supervision/assets/26109316/c9236bf7-6ba4-4799-bf2a-b5532ad3591b

๐ŸŒฑ Changed

mask_annotator_speed

๐Ÿ› ๏ธ Fixed

๐Ÿ† Contributors

@onuralpszr (Onuralp SEZER), @hugoles (Hugo Dutra), @karanjakhar (Karan Jakhar), @kim-jeonghyun (Jeonghyun Kim), @fdloopes (Felipe Lopes), @abhishek7kalra (Abhishek Kalra), @SummitStudiosDev, @xenteros, @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

- Python
Published by SkalskiP about 2 years ago

supervision - supervision-0.16.0

๐Ÿš€ Added

https://github.com/roboflow/supervision/assets/26109316/691e219c-0565-4403-9218-ab5644f39bce

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

halo_annotator = sv.HaloAnnotator()
annotated_frame = halo_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

๐ŸŒฑ Changed

  • sv.LineZone.trigger now returns Tuple[np.ndarray, np.ndarray]. The first array indicates which detections have crossed the line from outside to inside. The second array indicates which detections have crossed the line from inside to outside. (#482)
  • Annotator argument name from color_map: str to color_lookup: ColorLookup enum to increase type safety. (#465)
  • sv.MaskAnnotator allowing 2x faster annotation. (#426)

๐Ÿ› ๏ธ Fixed

  • Poetry env definition allowing proper local installation. (#477)
  • sv.ByteTrack to return np.array([], dtype=int) when svDetections is empty. (#430)
  • YOLONAS detection missing predication part added & fixed (#416)
  • SAM detection at Demo Notebook MaskAnnotator(color_map="index") color_map set to index (#416)

๐Ÿ—‘๏ธ Deleted

[!WARNING]
Deleted sv.Detections.from_yolov8 and sv.Classifications.from_yolov8 as those are now replaced by sv.Detections.from_ultralytics and sv.Classifications.from_ultralytics. (#438)

๐Ÿ† Contributors

@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @kapter, @keshav278 (Keshav Subramanian), @akashpambhar (Akash Pambhar), @AntonioConsiglio (Antonio Consiglio), @ashishdatta, @mario-dg (Mario da Graca), @jayaBalaR (JAYABALAMBIKA.R), @abhishek7kalra (Abhishek Kalra), @PankajKrana (Pankaj Kumar Rana), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

- Python
Published by SkalskiP over 2 years ago

supervision - supervision-0.15.0

๐Ÿš€ Added

https://github.com/roboflow/supervision/assets/26109316/4d6c4a70-b40e-48fc-9e58-23b7e67bf94a

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

bounding_box_annotator = sv.BoundingBoxAnnotator()
annotated_frame = bounding_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

  • Supervision usage example. You can now learn how to perform traffic flow analysis with Supervision. (#354)

https://github.com/roboflow/supervision/assets/26109316/c9436828-9fbf-4c25-ae8c-60e9c81b3900

๐ŸŒฑ Changed

๐Ÿ› ๏ธ Fixed

๐Ÿ† Contributors

@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @Killua7362 (Akshay Bhat), @fcakyon (Fatih C. Akyon), @akashAD98 (Akash A Desai), @Rajarshi-Misra, @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

- Python
Published by SkalskiP over 2 years ago

supervision - 0.14.0

๐Ÿš€ Added

```python
import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(SOURCE_IMAGE_PATH)
model = YOLO(...)

def callback(image_slice: np.ndarray) -> sv.Detections:
    result = model(image_slice)[0]
    return sv.Detections.from_ultralytics(result)

slicer = sv.InferenceSlicer(callback=callback)

detections = slicer(image)
```

https://github.com/roboflow/supervision/assets/26109316/da665575-4d74-469c-a1f7-a43b7ee7e214

https://github.com/roboflow/supervision/assets/26109316/d8128440-6bd7-491a-8c7d-519254b76ec5

๐ŸŒฑ Changed

๐Ÿ› ๏ธ Fixed

๐Ÿ† Contributors

@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @mayankagarwals (Mayank Agarwal), @rizavelioglu (Riza Velioglu), @arjun-234 (Arjun D.), @mwitiderrick (Derrick Mwiti), @ShubhamKanitkar32, @gasparitiago (Tiago De Gaspari), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

- Python
Published by capjamesg over 2 years ago

supervision - supervision-0.13.0

๐Ÿš€ Added

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

dataset = sv.DetectionDataset.from_yolo(...)

model = YOLO(...)
def callback(image: np.ndarray) -> sv.Detections:
    result = model(image)[0]
    return sv.Detections.from_yolov8(result)

mean_average_precision = sv.MeanAveragePrecision.benchmark(
    dataset=dataset,
    callback=callback
)

mean_average_precision.map50_95
# 0.433
```

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO(...)
byte_tracker = sv.ByteTrack()
annotator = sv.BoxAnnotator()

def callback(frame: np.ndarray, index: int) -> np.ndarray:
    results = model(frame)[0]
    detections = sv.Detections.from_yolov8(results)
    detections = byte_tracker.update_from_detections(detections=detections)
    labels = [
        f"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
        for _, _, confidence, class_id, tracker_id
        in detections
    ]
    return annotator.annotate(scene=frame.copy(), detections=detections, labels=labels)

sv.process_video(
    source_path='...',
    target_path='...',
    callback=callback
)
```

https://github.com/roboflow/supervision/assets/26109316/d5d393f5-e577-474a-bc8c-82483ef8a578

๐Ÿ† Contributors

@hardikdava (Hardik Dava), @kirilllzaitsev (Kirill Zaitsev), @onuralpszr (Onuralp SEZER), @dbroboflow, @mayankagarwals (Mayank Agarwal), @danigarciaoca (Daniel M. Garcรญa-Ocaรฑa), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)

- Python
Published by capjamesg over 2 years ago

supervision - supervision-0.12.0

[!WARNING]
With the supervision-0.12.0 release, we are terminating official support for Python 3.7. (#179)

๐Ÿš€ Added

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

dataset = sv.DetectionDataset.from_yolo(...)

model = YOLO(...)
def callback(image: np.ndarray) -> sv.Detections:
    result = model(image)[0]
    return sv.Detections.from_yolov8(result)

confusion_matrix = sv.ConfusionMatrix.benchmark(
    dataset=dataset,
    callback=callback
)

confusion_matrix.matrix
# array([
#     [0., 0., 0., 0.],
#     [0., 1., 0., 1.],
#     [0., 1., 1., 0.],
#     [1., 1., 0., 0.]
# ])
```


๐ŸŒฑ Changed

  • Packaging method from setup.py to pyproject.toml. (#180)

๐Ÿ› ๏ธ Fixed

๐Ÿ† Contributors

@kirilllzaitsev @hardikdava @onuralpszr @Ucag @SkalskiP @capjamesg

- Python
Published by capjamesg over 2 years ago

supervision - supervision-0.11.1

๐Ÿ› ๏ธ Fixed

๐Ÿ† Contributors

@capjamesg @SkalskiP

- Python
Published by SkalskiP over 2 years ago

supervision - supervision-0.11.0

๐Ÿš€ Added

```python
import supervision as sv

ds = sv.DetectionDataset.from_coco(
    images_directory_path='...',
    annotations_path='...'
)

ds.as_coco(
    images_directory_path='...',
    annotations_path='...'
)
```

  • Ability to merge multiple sv.DetectionDataset objects together using the merge method. (https://github.com/roboflow/supervision/pull/158)

```python
import supervision as sv

ds_1 = sv.DetectionDataset(...)
len(ds_1)
# 100
ds_1.classes
# ['dog', 'person']

ds_2 = sv.DetectionDataset(...)
len(ds_2)
# 200
ds_2.classes
# ['cat']

ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
len(ds_merged)
# 300
ds_merged.classes
# ['cat', 'dog', 'person']
```


  • Additional start and end arguments to sv.get_video_frames_generator allowing to generate frames only for a selected part of the video. (https://github.com/roboflow/supervision/pull/162)

๐Ÿ› ๏ธ Fixed

  • Incorrect loading of YOLO dataset class names from data.yaml. (https://github.com/roboflow/supervision/pull/157)

๐Ÿ† Contributors

@SkalskiP @hardikdava

- Python
Published by SkalskiP over 2 years ago

supervision - supervision-0.10.0

๐Ÿš€ Added

  • Ability to load and save sv.ClassificationDataset in a folder structure format. (https://github.com/roboflow/supervision/pull/125)

```python
import supervision as sv

cs = sv.ClassificationDataset.from_folder_structure(
    root_directory_path='...'
)

cs.as_folder_structure(
    root_directory_path='...'
)
```

  • Support for sv.ClassificationDataset.split allowing to divide sv.ClassificationDataset into two parts. (https://github.com/roboflow/supervision/pull/125)

```python
import supervision as sv

cs = sv.ClassificationDataset(...)
train_cs, test_cs = cs.split(split_ratio=0.7, random_state=42, shuffle=True)

len(train_cs), len(test_cs)
# (700, 300)
```

  • Ability to extract masks from Roboflow API results using sv.Detections.from_roboflow. (https://github.com/roboflow/supervision/pull/110)

  • Supervision Quickstart notebook where you can learn more about Detection, Dataset and Video APIs.


๐ŸŒฑ Changed

  • sv.get_video_frames_generator documentation to better describe actual behavior. (https://github.com/roboflow/supervision/pull/135)


๐Ÿ† Contributors

@capjamesg @dankresio @SkalskiP

- Python
Published by SkalskiP over 2 years ago

supervision - supervision-0.9.0

๐Ÿš€ Added

  • Ability to select sv.Detections by index, list of indexes or slice. Here is an example illustrating the new selection methods. (https://github.com/roboflow/supervision/pull/118)

```python
import supervision as sv

detections = sv.Detections(...)
len(detections[0])
# 1
len(detections[[0, 1]])
# 2
len(detections[0:2])
# 2
```


  • Ability to extract masks from YOLOv8 results using sv.Detections.from_yolov8. Here is an example illustrating how to extract boolean masks from the result of the YOLOv8 model inference. (https://github.com/roboflow/supervision/pull/101)

```python
import cv2
from ultralytics import YOLO
import supervision as sv

image = cv2.imread(...)
image.shape
# (640, 640, 3)

model = YOLO('yolov8s-seg.pt')
result = model(image)[0]
detections = sv.Detections.from_yolov8(result)
detections.mask.shape
# (2, 640, 640)
```

  • Ability to crop the image using sv.crop. Here is an example showing how to get a separate crop for each detection in sv.Detections. (https://github.com/roboflow/supervision/pull/122)

```python
import cv2
import supervision as sv

image = cv2.imread(...)
detections = sv.Detections(...)
len(detections)
# 2
crops = [
    sv.crop(image=image, xyxy=xyxy)
    for xyxy
    in detections.xyxy
]
len(crops)
# 2
```

  • Ability to conveniently save multiple images into directory using sv.ImageSink. An example shows how to save every tenth video frame as a separate image. (https://github.com/roboflow/supervision/pull/120)

```python
import supervision as sv

with sv.ImageSink(target_dir_path='target/directory/path') as sink:
    for image in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
        sink.save_image(image=image)
```

๐Ÿ› ๏ธ Fixed

  • Inconvenient handling of sv.PolygonZone coordinates. Now sv.PolygonZone accepts coordinates in the form of [[x1, y1], [x2, y2], ...] that can be both integers and floats. (https://github.com/roboflow/supervision/issues/106)

๐Ÿ† Contributors

@SkalskiP @lomnes-atlast-food @hardikdava

- Python
Published by SkalskiP over 2 years ago

supervision - supervision-0.8.0

๐Ÿš€ Added

  • Support for dataset inheritance. The current Dataset got renamed to DetectionDataset. Now DetectionDataset inherits from BaseDataset. This change was made to enforce the future consistency of APIs of different types of computer vision datasets. (https://github.com/roboflow/supervision/pull/100)
  • Ability to save datasets in YOLO format using DetectionDataset.as_yolo. (https://github.com/roboflow/supervision/pull/100)

```python
import supervision as sv

ds = sv.DetectionDataset(...)
ds.as_yolo(
    images_directory_path='...',
    annotations_directory_path='...',
    data_yaml_path='...'
)
```

  • Support for DetectionDataset.split allowing to divide DetectionDataset into two parts. (https://github.com/roboflow/supervision/pull/102)

```python
import supervision as sv

ds = sv.DetectionDataset(...)
train_ds, test_ds = ds.split(split_ratio=0.7, random_state=42, shuffle=True)

len(train_ds), len(test_ds)
# (700, 300)
```

๐ŸŒฑ Changed

  • Default value of approximation_percentage parameter from 0.75 to 0.0 in DetectionDataset.as_yolo and DetectionDataset.as_pascal_voc. (https://github.com/roboflow/supervision/pull/100)


๐Ÿ† Contributors

  • @SkalskiP

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.7.0

๐Ÿš€ Added

  • Detections.from_yolo_nas to enable seamless integration with YOLO-NAS model. (https://github.com/roboflow/supervision/pull/91)
  • Ability to load datasets in YOLO format using Dataset.from_yolo. (https://github.com/roboflow/supervision/pull/86)
  • Detections.merge to merge multiple Detections objects together. (https://github.com/roboflow/supervision/pull/84)

๐ŸŒฑ Changed

  • LineZoneAnnotator.annotate to allow for the custom text for the in and out tags. (https://github.com/roboflow/supervision/pull/44)

๐Ÿ› ๏ธ Fixed

  • LineZoneAnnotator.annotate does not return annotated frame. (https://github.com/roboflow/supervision/pull/81)

๐Ÿ† Contributors

  • @SkalskiP
  • @iPoe
  • @hardikdava

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.6.0

๐Ÿš€ Added

  • Initial Dataset support and ability to save Detections in Pascal VOC XML format. (https://github.com/roboflow/supervision/pull/71)
  • New mask_to_polygons, filter_polygons_by_area, polygon_to_xyxy and approximate_polygon utilities. (https://github.com/roboflow/supervision/pull/71)
  • Ability to load Pascal VOC XML object detections dataset as Dataset. (https://github.com/roboflow/supervision/pull/72)

๐ŸŒฑ Changed

  • order of Detections attributes to make it consistent with order of objects in __iter__ tuple. (https://github.com/roboflow/supervision/pull/70)
  • generate_2d_mask to polygon_to_mask. (https://github.com/roboflow/supervision/pull/71)

๐Ÿ† Contributors

  • @SkalskiP
  • @alexandercarruthers

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.5.2

๐Ÿ› ๏ธ Fixed

  • Fixed LineZone.trigger function expecting 4 values instead of 5 (https://github.com/roboflow/supervision/pull/63)

๐Ÿ† Contributors

  • @SkalskiP @ChaseDDevelopment

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.5.1

๐Ÿ› ๏ธ Fixed

  • Fixed Detections.__getitem__ method not returning the mask for the selected item.
  • Fixed Detections.area crashing for mask detections.

๐Ÿ† Contributors

  • @SkalskiP

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.5.0

๐Ÿš€ Added

  • Detections.mask to enable segmentation support. (https://github.com/roboflow/supervision/pull/58)
  • MaskAnnotator to allow easy Detections.mask annotation. (https://github.com/roboflow/supervision/pull/58)
  • Detections.from_sam to enable native Segment Anything Model (SAM) support. (https://github.com/roboflow/supervision/pull/58)

๐ŸŒฑ Changed

  • Detections.area behaviour to work not only with boxes but also with masks. (https://github.com/roboflow/supervision/pull/58)

๐Ÿ† Contributors

  • @SkalskiP

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.4.0

๐Ÿš€ Added

  • Detections.empty to allow easy creation of empty Detections objects. (https://github.com/roboflow/supervision/discussions/48)
  • Detections.from_roboflow to allow easy creation of Detections objects from Roboflow API inference results. (https://github.com/roboflow/supervision/pull/56)
  • plot_images_grid to allow easy plotting of multiple images on single plot. (https://github.com/roboflow/supervision/pull/56)
  • Initial support for Pascal VOC XML format with detections_to_voc_xml method. (https://github.com/roboflow/supervision/pull/56)

๐ŸŒฑ Changed

  • show_frame_in_notebook refactored and renamed to plot_image. (https://github.com/roboflow/supervision/pull/56)

๐Ÿ† Contributors

  • @SkalskiP

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.3.2

๐ŸŒฑ Changed

  • Drop requirement for class_id in sv.Detections (https://github.com/roboflow/supervision/pull/50) to make it more flexible

๐Ÿ† Contributors

  • @SkalskiP

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.3.1

๐ŸŒฑ Changed

  • Detections.with_nms now supports both class-agnostic and non-class-agnostic cases (https://github.com/roboflow/supervision/pull/36)

๐Ÿ› ๏ธ Fixed

  • PolygonZone throwing an exception when the object touches the bottom edge of the image (https://github.com/roboflow/supervision/issues/41)
  • Detections.with_nms method throwing an exception when Detections is empty (https://github.com/roboflow/supervision/issues/42)

๐Ÿ† Contributors

  • @SkalskiP

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.3.0

๐Ÿš€ Added

New methods in sv.Detections API:

  • from_transformers - convert Object Detection ๐Ÿค— Transformer result into sv.Detections
  • from_detectron2 - convert Detectron2 result into sv.Detections
  • from_coco_annotations - convert COCO annotation into sv.Detections
  • area - dynamically calculated property storing bbox area
  • with_nms - initial implementation (only class agnostic) of sv.Detections NMS

๐ŸŒฑ Changed

  • Make sv.Detections.confidence field Optional.

๐Ÿ† Contributors

  • @SkalskiP

- Python
Published by SkalskiP almost 3 years ago

supervision - supervision-0.2.0

๐Ÿ”ช Killer features

  • Support for PolygonZone and PolygonZoneAnnotator ๐Ÿ”ฅ
๐Ÿ‘‰ Code example

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

# initiate polygon zone
polygon = np.array([
    [1900, 1250],
    [2350, 1250],
    [3500, 2160],
    [1250, 2160]
])
video_info = sv.VideoInfo.from_video_path(MALL_VIDEO_PATH)
zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)

# initiate annotators
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.white(), thickness=6, text_thickness=6, text_scale=4)

# extract video frame
generator = sv.get_video_frames_generator(MALL_VIDEO_PATH)
iterator = iter(generator)
frame = next(iterator)

# detect
model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
detections = detections[detections.class_id == 0]
zone.trigger(detections=detections)

# annotate
labels = [f"{model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections]
frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
frame = zone_annotator.annotate(scene=frame)
```

supervision-0-2-0

  • Advanced sv.Detections filtering with pandas-like API.

```python
detections = detections[(detections.class_id == 0) & (detections.confidence > 0.5)]
```

  • Improved integration with YOLOv5 and YOLOv8 models.

```python
import torch
import supervision as sv

model = torch.hub.load('ultralytics/yolov5', 'yolov5x6')
results = model(frame, size=1280)
detections = sv.Detections.from_yolov5(results)
```

```python
from ultralytics import YOLO
import supervision as sv

model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
```

๐Ÿš€ Added

  • supervision.get_polygon_center function - takes in a polygon as a 2-dimensional numpy.ndarray and returns the center of the polygon as a Point object
  • supervision.draw_polygon function - draw a polygon on a scene
  • supervision.draw_text function - draw a text on a scene
  • supervision.ColorPalette.default() - class method - to generate default ColorPalette
  • supervision.generate_2d_mask function - generate a 2D mask from a polygon
  • supervision.PolygonZone class - to define polygon zones and validate if supervision.Detections are in the zone
  • supervision.PolygonZoneAnnotator class - to draw supervision.PolygonZone on scene

๐ŸŒฑ Changed

  • VideoInfo API - change the property name resolution -> resolution_wh to make it more descriptive; convert VideoInfo to dataclass
  • process_frame API - change argument name frame -> scene to make it consistent with other classes and methods
  • LineCounter API - rename class LineCounter -> LineZone to make it consistent with PolygonZone
  • LineCounterAnnotator API - rename class LineCounterAnnotator -> LineZoneAnnotator

๐Ÿ† Contributors

  • @SkalskiP
  • @capjamesg

- Python
Published by capjamesg about 3 years ago

supervision - supervision-0.1.0

๐Ÿš€ Added

  • โ“’ Add project license
  • ๐ŸŽจ DEFAULT_COLOR_PALETTE, Color, and ColorPalette classes
  • ๐Ÿ“ initial implementation of Point, Vector, and Rect classes
  • ๐ŸŽฌ VideoInfo and VideoSink classes as well as get_video_frames_generator util
  • ๐Ÿ““ show_frame_in_notebook util
  • ๐Ÿ–Œ๏ธ draw_line, draw_rectangle, draw_filled_rectangle utils added
  • ๐Ÿ“ฆ Initial version Detections and BoxAnnotator added
  • ๐Ÿงฎ initial implementation of LineCounter and LineCounterAnnotator classes

๐Ÿ† Contributors

@SkalskiP

- Python
Published by SkalskiP about 3 years ago