Recent Releases of supervision
supervision - supervision-0.26.1
🔧 Fixed
- Fixed error in `sv.MeanAveragePrecision` where the area used for size-specific evaluation (small / medium / large) was always zero unless explicitly provided in `sv.Detections.data`. (https://github.com/roboflow/supervision/pull/1894)
- Fixed `ID=0` bug in `sv.MeanAveragePrecision` where objects were getting `0.0` mAP despite perfect IoU matches, due to a bug in annotation ID assignment. (https://github.com/roboflow/supervision/pull/1895)
- Fixed issue where `sv.MeanAveragePrecision` could return negative values when certain object size categories have no data. (https://github.com/roboflow/supervision/pull/1898)
- Fixed `match_metric` support for `sv.Detections.with_nms`. (https://github.com/roboflow/supervision/pull/1901)
- Fixed `border_thickness` parameter usage for `sv.PercentageBarAnnotator`. (https://github.com/roboflow/supervision/pull/1906)
🙏 Contributors
@balthazur (Balthasar Huber), @onuralpszr (Onuralp SEZER), @rafaelpadilla (Rafael Padilla), @soumik12345 (Soumik Rakshit), @SkalskiP (Piotr Skalski)
Python · Published by soumik12345 7 months ago
supervision - supervision-0.26.0
[!WARNING]
`supervision-0.26.0` drops Python 3.8 support and upgrades all code to Python 3.9 syntax.

[!TIP]
Our docs page now has a fresh look that is consistent with the documentation of all Roboflow open-source projects. (#1858)
🚀 Added
- Added support for creating `sv.KeyPoints` objects from ViTPose and ViTPose++ inference results via `sv.KeyPoints.from_transformers`. (#1788)

https://github.com/user-attachments/assets/f1917032-29d8-4b88-b871-65c2e28a756e
- Added support for the IOS (Intersection over Smallest) overlap metric, which measures how much of the smaller object is covered by the larger one, in `sv.Detections.with_nms`, `sv.Detections.with_nmm`, `sv.box_iou_batch`, and `sv.mask_iou_batch`. (#1774)

```python
import numpy as np
import supervision as sv

boxes_true = np.array([
    [100, 100, 200, 200],
    [300, 300, 400, 400]
])
boxes_detection = np.array([
    [150, 150, 250, 250],
    [320, 320, 420, 420]
])

sv.box_iou_batch(
    boxes_true=boxes_true,
    boxes_detection=boxes_detection,
    overlap_metric=sv.OverlapMetric.IOU
)
# array([[0.14285714, 0.        ],
#        [0.        , 0.47058824]])

sv.box_iou_batch(
    boxes_true=boxes_true,
    boxes_detection=boxes_detection,
    overlap_metric=sv.OverlapMetric.IOS
)
# array([[0.25, 0.  ],
#        [0.  , 0.64]])
```
- Added `sv.box_iou`, which efficiently computes the Intersection over Union (IoU) between two individual bounding boxes. (#1874)
- Added support for frame limits and a progress bar in `sv.process_video`. (#1816)
- Added the `sv.xyxy_to_xcycarh` function to convert bounding box coordinates from `(x_min, y_min, x_max, y_max)` format to measurement-space `(center x, center y, aspect ratio, height)` format, where the aspect ratio is `width / height`. (#1823)
- Added the `sv.xyxy_to_xywh` function to convert bounding box coordinates from `(x_min, y_min, x_max, y_max)` format to `(x, y, width, height)` format. (#1788)
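The conversions above are plain arithmetic. A minimal numpy sketch of what these two helpers compute (an illustration, not the library implementation):

```python
import numpy as np

def xyxy_to_xywh(xyxy: np.ndarray) -> np.ndarray:
    # (x_min, y_min, x_max, y_max) -> (x_min, y_min, width, height)
    xywh = xyxy.astype(float)
    xywh[:, 2] = xyxy[:, 2] - xyxy[:, 0]
    xywh[:, 3] = xyxy[:, 3] - xyxy[:, 1]
    return xywh

def xyxy_to_xcycarh(xyxy: np.ndarray) -> np.ndarray:
    # (x_min, y_min, x_max, y_max) -> (center x, center y, width / height, height)
    width = xyxy[:, 2] - xyxy[:, 0]
    height = xyxy[:, 3] - xyxy[:, 1]
    center_x = xyxy[:, 0] + width / 2
    center_y = xyxy[:, 1] + height / 2
    return np.stack([center_x, center_y, width / height, height], axis=1)

boxes = np.array([[10, 20, 50, 80]])
xyxy_to_xywh(boxes)     # array([[10., 20., 40., 60.]])
xyxy_to_xcycarh(boxes)  # array([[30., 50., 0.66666667, 60.]])
```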
🌱 Changed
- `sv.LabelAnnotator` now supports the `smart_position` parameter to automatically keep labels within frame boundaries, and the `max_line_length` parameter to control text wrapping for long or multi-line labels. (#1820)

https://github.com/user-attachments/assets/361c17c7-0810-466d-907d-c752e91bc6f7

- `sv.LabelAnnotator` now supports non-string labels. (#1825)
- `sv.Detections.from_vlm` now supports parsing bounding boxes and segmentation masks from responses generated by Google Gemini models. You can test Gemini prompting, result parsing, and visualization with Supervision using this example notebook. (#1792)
````python
import supervision as sv

gemini_response_text = """```json
[
  {"box_2d": [543, 40, 728, 200], "label": "cat", "id": 1},
  {"box_2d": [653, 352, 820, 522], "label": "dog", "id": 2}
]
```"""

detections = sv.Detections.from_vlm(
    sv.VLM.GOOGLE_GEMINI_2_5,
    gemini_response_text,
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog'],
)

detections.xyxy
# array([[543., 40., 728., 200.], [653., 352., 820., 522.]])

detections.data
# {'class_name': array(['cat', 'dog'], dtype='<U26')}

detections.class_id
# array([0, 1])
````
- `sv.Detections.from_vlm` now supports parsing bounding boxes from responses generated by Moondream. (#1878)

```python
import supervision as sv

moondream_result = {
    'objects': [
        {
            'x_min': 0.5704046934843063,
            'y_min': 0.20069346576929092,
            'x_max': 0.7049859315156937,
            'y_max': 0.3012596592307091
        },
        {
            'x_min': 0.6210969910025597,
            'y_min': 0.3300672620534897,
            'x_max': 0.8417936339974403,
            'y_max': 0.4961046129465103
        }
    ]
}

detections = sv.Detections.from_vlm(
    sv.VLM.MOONDREAM,
    moondream_result,
    resolution_wh=(3072, 4080),
)

detections.xyxy
# array([[1752.28,  818.82, 2165.72, 1229.14],
#        [1908.01, 1346.67, 2585.99, 2024.11]])
```
- `sv.Detections.from_vlm` now supports parsing bounding boxes from responses generated by Qwen2.5-VL. You can test Qwen2.5-VL prompting, result parsing, and visualization with Supervision using this example notebook. (#1709)

````python
import supervision as sv

qwen_2_5_vl_result = """```json
[
    {"bbox_2d": [139, 768, 315, 954], "label": "cat"},
    {"bbox_2d": [366, 679, 536, 849], "label": "dog"}
]
```"""

detections = sv.Detections.from_vlm(
    sv.VLM.QWEN_2_5_VL,
    qwen_2_5_vl_result,
    input_wh=(1000, 1000),
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog'],
)

detections.xyxy
# array([[139., 768., 315., 954.], [366., 679., 536., 849.]])

detections.class_id
# array([0, 1])

detections.data
# {'class_name': array(['cat', 'dog'], dtype='<U10')}
````
- Significantly improved the speed of HSV color mapping in `sv.HeatMapAnnotator`, achieving approximately 28x faster performance on 1920x1080 frames. (#1786)
🔧 Fixed
- Supervision's `sv.MeanAveragePrecision` is now fully aligned with pycocotools, the official COCO evaluation tool, ensuring accurate and standardized metrics. (#1834)

```python
import supervision as sv
from supervision.metrics import MeanAveragePrecision

predictions = sv.Detections(...)
targets = sv.Detections(...)

map_metric = MeanAveragePrecision()
map_metric.update(predictions, targets).compute()
# Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.464
# Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.637
# Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.203
# Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.284
# Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.497
# Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.629
```
[!TIP] The updated mAP implementation enabled us to build an updated version of the Computer Vision Model Leaderboard.
- Fix #1767: Fixed losing `sv.Detections.data` when filtering detections.
⚠️ Deprecated
- The `sv.LMM` enum is deprecated and will be removed in `supervision-0.31.0`. Use `sv.VLM` instead.
- `sv.Detections.from_lmm` is deprecated and will be removed in `supervision-0.31.0`. Use `sv.Detections.from_vlm` instead.
❌ Removed
- The `sv.DetectionDataset.images` property has been removed in `supervision-0.26.0`. Please loop over images with `for path, image, annotation in dataset:`, as that does not require loading all images into memory.
- Constructing `sv.DetectionDataset` with the `images` parameter as `Dict[str, np.ndarray]` was deprecated and has been removed in `supervision-0.26.0`. Please pass a list of paths `List[str]` instead.
- The name `sv.BoundingBoxAnnotator` was deprecated and has been removed in `supervision-0.26.0`. It has been renamed to `sv.BoxAnnotator`.
🙏 Contributors
@onuralpszr (Onuralp SEZER), @SkalskiP (Piotr Skalski), @SunHao-AI (Hao Sun), @rafaelpadilla (Rafael Padilla), @Ashp116 (Ashp116), @capjamesg (James Gallagher), @blakeburch (Blake Burch), @hidara2000 (hidara2000), @Armaggheddon (Alessandro Brunello), @soumik12345 (Soumik Rakshit).
Python · Published by soumik12345 7 months ago
supervision - supervision-0.25.0
Supervision 0.25.0 is here! Featuring a more robust LineZone crossing counter, support for tracking KeyPoints, Python 3.13 compatibility, and 3 new metrics: Precision, Recall and Mean Average Recall. The update also includes smart label positioning, improved Oriented Bounding Box support, and refined error handling. Thank you to all contributors - especially those who answered the call of Hacktoberfest!
Changelog
🚀 Added
- Essential update to the `LineZone`: when computing line crossings, detections that jitter might be counted twice (or more!). This can now be solved with the `minimum_crossing_threshold` argument. If you set it to `2` or more, extra frames will be used to confirm the crossing, significantly improving accuracy. (#1540)
https://github.com/user-attachments/assets/89ca2ee6-93c9-41e6-a432-e16c4c69c695
- It is now possible to track objects detected as `KeyPoints`. See the complete step-by-step guide in the Object Tracking Guide. (#1658)
```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8m-pose.pt")
tracker = sv.ByteTrack()
trace_annotator = sv.TraceAnnotator()

def callback(frame: np.ndarray, _: int) -> np.ndarray:
    results = model(frame)[0]
    key_points = sv.KeyPoints.from_ultralytics(results)

    detections = key_points.as_detections()
    detections = tracker.update_with_detections(detections)

    annotated_image = trace_annotator.annotate(frame.copy(), detections)
    return annotated_image

sv.process_video(
    source_path="input_video.mp4",
    target_path="output_video.mp4",
    callback=callback
)
```
https://github.com/user-attachments/assets/4c3bdf54-391e-4633-9164-f15878ddfb33
See the guide for the full code used to make the video
- Added `is_empty` method to `KeyPoints` to check if there are any keypoints in the object. (#1658)
- Added `as_detections` method to `KeyPoints` that converts `KeyPoints` to `Detections`. (#1658)
- Added a new video to `supervision[assets]`. (#1657)
```python
from supervision.assets import download_assets, VideoAssets

path_to_video = download_assets(VideoAssets.SKIING)
```
- Supervision can now be used with Python 3.13. The most renowned update is the ability to run Python without the Global Interpreter Lock (GIL). We expect support for this among our dependencies to be inconsistent, but if you do attempt it, let us know the results! (#1595)
- Added the `Mean Average Recall` (mAR) metric, which returns a recall score averaged over IoU thresholds, detected object classes, and limits imposed on the maximum number of considered detections. (#1661)
```python
import supervision as sv
from supervision.metrics import MeanAverageRecall

predictions = sv.Detections(...)
targets = sv.Detections(...)

map_metric = MeanAverageRecall()
map_result = map_metric.update(predictions, targets).compute()

map_result.plot()
```
- Added `Precision` and `Recall` metrics, providing a baseline for comparing model outputs to ground truth or another model. (#1609)
```python
import supervision as sv
from supervision.metrics import Recall

predictions = sv.Detections(...)
targets = sv.Detections(...)

recall_metric = Recall()
recall_result = recall_metric.update(predictions, targets).compute()

recall_result.plot()
```

- All Metrics now support Oriented Bounding Boxes (OBB) (#1593)
```python
import supervision as sv
from supervision.metrics import F1Score

predictions = sv.Detections(...)
targets = sv.Detections(...)

f1_metric = F1Score(metric_target=sv.MetricTarget.ORIENTED_BOUNDING_BOXES)
f1_result = f1_metric.update(predictions, targets).compute()
```

- Introducing Smart Labels! When `smart_position` is set for `LabelAnnotator`, `RichLabelAnnotator`, or `VertexLabelAnnotator`, the labels will move around to avoid overlapping with others. (#1625)
```python
import cv2
import supervision as sv
from ultralytics import YOLO

image = cv2.imread("image.jpg")

label_annotator = sv.LabelAnnotator(smart_position=True)

model = YOLO("yolo11m.pt")
results = model(image)[0]
detections = sv.Detections.from_ultralytics(results)

annotated_frame = label_annotator.annotate(image.copy(), detections)
sv.plot_image(annotated_frame)
```
https://github.com/user-attachments/assets/ef768db4-867d-4305-b905-80e690bb1ea7
- Added the `metadata` variable to `Detections`. It allows you to store custom data per-image, rather than per-detected-object as was possible with the `data` variable. For example, `metadata` could be used to store the source video path, camera model, or camera parameters. (#1589)
```python
import supervision as sv
from ultralytics import YOLO

model = YOLO("yolov8m")

result = model("image.png")[0]
detections = sv.Detections.from_ultralytics(result)

# Items in data must match the length of detections
object_ids = [num for num in range(len(detections))]
detections.data["object_number"] = object_ids

# Items in metadata can be of any length
detections.metadata["camera_model"] = "Luxonis OAK-D"
```
- Added a `py.typed` type hints metafile. It should provide a stronger signal to type annotators and IDEs that type support is available. (#1586)
🌱 Changed
- `ByteTrack` no longer requires `detections` to have a `class_id`. (#1637)
- `draw_line`, `draw_rectangle`, `draw_filled_rectangle`, `draw_polygon`, `draw_filled_polygon`, and `PolygonZoneAnnotator` now come with a default color. (#1591)
- Dataset classes are treated as case-sensitive when merging multiple datasets. (#1643)
- Expanded metrics documentation with example plots and printed results. (#1660)
- Added usage example for polygon zone. (#1608)
- Small improvements to error handling in polygons. (#1602)
🔧 Fixed
- Updated `ByteTrack`, removing shared variables. Previously, multiple instances of `ByteTrack` would share some data, requiring liberal use of `tracker.reset()`. (#1603, #1528)
- Fixed a bug where the `class_agnostic` setting in `MeanAveragePrecision` would not work. (#1577) hacktoberfest
- Removed welcome workflow from our CI system. (#1596)
✅ No removals or deprecations this time!
⚙️ Internal Changes
- Large refactor of `ByteTrack`: (#1603)
  - `STrack` moved to a separate class
  - Removed the superfluous `BaseTrack` class
  - Removed unused variables
- Large refactor of `RichLabelAnnotator`, matching its contents with `LabelAnnotator`. (#1625)
🙏 Contributors
@onuralpszr (Onuralp SEZER), @kshitijaucharmal (KshitijAucharmal), @grzegorz-roboflow (Grzegorz Klimaszewski), @Kadermiyanyedi (Kader Miyanyedi), @PrakharJain1509 (Prakhar Jain), @DivyaVijay1234 (Divya Vijay), @souhhmm (Soham Kalburgi), @joaomarcoscrs (João Marcos Cardoso Ramos da Silva), @AHuzail (Ahmad Huzail Khan), @DemyCode (DemyCode), @ablazejuk (Andrey Blazejuk), @LinasKo (Linas Kondrackis)
A special thanks goes out to everyone who joined us for Hacktoberfest! We hope it was a rewarding experience and look forward to seeing you continue contributing and growing with our community. Keep building, keep innovating: your efforts make a difference! 🚀
Python · Published by LinasKo over 1 year ago
supervision - supervision-0.24.0
Supervision 0.24.0 is here! We've added many new changes, including the F1 score, enhancements to LineZone, EasyOCR support, NCNN support, and the best Cookbook to date! You can also try out our annotators directly in the browser. Check out the release notes to find out more!
📢 Announcements
- Supervision is celebrating Hacktoberfest! Whether you're a newcomer to open source or a veteran contributor, we welcome you to join us in improving `supervision`. You can grab any issue without an assigned contributor: Hacktoberfest Issues Board. We'll be adding many more issues next week! 🎉
- We recently launched the Model Leaderboard. Come check how the latest models perform! It is also open-source, so you can contribute to it as well! 🚀
Changelog
🚀 Added
- Added F1 score as a new metric for detection and segmentation. The F1 score balances precision and recall, providing a single metric for model evaluation. #1521
```python
import supervision as sv
from supervision.metrics import F1Score

predictions = sv.Detections(...)
targets = sv.Detections(...)

f1_metric = F1Score()
f1_result = f1_metric.update(predictions, targets).compute()

print(f1_result)
print(f1_result.f1_50)
print(f1_result.small_objects.f1_50)
```
- Added new cookbook: Small Object Detection with SAHI. This cookbook provides a detailed guide on using `InferenceSlicer` for small object detection, and is one of the best cookbooks we've ever seen. Thank you @ediardo! #1483
- You can now try supervision annotators on your own images. Check out the annotator docs. The preview is powered by an Embedded Workflow. Thank you @joaomarcoscrs! #1533
- Enhanced `LineZoneAnnotator`, allowing labels to align with the line, even when it's not horizontal. You can now also disable the text background and draw labels off-center, which minimizes overlaps for multiple `LineZone` labels. Thank you @jcruz-ferreyra! #854
```python
import cv2
import supervision as sv

image = cv2.imread("...")

line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(50, 200)
)
line_zone_annotator = sv.LineZoneAnnotator(
    text_orient_to_line=True,
    display_text_box=False,
    text_centered=False
)

annotated_frame = line_zone_annotator.annotate(
    frame=image.copy(),
    line_counter=line_zone
)

sv.plot_image(annotated_frame)
```
https://github.com/user-attachments/assets/d7694b81-26ca-4236-bc66-af3d9e79d367
- Added per-class counting capabilities to `LineZone` and introduced `LineZoneAnnotatorMulticlass` for visualizing the counts per class. This feature allows tracking of individual classes crossing a line, enhancing the flexibility of use cases like traffic monitoring or crowd analysis. #1555
```python
import cv2
import supervision as sv

image = cv2.imread("...")

line_zone = sv.LineZone(
    start=sv.Point(0, 100),
    end=sv.Point(50, 200)
)
line_zone_annotator = sv.LineZoneAnnotatorMulticlass()

annotated_frame = line_zone_annotator.annotate(
    frame=image.copy(),
    line_zones=[line_zone]
)

sv.plot_image(annotated_frame)
```
https://github.com/user-attachments/assets/b109f5bd-6ae7-473b-b4e8-910a869736b4
- Added `from_easyocr`, allowing integration of OCR results into the supervision framework. EasyOCR is an open-source optical character recognition (OCR) library that can read text from images. Thank you @onuralpszr! #1515
```python
import cv2
import easyocr
import supervision as sv

image = cv2.imread("...")

reader = easyocr.Reader(["en"])
result = reader.readtext(image)
detections = sv.Detections.from_easyocr(result)

box_annotator = sv.BoxAnnotator(color_lookup=sv.ColorLookup.INDEX)
label_annotator = sv.LabelAnnotator(color_lookup=sv.ColorLookup.INDEX)

annotated_image = image.copy()
annotated_image = box_annotator.annotate(scene=annotated_image, detections=detections)
annotated_image = label_annotator.annotate(scene=annotated_image, detections=detections)

sv.plot_image(annotated_image)
```
- Added the `oriented_box_iou_batch` function to `detection.utils`. This function computes Intersection over Union (IoU) for oriented or rotated bounding boxes (OBB), making it easier to evaluate detections with non-axis-aligned boxes. Thank you @patel-zeel! #1502
```python
import numpy as np
import supervision as sv

boxes_true = np.array([[[1, 0], [0, 1], [3, 4], [4, 3]]])
boxes_detection = np.array([[[1, 1], [2, 0], [4, 2], [3, 3]]])
ious = sv.oriented_box_iou_batch(boxes_true, boxes_detection)
print("IoU between true and detected boxes:", ious)
```
Note: the IoU is approximated as mask IoU.
- Extended `PolygonZoneAnnotator` to allow setting opacity when drawing zones, providing enhanced visualization by filling the zone with adjustable transparency. Thank you @grzegorz-roboflow! #1527
- Added `from_ncnn`, a connector for NCNN, a powerful object detection framework from Tencent, written from the ground up in C++ with no third-party dependencies. Thank you @onuralpszr! #1524
```python
import cv2
from ncnn.model_zoo import get_model
import supervision as sv

image = cv2.imread("...")

model = get_model("yolov8s", use_gpu=True)
result = model(image)
detections = sv.Detections.from_ncnn(result)
```
🌱 Changed
- Supervision now depends on `opencv-python` rather than `opencv-python-headless`. #1530
- Fixed broken or outdated links in documentation and notebooks, improving navigation and ensuring accuracy of references. Thanks to @capjamesg for identifying these issues. #1523
- Enabled and fixed Ruff rules for code formatting, including changes like avoiding unnecessary iterable allocations and using `Optional` for default mutable arguments. #1526
🔧 Fixed
- Updated the COCO 101-point Average Precision algorithm to correctly interpolate precision, providing a more precise calculation of average precision without averaging out intermediate values. #1500
- Resolved miscellaneous issues highlighted when building documentation. This mostly includes whitespace adjustments and type inconsistencies. Updated documentation for clarity and fixed formatting issues. Added an explicit version for `mkdocstrings-python`. #1549
- Clarified documentation around the `overlap_ratio_wh` argument deprecation in `InferenceSlicer`. #1547
✅ No deprecations this time!
❌ Removed
- The `frame_resolution_wh` parameter in `PolygonZone` has been removed due to deprecation.
- The "headless" and "desktop" installation extras have been removed, as they are no longer needed. `pip install supervision[headless]` will install the base library and warn of the non-existent extra.
🙏 Contributors
@onuralpszr (Onuralp SEZER), @joaomarcoscrs (João Marcos Cardoso Ramos da Silva), @jcruz-ferreyra (Juan Cruz), @patel-zeel (Zeel B Patel), @grzegorz-roboflow (Grzegorz Klimaszewski), @Kadermiyanyedi (Kader Miyanyedi), @ediardo (Eddie Ramirez), @CharlesCNorton, @ethanwhite (Ethan White), @josephofiowa (Joseph Nelson), @tibeoh (Thibault Itart-Longueville), @SkalskiP (Piotr Skalski), @LinasKo (Linas Kondrackis)
Thank you to Pexels for providing fantastic images and videos!
Python · Published by LinasKo over 1 year ago
supervision - supervision-0.23.0
🚀 Added
- `BackgroundOverlayAnnotator` annotates the background of your image! #1385
https://github.com/user-attachments/assets/c1f3ce11-08c1-4648-9176-4e7920b91a8a
(video by Pexels)
- We're introducing metrics, which currently support `xyxy` boxes and masks. Over the next few releases, `supervision` will focus on adding more metrics, allowing you to evaluate your model performance. We plan to support not just boxes and masks, but oriented bounding boxes as well! #1442
[!TIP] Help in implementing metrics is very welcome! Keep an eye on our issue board if you'd like to contribute!
```python
import supervision as sv
from supervision.metrics import MeanAveragePrecision

predictions = sv.Detections(...)
targets = sv.Detections(...)

map_metric = MeanAveragePrecision()
map_result = map_metric.update(predictions, targets).compute()

print(map_result)
print(map_result.map50_95)
print(map_result.large_objects.map50_95)
map_result.plot()
```
Here's a very basic way to compare model results:
👉 Example code
```python
import supervision as sv
from supervision.metrics import MeanAveragePrecision
from inference import get_model
import matplotlib.pyplot as plt

# !wget https://media.roboflow.com/notebooks/examples/dog.jpeg
image = "dog.jpeg"

model_1 = get_model("yolov8n-640")
model_2 = get_model("yolov8s-640")
model_3 = get_model("yolov8m-640")
model_4 = get_model("yolov8l-640")

results_1 = model_1.infer(image)[0]
results_2 = model_2.infer(image)[0]
results_3 = model_3.infer(image)[0]
results_4 = model_4.infer(image)[0]

detections_1 = sv.Detections.from_inference(results_1)
detections_2 = sv.Detections.from_inference(results_2)
detections_3 = sv.Detections.from_inference(results_3)
detections_4 = sv.Detections.from_inference(results_4)

map_n_metric = MeanAveragePrecision().update([detections_1], [detections_4]).compute()
map_s_metric = MeanAveragePrecision().update([detections_2], [detections_4]).compute()
map_m_metric = MeanAveragePrecision().update([detections_3], [detections_4]).compute()

labels = ["YOLOv8n", "YOLOv8s", "YOLOv8m"]
map_values = [map_n_metric.map50_95, map_s_metric.map50_95, map_m_metric.map50_95]

plt.title("YOLOv8 Model Comparison")
plt.bar(labels, map_values)
ax = plt.gca()
ax.set_ylim([0, 1])
plt.show()
```

- Added the `IconAnnotator`, which allows you to place icons on your images. #930
https://github.com/user-attachments/assets/ff80acf5-67f2-4c20-a3fe-b63cac07ae31
(Video by Pexels, icons by Icons8)
```python
import supervision as sv
from inference import get_model

image = ...

model = get_model(model_id="yolov8n-640")
results = model.infer(image)[0]
detections = sv.Detections.from_inference(results)

icon_dog = "..."  # path to the dog icon
icon_cat = "..."  # path to the cat icon

icon_paths = []
for class_name in detections.data["class_name"]:
    if class_name == "dog":
        icon_paths.append(icon_dog)
    elif class_name == "cat":
        icon_paths.append(icon_cat)
    else:
        icon_paths.append("")

icon_annotator = sv.IconAnnotator()
annotated_frame = icon_annotator.annotate(
    scene=image.copy(),
    detections=detections,
    icon_path=icon_paths
)
```
- Segment Anything 2 was released this month. And while you can load its results via `from_sam`, we've added support to `from_ultralytics` for loading the results if you run it with Ultralytics. #1354
```python
import cv2
import supervision as sv
from ultralytics import SAM

image = cv2.imread("...")

model = SAM("mobile_sam.pt")
results = model(image, bboxes=[[588, 163, 643, 220]])
detections = sv.Detections.from_ultralytics(results[0])

polygon_annotator = sv.PolygonAnnotator()
mask_annotator = sv.MaskAnnotator()

annotated_image = mask_annotator.annotate(image.copy(), detections)
annotated_image = polygon_annotator.annotate(annotated_image, detections)

sv.plot_image(annotated_image, (12, 12))
```
SAM2 with our annotators:
https://github.com/user-attachments/assets/6a98d651-2596-43e9-b485-ea6f0de4fffa
- `TriangleAnnotator` and `DotAnnotator` contour color customization #1458
- `VertexLabelAnnotator` for keypoints now has a `text_color` parameter #1409
🌱 Changed
- Updated `sv.Detections.from_transformers` to support the `transformers v5` functions. This includes the `DetrImageProcessor` methods `post_process_object_detection`, `post_process_panoptic_segmentation`, `post_process_semantic_segmentation`, and `post_process_instance_segmentation`. #1386
- `InferenceSlicer` now features an `overlap_ratio_wh` parameter, making it easier to compute slice sizes when handling overlapping slices. #1434
```python
image_with_small_objects = cv2.imread("...")
model = get_model("yolov8n-640")

def callback(image_slice: np.ndarray) -> sv.Detections:
    print("image_slice.shape:", image_slice.shape)
    result = model.infer(image_slice)[0]
    return sv.Detections.from_inference(result)

slicer = sv.InferenceSlicer(
    callback=callback,
    slice_wh=(128, 128),
    overlap_ratio_wh=(0.2, 0.2),
)

detections = slicer(image_with_small_objects)
```
🛠️ Fixed
- Annotator type fixes #1448
- New way of seeking to a specific video frame, for cases where other methods don't work #1348
- `plot_image` now clearly states the size is in inches. #1424
⚠️ Deprecated
- `overlap_filter_strategy` in `InferenceSlicer.__init__` is deprecated and will be removed in `supervision-0.27.0`. Use `overlap_strategy` instead.
- `overlap_ratio_wh` in `InferenceSlicer.__init__` is deprecated and will be removed in `supervision-0.27.0`. Use `overlap_wh` instead.
❌ Removed
- The `track_buffer`, `track_thresh`, and `match_thresh` parameters in `ByteTrack` are deprecated and were removed as of `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.
- The `triggering_position` parameter in `sv.PolygonZone` was removed as of `supervision-0.23.0`. Use `triggering_anchors` instead.
🙏 Contributors
@shaddu, @onuralpszr (Onuralp SEZER), @Kadermiyanyedi (Kader Miyanyedi), @xaristeidou (Christoforos Aristeidou), @Gk-rohan (Rohan Gupta), @Bhavay-2001 (Bhavay Malhotra), @arthurcerveira (Arthur Cerveira), @J4BEZ (Ju Hoon Park), @venkatram-dev, @eric220, @capjamesg (James), @yeldarby (Brad Dwyer), @SkalskiP (Piotr Skalski), @LinasKo (LinasKo)
Python · Published by LinasKo over 1 year ago
supervision - supervision-0.22.0
🚀 Added
- `sv.KeyPoints.from_mediapipe`, adding support for MediaPipe keypoint models (both legacy and modern), along with default visualizers for face and body pose keypoints. (#1232, #1316)
```python
import numpy as np
import mediapipe as mp
import supervision as sv
from PIL import Image

model = mp.solutions.face_mesh.FaceMesh()

image = Image.open("...")
results = model.process(np.array(image))
key_points = sv.KeyPoints.from_mediapipe(results, resolution_wh=image.size)

edge_annotator = sv.EdgeAnnotator(color=sv.Color.BLACK, thickness=2)
annotated_image = edge_annotator.annotate(scene=image, key_points=key_points)
```
https://github.com/user-attachments/assets/883a6bcc-5e39-41b0-9b6d-0348b5b2fe0e
- `sv.KeyPoints.from_detectron2` and `sv.Detections.from_detectron2`, extending support for Detectron2 models. (#1310, #1300)
- `sv.RichLabelAnnotator`, allowing drawing of unicode characters (e.g. from non-Latin languages), as long as you provide a compatible font. (#1277)
https://github.com/user-attachments/assets/de60eeb4-1259-421b-af66-f622a15988ea
🌱 Changed
- `sv.DetectionDataset` and `sv.ClassificationDataset`, allowing images to be loaded into memory only when necessary (lazy loading). (#1326)
```python
import roboflow
from roboflow import Roboflow
import supervision as sv

roboflow.login()
rf = Roboflow()

project = rf.workspace("...").project("...")
dataset = project.version(1).download("coco")

ds_train = sv.DetectionDataset.from_coco(
    images_directory_path=f"{dataset.location}/train",
    annotations_path=f"{dataset.location}/train/_annotations.coco.json",
)

path, image, annotation = ds_train[0]      # loads image on demand

for path, image, annotation in ds_train:   # loads image on demand
    ...
```
- `sv.Detections.from_lmm`, allowing parsing of Florence-2 text results into an `sv.Detections` object. (#1296)
- `sv.DotAnnotator` and `sv.TriangleAnnotator`, allowing marker outlines to be added. (#1294)
🛠️ Fixed
- `sv.ColorAnnotator` and `sv.CropAnnotator` buggy behaviours. (#1277, #1312)
🧑‍🍳 Cookbooks
This release, @onuralpszr added two new Cookbooks to our collection. Check them out to learn how to save Detections to a file and convert it back to Detections!
🙏 Contributors
@onuralpszr (Onuralp SEZER), @David-rn (David Redó), @jeslinpjames (Jeslin P James), @Bhavay-2001 (Bhavay Malhotra), @hardikdava (Hardik Dava), @kirilman, @dsaha21 (Dripto Saha), @cdragos (Dragos Catarahia), @mqasim41 (Muhammad Qasim), @SkalskiP (Piotr Skalski), @LinasKo (Linas Kondrackis)
Special thanks to @rolson24 (Raif Olson) for helping the community with ByteTrack!
Python · Published by LinasKo over 1 year ago
supervision - supervision-0.21.0
📅 Timeline
The supervision-0.21.0 release is around the corner. Here is the timeline:
- 5 Jun 2024 08:00 PM CEST (UTC+2) / 5 Jun 2024 11:00 AM PDT (UTC-7) - merge `develop` into `main`, closing the `supervision-0.21.0` feature list
- 6 Jun 2024 11:00 AM CEST (UTC+2) / 6 Jun 2024 02:00 AM PDT (UTC-7) - release `supervision-0.21.0`
🪵 Changelog
🚀 Added
- `sv.Detections.with_nmm` to perform non-maximum merging on the current set of object detections. (#500)
- `sv.Detections.from_lmm`, allowing parsing of Large Multimodal Model (LMM) text results into an `sv.Detections` object. For now, `from_lmm` supports only PaliGemma result parsing. (#1221)
```python
import supervision as sv

paligemma_result = "<loc0256><loc0256><loc0768><loc0768> cat"
detections = sv.Detections.from_lmm(
    sv.LMM.PALIGEMMA,
    paligemma_result,
    resolution_wh=(1000, 1000),
    classes=['cat', 'dog']
)

detections.xyxy
# array([[250., 250., 750., 750.]])

detections.class_id
# array([0])
```
- `sv.VertexLabelAnnotator`, allowing annotation of every vertex of a keypoint skeleton with custom text and color. (#1236)
```python
import supervision as sv

image = ...
key_points = sv.KeyPoints(...)

LABELS = [
    "nose", "left eye", "right eye", "left ear", "right ear",
    "left shoulder", "right shoulder", "left elbow", "right elbow",
    "left wrist", "right wrist", "left hip", "right hip",
    "left knee", "right knee", "left ankle", "right ankle"
]

COLORS = [
    "#FF6347", "#FF6347", "#FF6347", "#FF6347", "#FF6347",
    "#FF1493", "#00FF00", "#FF1493", "#00FF00", "#FF1493",
    "#00FF00", "#FFD700", "#00BFFF", "#FFD700", "#00BFFF",
    "#FFD700", "#00BFFF"
]
COLORS = [sv.Color.from_hex(color_hex=c) for c in COLORS]

vertex_label_annotator = sv.VertexLabelAnnotator(
    color=COLORS,
    text_color=sv.Color.BLACK,
    border_radius=5
)
annotated_frame = vertex_label_annotator.annotate(
    scene=image.copy(),
    key_points=key_points,
    labels=LABELS
)
```
- `sv.KeyPoints.from_inference` and `sv.KeyPoints.from_yolo_nas`, allowing creation of `sv.KeyPoints` from Inference and YOLO-NAS results. (#1147 and #1138)
- `sv.mask_to_rle` and `sv.rle_to_mask`, allowing easy conversion between mask and RLE formats. (#1163)
🌱 Changed
- `sv.InferenceSlicer`, allowing selection of the overlap filtering strategy (`NONE`, `NON_MAX_SUPPRESSION`, and `NON_MAX_MERGE`). (#1236)
- `sv.InferenceSlicer`, adding instance segmentation model support. (#1178)
```python
import cv2
import numpy as np
import supervision as sv
from inference import get_model

model = get_model(model_id="yolov8x-seg-640")
image = cv2.imread("...")

def callback(image_slice: np.ndarray) -> sv.Detections:
    results = model.infer(image_slice)[0]
    return sv.Detections.from_inference(results)

slicer = sv.InferenceSlicer(callback=callback)
detections = slicer(image)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator()

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)
```
sv.LineZonemaking it 10-20 times faster, depending on the use case. (#1228)
sv.DetectionDataset.from_cocoandsv.DetectionDataset.as_cocoadding support for run-length encoding (RLE) mask format. (#1163)
🏆 Contributors
@onuralpszr (Onuralp SEZER), @LinasKo (Linas Kondrackis), @rolson24 (Raif Olson), @mario-dg (Mario da Graca), @xaristeidou (Christoforos Aristeidou), @ManzarIMalik (Manzar Iqbal Malik), @tc360950 (Tomasz Cąkała), @emSko, @SkalskiP (Piotr Skalski)
Published by SkalskiP over 1 year ago
supervision - supervision-0.20.0
🚀 Added
- `sv.KeyPoints` to provide initial support for pose estimation and broader keypoint detection models. (#1128)
- `sv.EdgeAnnotator` and `sv.VertexAnnotator` to enable rendering of results from keypoint detection models. (#1128)

```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO(...)
image = cv2.imread(...)

result = model(image, verbose=False)[0]
key_points = sv.KeyPoints.from_ultralytics(result)

edge_annotator = sv.EdgeAnnotator(color=sv.Color.GREEN, thickness=5)
annotated_image = edge_annotator.annotate(image.copy(), key_points)
```

```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO(...)
image = cv2.imread(...)

result = model(image, verbose=False)[0]
key_points = sv.KeyPoints.from_ultralytics(result)

vertex_annotator = sv.VertexAnnotator(color=sv.Color.GREEN, radius=10)
annotated_image = vertex_annotator.annotate(image.copy(), key_points)
```

🌱 Changed
- `sv.LabelAnnotator` by adding an additional `corner_radius` argument that allows for rounding the corners of the bounding box. (#1037)
- `sv.PolygonZone` such that the `frame_resolution_wh` argument is no longer required to initialize `sv.PolygonZone`. (#1109)
> [!WARNING]
> The `frame_resolution_wh` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.24.0`.
- `sv.get_polygon_center` to calculate a more accurate polygon centroid. (#1084)
- `sv.Detections.from_transformers` by adding support for Transformers segmentation models and extracting class name values. (#1069)

```python
import torch
import supervision as sv
from PIL import Image
from transformers import DetrImageProcessor, DetrForSegmentation

processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

image = Image.open(...)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

width, height = image.size
target_size = torch.tensor([[height, width]])
results = processor.post_process_segmentation(
    outputs=outputs, target_sizes=target_size)[0]
detections = sv.Detections.from_transformers(results, id2label=model.config.id2label)

mask_annotator = sv.MaskAnnotator()
label_annotator = sv.LabelAnnotator(text_position=sv.Position.CENTER)

annotated_image = mask_annotator.annotate(
    scene=image, detections=detections)
annotated_image = label_annotator.annotate(
    scene=annotated_image, detections=detections)
```

🛠️ Fixed
- `sv.ByteTrack.update_with_detections` which was removing segmentation masks while tracking. Now, `ByteTrack` can be used alongside segmentation models. (#787)
๐ Contributors
@onuralpszr (Onuralp SEZER), @rolson24 (Raif Olson), @xaristeidou (Christoforos Aristeidou), @jeslinpjames (Jeslin P James), @Griffin-Sullivan (Griffin Sullivan), @PawelPeczek-Roboflow (Paweล Pฤczek), @pirnerjonas (Jonas Pirner), @sharingan000, @macc-n, @LinasKo (Linas Kondrackis), @SkalskiP (Piotr Skalski)
- Python
Published by SkalskiP almost 2 years ago
supervision - supervision-0.19.0
🧑‍🍳 Cookbooks
Supervision Cookbooks - A curated open-source collection crafted by the community, offering practical examples, comprehensive guides, and walkthroughs for leveraging Supervision alongside diverse Computer Vision models. (#860)
🚀 Added
- `sv.CSVSink` allowing for the straightforward saving of image, video, or stream inference results in a `.csv` file. (#818)

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(...)
csv_sink = sv.CSVSink(...)
frames_generator = sv.get_video_frames_generator(...)

with csv_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        csv_sink.append(detections, custom_data={<CUSTOM_LABEL>: ...})
```
https://github.com/roboflow/supervision/assets/26109316/621588f9-69a0-44fe-8aab-ab4b0ef2ea1b
- `sv.JSONSink` allowing for the straightforward saving of image, video, or stream inference results in a `.json` file. (#819)

```python
import supervision as sv
from ultralytics import YOLO

model = YOLO(...)
json_sink = sv.JSONSink(...)
frames_generator = sv.get_video_frames_generator(...)

with json_sink:
    for frame in frames_generator:
        result = model(frame)[0]
        detections = sv.Detections.from_ultralytics(result)
        json_sink.append(detections, custom_data={<CUSTOM_LABEL>: ...})
```

- `sv.mask_iou_batch` allowing to compute Intersection over Union (IoU) of two sets of masks. (#847)
- `sv.mask_non_max_suppression` allowing to perform Non-Maximum Suppression (NMS) on segmentation predictions. (#847)
- `sv.CropAnnotator` allowing users to annotate the scene with scaled-up crops of detections. (#888)

```python
import cv2
import supervision as sv
from inference import get_model

model = get_model(...)
image = cv2.imread(...)

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)

crop_annotator = sv.CropAnnotator()
annotated_frame = crop_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```
https://github.com/roboflow/supervision/assets/26109316/0a5b67ce-55e7-4e26-9495-a68f9ad97ec7
🌱 Changed
- `sv.ByteTrack.reset` allowing users to clear trackers state, enabling the processing of multiple video files in sequence. (#827)
- `sv.LineZoneAnnotator` allowing to hide in/out count using `display_in_count` and `display_out_count` properties. (#802)
- `sv.ByteTrack` input arguments and docstrings updated to improve readability and ease of use. (#787)

> [!WARNING]
> The `track_buffer`, `track_thresh`, and `match_thresh` parameters in `sv.ByteTrack` are deprecated and will be removed in `supervision-0.23.0`. Use `lost_track_buffer`, `track_activation_threshold`, and `minimum_matching_threshold` instead.
- `sv.PolygonZone` to now accept a list of specific box anchors that must be in zone for a detection to be counted. (#910)

> [!WARNING]
> The `triggering_position` parameter in `sv.PolygonZone` is deprecated and will be removed in `supervision-0.23.0`. Use `triggering_anchors` instead.
- Annotators adding support for Pillow images. All supervision Annotators can now accept an image as either a numpy array or a Pillow Image. They automatically detect its type, draw annotations, and return the output in the same format as the input. (#875)
🛠️ Fixed
- `sv.DetectionsSmoother` removing `tracking_id` from `sv.Detections`. (#944)
- `sv.DetectionDataset` which, after changes introduced in `supervision-0.18.0`, failed to load datasets in YOLO, PASCAL VOC, and COCO formats.
🏆 Contributors
@onuralpszr (Onuralp SEZER), @LinasKo (Linas Kondrackis), @LeviVasconcelos (Levi Vasconcelos), @AdonaiVera (Adonai Vera), @xaristeidou (Christoforos Aristeidou), @Kadermiyanyedi (Kader Miyanyedi), @NickHerrig (Nick Herrig), @PacificDou (Shuyang Dou), @iamhatesz (Tomasz Wrona), @capjamesg (James Gallagher), @sansyo, @SkalskiP (Piotr Skalski)
Published by SkalskiP almost 2 years ago
supervision - supervision-0.18.0
🚀 Added
- `sv.PercentageBarAnnotator` allowing to annotate images and videos with percentage values representing confidence or other custom property. (#720)

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

percentage_bar_annotator = sv.PercentageBarAnnotator()
annotated_frame = percentage_bar_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

- `sv.RoundBoxAnnotator` allowing to annotate images and videos with rounded corners bounding boxes. (#702)
- `sv.DetectionsSmoother` allowing for smoothing detections over multiple frames in video tracking. (#696)
https://github.com/roboflow/supervision/assets/26109316/4dd703ad-ffba-492b-97ff-1be84e237e83
- `sv.OrientedBoxAnnotator` allowing to annotate images and videos with OBB (Oriented Bounding Boxes). (#770)

```python
import cv2
import supervision as sv
from ultralytics import YOLO

model = YOLO(...)
image = cv2.imread(...)

result = model(image)[0]
detections = sv.Detections.from_ultralytics(result)

oriented_box_annotator = sv.OrientedBoxAnnotator()
annotated_frame = oriented_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

- `sv.ColorPalette.from_matplotlib` allowing users to create a `sv.ColorPalette` instance from a Matplotlib color palette. (#769)
```python import supervision as sv
sv.ColorPalette.from_matplotlib('viridis', 5)
ColorPalette(colors=[Color(r=68, g=1, b=84), Color(r=59, g=82, b=139), ...])
```
🌱 Changed
- `sv.Detections.from_ultralytics` adding support for OBB (Oriented Bounding Boxes). (#770)
- `sv.LineZone` to now accept a list of specific box anchors that must cross the line for a detection to be counted. This update marks a significant improvement from the previous requirement, where all four box corners were necessary. Users can now specify a single anchor, such as `sv.Position.BOTTOM_CENTER`, or any other combination of anchors defined as `List[sv.Position]`. (#735)
- `sv.Detections` to support custom payload. (#700)
- `sv.Color`'s and `sv.ColorPalette`'s method of accessing predefined colors, transitioning from a function-based approach (`sv.Color.red()`) to a more intuitive and conventional property-based method (`sv.Color.RED`). (#756) (#769)
> [!WARNING]
> `sv.ColorPalette.default()` is deprecated and will be removed in `supervision-0.21.0`. Use `sv.ColorPalette.DEFAULT` instead.
- `sv.ColorPalette.DEFAULT` value updated, giving users a more extensive set of annotation colors. (#769)
- `sv.Detections.from_roboflow` renamed to `sv.Detections.from_inference`, streamlining its functionality to be compatible with both the inference pip package and the Roboflow hosted API. (#677)

> [!WARNING]
> `Detections.from_roboflow()` is deprecated and will be removed in `supervision-0.21.0`. Use `Detections.from_inference` instead.

```python
import cv2
import supervision as sv
from inference.models.utils import get_roboflow_model

model = get_roboflow_model(...)
image = cv2.imread(...)

result = model.infer(image)[0]
detections = sv.Detections.from_inference(result)
```
🛠️ Fixed
- `sv.LineZone` functionality to accurately update the counter when an object crosses a line from any direction, including from the side. This enhancement enables more precise tracking and analytics, such as calculating individual in/out counts for each lane on the road. (#735)
https://github.com/roboflow/supervision/assets/26109316/412c4d9c-b228-4bcc-a4c7-e6a0c8f2da6e
🏆 Contributors
@onuralpszr (Onuralp SEZER), @HinePo (Rafael Levy), @xaristeidou (Christoforos Aristeidou), @revtheundead (Utku Özbek), @paulguerrie (Paul Guerrie), @yeldarby (Brad Dwyer), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
Published by SkalskiP about 2 years ago
supervision - supervision-0.17.1
🚀 Added
- Support for Python 3.12.
🏆 Contributors
@onuralpszr (Onuralp SEZER), @SkalskiP (Piotr Skalski)
Published by SkalskiP about 2 years ago
supervision - supervision-0.17.0
🚀 Added
- `sv.PixelateAnnotator` allowing to pixelate objects on images and videos. (#633)
https://github.com/roboflow/supervision/assets/26109316/c2d4b3b1-fd19-44bb-94ec-f21b28dfd05f
- `sv.TriangleAnnotator` allowing to annotate images and videos with triangle markers. (#652)
- `sv.PolygonAnnotator` allowing to annotate images and videos with segmentation mask outline. (#602)

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

polygon_annotator = sv.PolygonAnnotator()
annotated_frame = polygon_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```
https://github.com/roboflow/supervision/assets/26109316/c9236bf7-6ba4-4799-bf2a-b5532ad3591b
- `sv.assets` allowing download of video files that you can use in your demos. (#476)

```python
from supervision.assets import download_assets, VideoAssets

download_assets(VideoAssets.VEHICLES)
"vehicles.mp4"
```

- `Position.CENTER_OF_MASS` allowing to place labels in center of mass of segmentation masks. (#605)
- `sv.scale_boxes` allowing to scale `sv.Detections.xyxy` values. (#651)
- `sv.calculate_dynamic_text_scale` and `sv.calculate_dynamic_line_thickness` allowing text scale and line thickness to match image resolution. (#637)
- `sv.Color.as_hex` allowing to extract color value in HEX format. (#620)
- `sv.Classifications.from_timm` allowing to load classification result from timm models. (#572)
- `sv.Classifications.from_clip` allowing to load classification result from clip model. (#478)
- `sv.Detections.from_azure_analyze_image` allowing to load detection results from Azure Image Analysis. (#571)
🌱 Changed
- `sv.BoxMaskAnnotator` renamed to `sv.ColorAnnotator`. (#646)
- `sv.MaskAnnotator` to make it 5x faster. (#606)
🛠️ Fixed
- `sv.DetectionDataset.from_yolo` to ignore empty lines in annotation files. (#584)
- `sv.BlurAnnotator` to trim negative coordinates before blurring detections. (#555)
- `sv.TraceAnnotator` to respect trace position. (#511)
🏆 Contributors
@onuralpszr (Onuralp SEZER), @hugoles (Hugo Dutra), @karanjakhar (Karan Jakhar), @kim-jeonghyun (Jeonghyun Kim), @fdloopes (Felipe Lopes), @abhishek7kalra (Abhishek Kalra), @SummitStudiosDev, @xenteros, @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
Published by SkalskiP about 2 years ago
supervision - supervision-0.16.0
🚀 Added
https://github.com/roboflow/supervision/assets/26109316/691e219c-0565-4403-9218-ab5644f39bce
- `sv.BoxMaskAnnotator` allowing to annotate images and videos with box masks. (#422)
- `sv.HaloAnnotator` allowing to annotate images and videos with halo effect. (#433)

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

halo_annotator = sv.HaloAnnotator()
annotated_frame = halo_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```

- `sv.HeatMapAnnotator` allowing to annotate videos with heat maps. (#466)
- `sv.DotAnnotator` allowing to annotate images and videos with dots. (#492)
- `sv.draw_image` allowing to draw an image onto a given scene with specified opacity and dimensions. (#449)
- `sv.FPSMonitor` for monitoring frames per second (FPS) to benchmark latency. (#280)
- 🤗 Hugging Face Annotators space. (#454)
🌱 Changed
- `sv.LineZone.trigger` now returns `Tuple[np.ndarray, np.ndarray]`. The first array indicates which detections have crossed the line from outside to inside. The second array indicates which detections have crossed the line from inside to outside. (#482)
- Annotator argument name from `color_map: str` to `color_lookup: ColorLookup` enum to increase type safety. (#465)
- `sv.MaskAnnotator` allowing 2x faster annotation. (#426)
🛠️ Fixed
- Poetry env definition allowing proper local installation. (#477)
- `sv.ByteTrack` to return `np.array([], dtype=int)` when `sv.Detections` is empty. (#430)
- YOLO-NAS detection missing prediction part added & fixed. (#416)
- SAM detection at Demo Notebook: `MaskAnnotator(color_map="index")` with `color_map` set to `index`. (#416)
🗑️ Deleted

> [!WARNING]
> Deleted `sv.Detections.from_yolov8` and `sv.Classifications.from_yolov8` as those are now replaced by `sv.Detections.from_ultralytics` and `sv.Classifications.from_ultralytics`. (#438)

🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @kapter, @keshav278 (Keshav Subramanian), @akashpambhar (Akash Pambhar), @AntonioConsiglio (Antonio Consiglio), @ashishdatta, @mario-dg (Mario da Graca), @jayaBalaR (JAYABALAMBIKA.R), @abhishek7kalra (Abhishek Kalra), @PankajKrana (Pankaj Kumar Rana), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
Published by SkalskiP over 2 years ago
supervision - supervision-0.15.0
🚀 Added
https://github.com/roboflow/supervision/assets/26109316/4d6c4a70-b40e-48fc-9e58-23b7e67bf94a
- `sv.LabelAnnotator` allowing to annotate images and videos with text. (#170)
- `sv.BoundingBoxAnnotator` allowing to annotate images and videos with bounding boxes. (#170)
- `sv.BoxCornerAnnotator` allowing to annotate images and videos with just bounding box corners. (#170)
- `sv.MaskAnnotator` allowing to annotate images and videos with segmentation masks. (#170)
- `sv.EllipseAnnotator` allowing to annotate images and videos with ellipses (sports game style). (#170)
- `sv.CircleAnnotator` allowing to annotate images and videos with circles. (#386)
- `sv.TraceAnnotator` allowing to draw path of moving objects on videos. (#354)
- `sv.BlurAnnotator` allowing to blur objects on images and videos. (#405)

```python
import supervision as sv

image = ...
detections = sv.Detections(...)

bounding_box_annotator = sv.BoundingBoxAnnotator()
annotated_frame = bounding_box_annotator.annotate(
    scene=image.copy(),
    detections=detections
)
```
- Supervision usage example. You can now learn how to perform traffic flow analysis with Supervision. (#354)
https://github.com/roboflow/supervision/assets/26109316/c9436828-9fbf-4c25-ae8c-60e9c81b3900
🌱 Changed
- `sv.Detections.from_roboflow` now does not require `class_list` to be specified. The `class_id` value can be extracted directly from the inference response. (#399)
- `sv.VideoSink` now allows to customize the output codec. (#381)
- `sv.InferenceSlicer` can now operate in multithreading mode. (#361)
🛠️ Fixed
- `sv.Detections.from_deepsparse` to allow processing empty deepsparse result object. (#348)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @Killua7362 (Akshay Bhat), @fcakyon (Fatih C. Akyon), @akashAD98 (Akash A Desai), @Rajarshi-Misra, @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
Published by SkalskiP over 2 years ago
supervision - 0.14.0
🚀 Added
- Support for SAHI inference technique with `sv.InferenceSlicer`. (#282)

```python
import cv2
import numpy as np
import supervision as sv
from ultralytics import YOLO

image = cv2.imread(SOURCE_IMAGE_PATH)
model = YOLO(...)

def callback(image_slice: np.ndarray) -> sv.Detections:
    result = model(image_slice)[0]
    return sv.Detections.from_ultralytics(result)

slicer = sv.InferenceSlicer(callback=callback)

detections = slicer(image)
```
https://github.com/roboflow/supervision/assets/26109316/da665575-4d74-469c-a1f7-a43b7ee7e214
- `Detections.from_deepsparse` to enable seamless integration with DeepSparse framework. (#297)
- `sv.Classifications.from_ultralytics` to enable seamless integration with Ultralytics framework. This will enable you to use supervision with all models that Ultralytics supports. (#281)

> [!WARNING]
> `sv.Detections.from_yolov8` and `sv.Classifications.from_yolov8` are now deprecated and will be removed with the `supervision-0.16.0` release.

- First supervision usage example script showing how to detect and track objects on video using YOLOv8 + Supervision. (#341)
https://github.com/roboflow/supervision/assets/26109316/d8128440-6bd7-491a-8c7d-519254b76ec5
🌱 Changed
- `sv.ClassificationDataset` and `sv.DetectionDataset` now use image path (not image name) as dataset keys. (#296)
🛠️ Fixed
- `Detections.from_roboflow` to filter out polygons with less than 3 points. (#300)
🏆 Contributors
@hardikdava (Hardik Dava), @onuralpszr (Onuralp SEZER), @mayankagarwals (Mayank Agarwal), @rizavelioglu (Riza Velioglu), @arjun-234 (Arjun D.), @mwitiderrick (Derrick Mwiti), @ShubhamKanitkar32, @gasparitiago (Tiago De Gaspari), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
Published by capjamesg over 2 years ago
supervision - supervision-0.13.0
🚀 Added
- Support for mean average precision (mAP) for object detection models with `sv.MeanAveragePrecision`. (#236)

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

dataset = sv.DetectionDataset.from_yolo(...)

model = YOLO(...)
def callback(image: np.ndarray) -> sv.Detections:
    result = model(image)[0]
    return sv.Detections.from_yolov8(result)

mean_average_precision = sv.MeanAveragePrecision.benchmark(
    dataset=dataset,
    callback=callback
)

mean_average_precision.map50_95
0.433
```
- Support for `ByteTrack` for object tracking with `sv.ByteTrack`. (#256)

```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

model = YOLO(...)
byte_tracker = sv.ByteTrack()
annotator = sv.BoxAnnotator()

def callback(frame: np.ndarray, index: int) -> np.ndarray:
    results = model(frame)[0]
    detections = sv.Detections.from_yolov8(results)
    detections = byte_tracker.update_from_detections(detections=detections)
    labels = [
        f"#{tracker_id} {model.model.names[class_id]} {confidence:0.2f}"
        for _, _, confidence, class_id, tracker_id
        in detections
    ]
    return annotator.annotate(scene=frame.copy(), detections=detections, labels=labels)

sv.process_video(
    source_path='...',
    target_path='...',
    callback=callback
)
```
https://github.com/roboflow/supervision/assets/26109316/d5d393f5-e577-474a-bc8c-82483ef8a578
- `sv.Detections.from_ultralytics` to enable seamless integration with Ultralytics framework. This will enable you to use `supervision` with all models that Ultralytics supports. (#222)

> [!WARNING]
> `sv.Detections.from_yolov8` is now deprecated and will be removed with the `supervision-0.15.0` release.

- `sv.Detections.from_paddledet` to enable seamless integration with PaddleDetection framework. (#191)
- Support for loading PASCAL VOC segmentation datasets with `sv.DetectionDataset.from_pascal_voc`. (#245)
🏆 Contributors
@hardikdava (Hardik Dava), @kirilllzaitsev (Kirill Zaitsev), @onuralpszr (Onuralp SEZER), @dbroboflow, @mayankagarwals (Mayank Agarwal), @danigarciaoca (Daniel M. García-Ocaña), @capjamesg (James Gallagher), @SkalskiP (Piotr Skalski)
Published by capjamesg over 2 years ago
supervision - supervision-0.12.0
> [!WARNING]
> With the `supervision-0.12.0` release, we are terminating official support for Python 3.7. (#179)

🚀 Added
- Initial support for object detection model benchmarking with `sv.ConfusionMatrix`. (#177)
```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

dataset = sv.DetectionDataset.from_yolo(...)

model = YOLO(...)
def callback(image: np.ndarray) -> sv.Detections:
    result = model(image)[0]
    return sv.Detections.from_yolov8(result)

confusion_matrix = sv.ConfusionMatrix.benchmark(
    dataset=dataset,
    callback=callback
)

confusion_matrix.matrix
array([
    [0., 0., 0., 0.],
    [0., 1., 0., 1.],
    [0., 1., 1., 0.],
    [1., 1., 0., 0.]
])
```
- `Detections.from_mmdetection` to enable seamless integration with MMDetection framework. (#173)
- Ability to install package in `headless` or `desktop` mode. (#130)
🌱 Changed
- Packaging method from `setup.py` to `pyproject.toml`. (#180)
🛠️ Fixed
- `sv.DetectionDataset.from_coco` can't be loaded when there are images without annotations. (#188)
- `sv.DetectionDataset.from_yolo` can't load background instances. (#226)
🏆 Contributors
@kirilllzaitsev @hardikdava @onuralpszr @Ucag @SkalskiP @capjamesg
Published by capjamesg over 2 years ago
supervision - supervision-0.11.1
🛠️ Fixed
- `as_folder_structure` fails to save `sv.ClassificationDataset` when it is result of inference. (https://github.com/roboflow/supervision/pull/165)
🏆 Contributors
@capjamesg @SkalskiP
Published by SkalskiP over 2 years ago
supervision - supervision-0.11.0
🚀 Added
- Ability to load and save `sv.DetectionDataset` in COCO format using `as_coco` and `from_coco` methods. (https://github.com/roboflow/supervision/pull/150)

```python
import supervision as sv

ds = sv.DetectionDataset.from_coco(
    images_directory_path='...',
    annotations_path='...'
)

ds.as_coco(
    images_directory_path='...',
    annotations_path='...'
)
```
- Ability to merge multiple `sv.DetectionDataset` together using the `merge` method. (https://github.com/roboflow/supervision/pull/158)

```python
import supervision as sv

ds_1 = sv.DetectionDataset(...)
len(ds_1)
100
ds_1.classes
['dog', 'person']

ds_2 = sv.DetectionDataset(...)
len(ds_2)
200
ds_2.classes
['cat']

ds_merged = sv.DetectionDataset.merge([ds_1, ds_2])
len(ds_merged)
300
ds_merged.classes
['cat', 'dog', 'person']
```
- Additional `start` and `end` arguments to `sv.get_video_frames_generator` allowing to generate frames only for a selected part of the video. (https://github.com/roboflow/supervision/pull/162)
🛠️ Fixed
- Incorrect loading of YOLO dataset class names from `data.yaml`. (https://github.com/roboflow/supervision/pull/157)
🏆 Contributors
@SkalskiP @hardikdava
Published by SkalskiP over 2 years ago
supervision - supervision-0.10.0
🚀 Added
- Ability to load and save `sv.ClassificationDataset` in a folder structure format. (https://github.com/roboflow/supervision/pull/125)

```python
import supervision as sv

cs = sv.ClassificationDataset.from_folder_structure(
    root_directory_path='...'
)

cs.as_folder_structure(
    root_directory_path='...'
)
```
- Support for `sv.ClassificationDataset.split` allowing to divide `sv.ClassificationDataset` into two parts. (https://github.com/roboflow/supervision/pull/125)

```python
import supervision as sv

cs = sv.ClassificationDataset(...)
train_cs, test_cs = cs.split(split_ratio=0.7, random_state=42, shuffle=True)

len(train_cs), len(test_cs)
(700, 300)
```

- Ability to extract masks from Roboflow API results using `sv.Detections.from_roboflow`. (https://github.com/roboflow/supervision/pull/110)
- Supervision Quickstart notebook where you can learn more about Detection, Dataset and Video APIs.
🌱 Changed
- `sv.get_video_frames_generator` documentation to better describe actual behavior. (https://github.com/roboflow/supervision/pull/135)
🏆 Contributors
@capjamesg @dankresio @SkalskiP
Published by SkalskiP over 2 years ago
supervision - supervision-0.9.0
🚀 Added
- Ability to select `sv.Detections` by index, list of indexes or slice. Here is an example illustrating the new selection methods. (https://github.com/roboflow/supervision/pull/118)

```python
import supervision as sv

detections = sv.Detections(...)
len(detections[0])
1
len(detections[[0, 1]])
2
len(detections[0:2])
2
```
- Ability to extract masks from YOLOv8 results using `sv.Detections.from_yolov8`. Here is an example illustrating how to extract boolean masks from the result of the YOLOv8 model inference. (https://github.com/roboflow/supervision/pull/101)

```python
import cv2
from ultralytics import YOLO
import supervision as sv

image = cv2.imread(...)
image.shape
(640, 640, 3)

model = YOLO('yolov8s-seg.pt')
result = model(image)[0]
detections = sv.Detections.from_yolov8(result)
detections.mask.shape
(2, 640, 640)
```
- Ability to crop the image using `sv.crop`. Here is an example showing how to get a separate crop for each detection in `sv.Detections`. (https://github.com/roboflow/supervision/pull/122)

```python
import cv2
import supervision as sv

image = cv2.imread(...)
detections = sv.Detections(...)
len(detections)
2
crops = [
    sv.crop(image=image, xyxy=xyxy)
    for xyxy
    in detections.xyxy
]
len(crops)
2
```
- Ability to conveniently save multiple images into directory using `sv.ImageSink`. An example shows how to save every tenth video frame as a separate image. (https://github.com/roboflow/supervision/pull/120)

```python
import supervision as sv

with sv.ImageSink(target_dir_path='target/directory/path') as sink:
    for image in sv.get_video_frames_generator(source_path='source_video.mp4', stride=10):
        sink.save_image(image=image)
```
🛠️ Fixed
- Inconvenient handling of `sv.PolygonZone` coordinates. Now `sv.PolygonZone` accepts coordinates in the form of `[[x1, y1], [x2, y2], ...]` that can be both integers and floats. (https://github.com/roboflow/supervision/issues/106)
🏆 Contributors
@SkalskiP @lomnes-atlast-food @hardikdava
Published by SkalskiP over 2 years ago
supervision - supervision-0.8.0
🚀 Added
- Support for dataset inheritance. The current `Dataset` got renamed to `DetectionDataset`. Now `DetectionDataset` inherits from `BaseDataset`. This change was made to enforce the future consistency of APIs of different types of computer vision datasets. (https://github.com/roboflow/supervision/pull/100)
- Ability to save datasets in YOLO format using `DetectionDataset.as_yolo`. (https://github.com/roboflow/supervision/pull/100)

```python
import supervision as sv

ds = sv.DetectionDataset(...)
ds.as_yolo(
    images_directory_path='...',
    annotations_directory_path='...',
    data_yaml_path='...'
)
```
- Support for `DetectionDataset.split` allowing to divide `DetectionDataset` into two parts. (https://github.com/roboflow/supervision/pull/102)

```python
import supervision as sv

ds = sv.DetectionDataset(...)
train_ds, test_ds = ds.split(split_ratio=0.7, random_state=42, shuffle=True)

len(train_ds), len(test_ds)
(700, 300)
```
🌱 Changed
- Default value of `approximation_percentage` parameter from `0.75` to `0.0` in `DetectionDataset.as_yolo` and `DetectionDataset.as_pascal_voc`. (https://github.com/roboflow/supervision/pull/100)
🏆 Contributors
- @SkalskiP
Published by SkalskiP almost 3 years ago
supervision - supervision-0.7.0
🚀 Added
- `Detections.from_yolo_nas` to enable seamless integration with YOLO-NAS model. (https://github.com/roboflow/supervision/pull/91)
- Ability to load datasets in YOLO format using `Dataset.from_yolo`. (https://github.com/roboflow/supervision/pull/86)
- `Detections.merge` to merge multiple `Detections` objects together. (https://github.com/roboflow/supervision/pull/84)
🌱 Changed
- `LineZoneAnnotator.annotate` to allow for the custom text for the in and out tags. (https://github.com/roboflow/supervision/pull/44)
🛠️ Fixed
- `LineZoneAnnotator.annotate` does not return annotated frame. (https://github.com/roboflow/supervision/pull/81)
🏆 Contributors
- @SkalskiP
- @iPoe
- @hardikdava
Published by SkalskiP almost 3 years ago
supervision - supervision-0.6.0
🚀 Added
- Initial `Dataset` support and ability to save `Detections` in Pascal VOC XML format. (https://github.com/roboflow/supervision/pull/71)
- New `mask_to_polygons`, `filter_polygons_by_area`, `polygon_to_xyxy` and `approximate_polygon` utilities. (https://github.com/roboflow/supervision/pull/71)
- Ability to load Pascal VOC XML object detections dataset as `Dataset`. (https://github.com/roboflow/supervision/pull/72)
🌱 Changed
- Order of `Detections` attributes to make it consistent with order of objects in `__iter__` tuple. (https://github.com/roboflow/supervision/pull/70)
- `generate_2d_mask` renamed to `polygon_to_mask`. (https://github.com/roboflow/supervision/pull/71)
🏆 Contributors
- @SkalskiP
- @alexandercarruthers
Published by SkalskiP almost 3 years ago
Published by SkalskiP almost 3 years ago
supervision - supervision-0.5.2
🛠️ Fixed
- Fixed `LineZone.trigger` function expects 4 values instead of 5. (https://github.com/roboflow/supervision/pull/63)
🏆 Contributors
- @SkalskiP @ChaseDDevelopment
Published by SkalskiP almost 3 years ago
supervision - supervision-0.5.1
🛠️ Fixed
- Fixed `Detections.__getitem__` method did not return mask for selected item.
- Fixed `Detections.area` crashed for mask detections.
🏆 Contributors
- @SkalskiP
Published by SkalskiP almost 3 years ago
supervision - supervision-0.5.0
🚀 Added
- `Detections.mask` to enable segmentation support. (https://github.com/roboflow/supervision/pull/58)
- `MaskAnnotator` to allow easy `Detections.mask` annotation. (https://github.com/roboflow/supervision/pull/58)
- `Detections.from_sam` to enable native Segment Anything Model (SAM) support. (https://github.com/roboflow/supervision/pull/58)
🌱 Changed
- `Detections.area` behaviour to work not only with boxes but also with masks. (https://github.com/roboflow/supervision/pull/58)
🏆 Contributors
- @SkalskiP
Published by SkalskiP almost 3 years ago
supervision - supervision-0.4.0
๐ Added
Detections.emptyto allow easy creation of emptyDetectionsobjects. (https://github.com/roboflow/supervision/discussions/48)Detections.from_roboflowto allow easy creation ofDetectionsobjects from Roboflow API inference results. (https://github.com/roboflow/supervision/pull/56)plot_images_gridto allow easy plotting of multiple images on single plot. (https://github.com/roboflow/supervision/pull/56)- Initial support for Pascal VOC XML format with
detections_to_voc_xmlmethod. (https://github.com/roboflow/supervision/pull/56)
๐ฑ Changed
show_frame_in_notebookrefactored and renamed toplot_image. (https://github.com/roboflow/supervision/pull/56)
๐ Contributors
- @SkalskiP
- Python
Published by SkalskiP almost 3 years ago
supervision - supervision-0.3.2
🌱 Changed
- Dropped requirement for `class_id` in `sv.Detections` to make it more flexible. (https://github.com/roboflow/supervision/pull/50)
🏆 Contributors
- @SkalskiP
Published by SkalskiP almost 3 years ago
supervision - supervision-0.3.1
๐ฑ Changed
Detections.wth_nmssupport class agnostic and non-class agnostic case (https://github.com/roboflow/supervision/pull/36)
๐ ๏ธ Fixed
PolygonZonethrows an exception when the object touches the bottom edge of the image (https://github.com/roboflow/supervision/issues/41)Detections.wth_nmsmethod throws an exception whenDetectionsis empty (https://github.com/roboflow/supervision/issues/42)
🏆 Contributors
- @SkalskiP
Published by SkalskiP almost 3 years ago
supervision - supervision-0.3.0
🚀 Added
New methods in the `sv.Detections` API:
- `from_transformers` - convert Object Detection 🤗 Transformers results into `sv.Detections`
- `from_detectron2` - convert Detectron2 results into `sv.Detections`
- `from_coco_annotations` - convert COCO annotations into `sv.Detections`
- `area` - dynamically calculated property storing bbox area
- `with_nms` - initial implementation (class-agnostic only) of `sv.Detections` NMS
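Class-agnostic NMS of the kind introduced here can be sketched as greedy suppression over IoU; a minimal NumPy illustration, not supervision's actual code:

```python
import numpy as np

def box_areas(b: np.ndarray) -> np.ndarray:
    """Areas of (..., 4) boxes in (x1, y1, x2, y2) format."""
    return (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """IoU between one box and an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    return inter / (box_areas(box) + box_areas(boxes) - inter)

def nms(boxes: np.ndarray, scores: np.ndarray, iou_threshold: float = 0.5) -> np.ndarray:
    """Greedy class-agnostic NMS; returns indices of kept boxes."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        # drop remaining boxes that overlap the kept one too much
        order = rest[iou(boxes[best], boxes[rest]) < iou_threshold]
    return np.array(keep)

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0 2] - the near-duplicate of box 0 is suppressed
```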
🌱 Changed
- Made the `sv.Detections.confidence` field `Optional`.
🏆 Contributors
- @SkalskiP
Published by SkalskiP almost 3 years ago
supervision - supervision-0.2.0
💪 Killer features
- Support for `PolygonZone` and `PolygonZoneAnnotator` 🔥
Code example
```python
import numpy as np
import supervision as sv
from ultralytics import YOLO

# initiate polygon zone
polygon = np.array([
    [1900, 1250],
    [2350, 1250],
    [3500, 2160],
    [1250, 2160]
])
video_info = sv.VideoInfo.from_video_path(MALL_VIDEO_PATH)
zone = sv.PolygonZone(polygon=polygon, frame_resolution_wh=video_info.resolution_wh)

# initiate annotators
box_annotator = sv.BoxAnnotator(thickness=4, text_thickness=4, text_scale=2)
zone_annotator = sv.PolygonZoneAnnotator(zone=zone, color=sv.Color.white(), thickness=6, text_thickness=6, text_scale=4)

# extract video frame
generator = sv.get_video_frames_generator(MALL_VIDEO_PATH)
iterator = iter(generator)
frame = next(iterator)

# detect
model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
detections = detections[detections.class_id == 0]
zone.trigger(detections=detections)

# annotate
labels = [f"{model.names[class_id]} {confidence:0.2f}" for _, confidence, class_id, _ in detections]
frame = box_annotator.annotate(scene=frame, detections=detections, labels=labels)
frame = zone_annotator.annotate(scene=frame)
```
- Advanced `sv.Detections` filtering with a pandas-like API.
```python
detections = detections[(detections.class_id == 0) & (detections.confidence > 0.5)]
```
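This filtering style only needs parallel NumPy arrays and boolean-mask indexing. A toy sketch with a hypothetical `Dets` class (not supervision's implementation) shows the mechanics:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Dets:
    xyxy: np.ndarray        # (N, 4) boxes
    confidence: np.ndarray  # (N,) scores
    class_id: np.ndarray    # (N,) class indices

    def __getitem__(self, mask: np.ndarray) -> "Dets":
        # apply the same boolean mask to every parallel array
        return Dets(self.xyxy[mask], self.confidence[mask], self.class_id[mask])

dets = Dets(
    xyxy=np.array([[0, 0, 10, 10], [5, 5, 15, 15], [20, 20, 30, 30]]),
    confidence=np.array([0.9, 0.4, 0.8]),
    class_id=np.array([0, 0, 1]),
)
filtered = dets[(dets.class_id == 0) & (dets.confidence > 0.5)]
print(len(filtered.xyxy))  # 1 - only the high-confidence class-0 detection survives
```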
- Improved integration with `YOLOv5` and `YOLOv8` models.
```python
import torch
import supervision as sv

model = torch.hub.load('ultralytics/yolov5', 'yolov5x6')
results = model(frame, size=1280)
detections = sv.Detections.from_yolov5(results)
```
```python
from ultralytics import YOLO
import supervision as sv

model = YOLO('yolov8s.pt')
results = model(frame, imgsz=1280)[0]
detections = sv.Detections.from_yolov8(results)
```
🚀 Added
- `supervision.get_polygon_center` function - takes a polygon as a 2-dimensional `numpy.ndarray` and returns the center of the polygon as a `Point` object
- `supervision.draw_polygon` function - draw a polygon on a scene
- `supervision.draw_text` function - draw text on a scene
- `supervision.ColorPalette.default()` class method - generate a default `ColorPalette`
- `supervision.generate_2d_mask` function - generate a 2D mask from a polygon
- `supervision.PolygonZone` class - define polygon zones and validate if `supervision.Detections` are in the zone
- `supervision.PolygonZoneAnnotator` class - draw `supervision.PolygonZone` on a scene
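A polygon-center utility like the one above can be sketched as the mean of the vertices; this is one plausible definition, and the library's actual implementation may differ:

```python
import numpy as np

def get_polygon_center(polygon: np.ndarray) -> tuple:
    """Center of an (N, 2) polygon, taken as the mean of its vertices."""
    cx, cy = polygon.mean(axis=0)
    return (float(cx), float(cy))

square = np.array([[0, 0], [10, 0], [10, 10], [0, 10]])
print(get_polygon_center(square))  # (5.0, 5.0)
```

Note that the vertex mean only coincides with the visual centroid for reasonably regular polygons; a proper area-weighted centroid would need the shoelace formula.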
🌱 Changed
- `VideoInfo` API - changed the property name `resolution` -> `resolution_wh` to make it more descriptive; converted `VideoInfo` to a `dataclass`
- `process_frame` API - changed the argument name `frame` -> `scene` to make it consistent with other classes and methods
- `LineCounter` API - renamed the class `LineCounter` -> `LineZone` to make it consistent with `PolygonZone`
- `LineCounterAnnotator` API - renamed the class `LineCounterAnnotator` -> `LineZoneAnnotator`
🏆 Contributors
- @SkalskiP
- @capjamesg
Published by capjamesg about 3 years ago
supervision - supervision-0.1.0
🚀 Added
- Add project license
- `DEFAULT_COLOR_PALETTE`, `Color`, and `ColorPalette` classes
- Initial implementation of `Point`, `Vector`, and `Rect` classes
- `VideoInfo` and `VideoSink` classes as well as `get_video_frames_generator`
- `show_frame_in_notebook` util
- `draw_line`, `draw_rectangle`, `draw_filled_rectangle` utils added
- Initial version of `Detections` and `BoxAnnotator` added
- Initial implementation of `LineCounter` and `LineCounterAnnotator` classes
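A line counter of this kind typically decides which side of the line a point is on via the sign of a 2-D cross product, and registers a crossing when that sign flips between frames. A minimal sketch of the side-of-line test (assumed mechanics, not the actual `LineCounter` code):

```python
import numpy as np

def side_of_line(start: np.ndarray, end: np.ndarray, point: np.ndarray) -> int:
    """Return +1, -1, or 0 depending on which side of the start->end line the point lies."""
    v = end - start    # line direction
    w = point - start  # vector from line start to the point
    cross = v[0] * w[1] - v[1] * w[0]
    return int(np.sign(cross))

start, end = np.array([0, 0]), np.array([10, 0])
print(side_of_line(start, end, np.array([5, 3])))   # 1
print(side_of_line(start, end, np.array([5, -3])))  # -1
```

Tracking this value per object ID across frames is then enough to count in/out crossings.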
🏆 Contributors
- @SkalskiP
Published by SkalskiP about 3 years ago