https://github.com/autodistill/autodistill-transformers

Use object detection models in Hugging Face Transformers to automatically label data to train a fine-tuned model.

Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.4%) to scientific vocabulary

Keywords

computer-vision object-detection
Last synced: 5 months ago

Repository

Use object detection models in Hugging Face Transformers to automatically label data to train a fine-tuned model.

Basic Info
Statistics
  • Stars: 1
  • Watchers: 3
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
computer-vision object-detection
Created over 2 years ago · Last pushed about 2 years ago
Metadata Files
  • Readme
  • License

README.md

Autodistill Transformers Module

This repository contains the code supporting the Transformers models for use with Autodistill.

Transformers, maintained by Hugging Face, features a range of state-of-the-art models for Natural Language Processing (NLP), computer vision, and more.

This package allows you to write a function that calls a Transformers object detection model and use it to automatically label data. You can use this data to train a fine-tuned model using an architecture supported by Autodistill (e.g., YOLOv8, YOLOv5, or DETR).
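For example, once label() has produced a dataset (see the Quickstart below), training a YOLOv8 target model might look like the following hedged sketch. It assumes the separate autodistill-yolov8 package, and the dataset path is illustrative: point it at the data.yaml that label() actually wrote.

```python
# Hedged sketch: requires the separate autodistill-yolov8 package
# (pip3 install autodistill-yolov8).
from autodistill_yolov8 import YOLOv8

# Train a smaller, faster model on the auto-labeled data.
target_model = YOLOv8("yolov8n.pt")

# "./context_images_labeled/data.yaml" is an assumed path; use the
# data.yaml produced by base_model.label() in your run.
target_model.train("./context_images_labeled/data.yaml", epochs=200)
```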

Read the full Autodistill documentation.

Installation

To use Transformers with autodistill, you need to install the following dependency:

```bash
pip3 install autodistill-transformers
```
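
The Quickstart example below also imports torch, transformers, and OpenCV; if they are not already installed, something along these lines should cover them (a hedged suggestion, since exact packages and versions depend on your environment):

```bash
# Supporting libraries used by the Quickstart (assumed, not pinned)
pip3 install torch transformers opencv-python
```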

Quickstart

The following example shows how to use the Transformers module to label images using the OwlViTForObjectDetection model.

You can update the inference() function to use any object detection model supported in the Transformers library; a hedged Owlv2 variant is sketched after the example.

```python
import cv2
import torch
from autodistill.detection import CaptionOntology
from autodistill.utils import plot
from transformers import OwlViTForObjectDetection, OwlViTProcessor

from autodistill_transformers import TransformersModel

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")


def inference(image, prompts):
    inputs = processor(text=prompts, images=image, return_tensors="pt")
    outputs = model(**inputs)

    target_sizes = torch.Tensor([image.size[::-1]])

    results = processor.post_process_object_detection(
        outputs=outputs, target_sizes=target_sizes, threshold=0.1
    )[0]

    return results


base_model = TransformersModel(
    ontology=CaptionOntology(
        {
            "a photo of a person": "person",
            "a photo of a cat": "cat",
        }
    ),
    callback=inference,
)

# run inference
results = base_model.predict("image.jpg", confidence=0.1)

print(results)

# plot results
plot(
    image=cv2.imread("image.jpg"),
    detections=results,
    classes=base_model.ontology.classes(),
)

# label a directory of images
base_model.label("./context_images", extension=".jpeg")
```
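
As an illustration of swapping in a different open-vocabulary detector, here is a hedged sketch of the same inference() callback using Owlv2 (the google/owlv2-base-patch16-ensemble checkpoint is an assumption; everything else mirrors the example above):

```python
import torch
from transformers import Owlv2ForObjectDetection, Owlv2Processor

# Sketch only: Owlv2 exposes the same processor and post-processing API
# as OwlViT, so the callback shape is unchanged.
processor = Owlv2Processor.from_pretrained("google/owlv2-base-patch16-ensemble")
model = Owlv2ForObjectDetection.from_pretrained("google/owlv2-base-patch16-ensemble")


def inference(image, prompts):
    inputs = processor(text=prompts, images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # image is expected to be a PIL image here, as in the example above
    target_sizes = torch.Tensor([image.size[::-1]])

    return processor.post_process_object_detection(
        outputs=outputs, target_sizes=target_sizes, threshold=0.1
    )[0]
```

Because the returned label indices follow the order of the text prompts, the ontology mapping works the same way as in the OwlViT example.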

License

This project is licensed under an MIT license.

🏆 Contributing

We love your input! Please see the core Autodistill contributing guide to get started. Thank you 🙏 to all our contributors!

Owner

  • Name: Autodistill
  • Login: autodistill
  • Kind: organization
  • Email: autodistill@roboflow.com

Use bigger, slower models to train smaller, faster ones

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 5
  • Total Committers: 1
  • Avg Commits per committer: 5.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • James Gallagher (j****g@j****g): 5 commits

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Packages

  • Total packages: 1
  • Total downloads: 29 last month (pypi)
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 2
  • Total maintainers: 1
pypi.org: autodistill-transformers

Use object detection models in Hugging Face Transformers to automatically label data to train a fine-tuned model.

  • Versions: 2
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 29 last month
Rankings
  • Dependent packages count: 9.6%
  • Dependent repos count: 67.9%
  • Average: 38.8%
Maintainers (1)
Last synced: 7 months ago