inference

Turn any computer or edge device into a command center for your computer vision projects.

https://github.com/roboflow/inference

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    7 of 67 committers (10.4%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.8%) to scientific vocabulary

Keywords

agents classification computer-vision deployment docker inference inference-api inference-server instance-segmentation jetson machine-learning object-detection onnx python tensorrt vit yolo11 yolov12 yolov5 yolov8

Keywords from Contributors

multimodal segment-anything transformers fine-tuning foundation-models image-annotation labeling-tool vqa vision-and-language qwen2-vl
Last synced: 4 months ago

Repository

Turn any computer or edge device into a command center for your computer vision projects.

Basic Info
Statistics
  • Stars: 1,899
  • Watchers: 25
  • Forks: 204
  • Open Issues: 77
  • Releases: 137
Topics
agents classification computer-vision deployment docker inference inference-api inference-server instance-segmentation jetson machine-learning object-detection onnx python tensorrt vit yolo11 yolov12 yolov5 yolov8
Created over 2 years ago · Last pushed 4 months ago
Metadata Files
Readme Contributing License Citation Codeowners Agents

README.md


[notebooks](https://github.com/roboflow/notebooks) | [supervision](https://github.com/roboflow/supervision) | [autodistill](https://github.com/autodistill/autodistill) | [maestro](https://github.com/roboflow/multimodal-maestro)
[![version](https://badge.fury.io/py/inference.svg)](https://badge.fury.io/py/inference) [![downloads](https://img.shields.io/pypi/dm/inference)](https://pypistats.org/packages/inference) [![docker pulls](https://img.shields.io/docker/pulls/roboflow/roboflow-inference-server-cpu)](https://hub.docker.com/u/roboflow) [![license](https://img.shields.io/pypi/l/inference)](https://github.com/roboflow/inference/blob/main/LICENSE.core)

Make Any Camera an AI Camera

Inference turns any computer or edge device into a command center for your computer vision projects.

  • 🛠️ Self-host your own fine-tuned models
  • 🧠 Access the latest and greatest foundation models (like Florence-2, CLIP, and SAM2)
  • 🤝 Use Workflows to track, count, time, measure, and visualize
  • 👁️ Combine ML with traditional CV methods (like OCR, Barcode Reading, QR, and template matching)
  • 📈 Monitor, record, and analyze predictions
  • 🎥 Manage cameras and video streams
  • 📬 Send notifications when events happen
  • 🛜 Connect with external systems and APIs
  • 🔗 Extend with your own code and models
  • 🚀 Deploy production systems at scale

See Example Workflows for common use-cases like detecting small objects with SAHI, multi-model consensus, active learning, reading license plates, blurring faces, background removal, and more.

Time In Zone Workflow Example

🔥 quickstart

Install Docker (and NVIDIA Container Toolkit for GPU acceleration if you have a CUDA-enabled GPU). Then run

pip install inference-cli && inference server start --dev

This will pull the proper image for your machine and start it in development mode.

In development mode, a Jupyter notebook server with a quickstart guide runs on http://localhost:9001/notebook/start. Dive in there for a whirlwind tour of your new Inference Server's functionality!

Now you're ready to connect your camera streams and start building & deploying Workflows in the UI or interacting with your new server via its API.
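Before pointing clients at the server, it can be handy to confirm it is actually reachable on port 9001. A minimal sketch using only the standard library (the `is_server_ready` helper name is illustrative, not part of Inference):

```python
from urllib.request import urlopen
from urllib.error import URLError

def is_server_ready(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at base_url.

    Hypothetical helper: any HTTP response below 500 counts as "up";
    connection errors or timeouts count as "down".
    """
    try:
        with urlopen(base_url, timeout=timeout) as resp:
            return resp.status < 500
    except (URLError, OSError):
        return False

if __name__ == "__main__":
    # After `inference server start --dev`, this should print True.
    print(is_server_ready("http://localhost:9001"))
```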

🛠️ build with Workflows

A key component of Inference is Workflows: composable blocks of common functionality that give models a common interface, making chaining and experimentation easy.

License Plate OCR Workflow Visualization

With Workflows, you can:

  • Detect, classify, and segment objects in images using state-of-the-art models.
  • Use Large Multimodal Models (LMMs) to make determinations at any stage in a workflow.
  • Seamlessly swap out models for a given task.
  • Chain models together.
  • Track, count, time, measure, and visualize objects.
  • Add business logic and extend functionality to work with your external systems.

Workflows allow you to extend simple model predictions to build computer vision micro-services that fit into a larger application or fully self-contained visual agents that run on a video stream.

Learn more, read the Workflows docs, or start building.

Tutorial: Build an AI-Powered Self-Serve Checkout
Created: 2 Feb 2025

Make a computer vision app that identifies different pieces of hardware, calculates the total cost, and records the results to a database.
Tutorial: Intro to Workflows
Created: 6 Jan 2025

Learn how to build and deploy Workflows for common use-cases like detecting vehicles, filtering detections, visualizing results, and calculating dwell time on a live video stream.
Tutorial: Build a Smart Parking System
Created: 27 Nov 2024

Build a smart parking lot management system using Roboflow Workflows! This tutorial covers license plate detection with YOLOv8, object tracking with ByteTrack, and real-time notifications with a Telegram bot.

📟 connecting via api

Once you've installed Inference, your machine is a fully-featured CV center. You can use its API to run models and workflows on images and video streams. By default, the server is running locally on localhost:9001.

To interface with your server via Python, use our SDK:

pip install inference-sdk

Then run an example model comparison Workflow like this:

```python
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # use local inference server
    # api_key=""  # optional to access your private data and models
)

result = client.run_workflow(
    workspace_name="roboflow-docs",
    workflow_id="model-comparison",
    images={
        "image": "https://media.roboflow.com/workflows/examples/bleachers.jpg"
    },
    parameters={
        "model1": "yolov8n-640",
        "model2": "yolov11n-640"
    }
)

print(result)
```

In other languages, use the server's REST API; you can access the API docs for your server at /docs (OpenAPI format) or /redoc (Redoc Format).

Check out the inference_sdk docs to see what else you can do with your new server.

🎥 connect to video streams

The inference server is a video processing beast. You can set it up to run Workflows on RTSP streams, webcam devices, and more. It will handle hardware acceleration, multiprocessing, video decoding and GPU batching to get the most out of your hardware.

This example workflow will watch a stream for frames that CLIP thinks match an inputted text prompt.

```python
from inference_sdk import InferenceHTTPClient
import atexit
import time

max_fps = 4

client = InferenceHTTPClient(
    api_url="http://localhost:9001",  # use local inference server
    # api_key=""  # optional to access your private data and models
)

# Start a pipeline on an RTSP stream
result = client.start_inference_pipeline_with_workflow(
    video_reference=["rtsp://user:password@192.168.0.100:554/"],
    workspace_name="roboflow-docs",
    workflow_id="clip-frames",
    max_fps=max_fps,
    workflows_parameters={
        "prompt": "blurry",  # change to look for something else
        "threshold": 0.16
    }
)

pipeline_id = result["context"]["pipeline_id"]

# Terminate the pipeline when the script exits
atexit.register(lambda: client.terminate_inference_pipeline(pipeline_id))

while True:
    result = client.consume_inference_pipeline_result(pipeline_id=pipeline_id)

    if not result["outputs"] or not result["outputs"][0]:
        # still initializing
        continue

    output = result["outputs"][0]
    is_match = output.get("is_match")
    similarity = round(output.get("similarity") * 100, 1)
    print(f"Matches prompt? {is_match} (similarity: {similarity}%)")

    time.sleep(1 / max_fps)
```

Pipeline outputs can be consumed via API for downstream processing or the Workflow can be configured to call external services with Notification blocks (like Email or Twilio) or the Webhook block. For more info on video pipeline management, see the Video Processing overview.
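When consuming pipeline results in your own code, a common first step is skipping empty "still initializing" ticks and keeping only the outputs you care about. A minimal consumer-side sketch, assuming result dicts shaped like the CLIP example above (an `outputs` list of dicts carrying `is_match` and `similarity`; the function name and threshold are illustrative):

```python
def matching_frames(results, threshold=0.5):
    """Filter pipeline results down to outputs whose similarity meets a threshold.

    Hypothetical helper: assumes each result looks like
    {"outputs": [{"is_match": bool, "similarity": float}, ...]}.
    """
    matches = []
    for result in results:
        outputs = result.get("outputs") or []
        if not outputs or not outputs[0]:
            continue  # pipeline still initializing for this tick
        output = outputs[0]
        if output.get("similarity", 0.0) >= threshold:
            matches.append(output)
    return matches

sample = [
    {"outputs": []},  # not ready yet
    {"outputs": [{"is_match": True, "similarity": 0.8}]},
    {"outputs": [{"is_match": False, "similarity": 0.1}]},
]
print(matching_frames(sample, threshold=0.5))
# → [{'is_match': True, 'similarity': 0.8}]
```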

If you have a Roboflow account & have linked an API key, you can also remotely monitor and manage your running streams via the Roboflow UI.

🔑 connect to the cloud

Without an API Key, you can access a wide range of pre-trained and foundational models and run public Workflows.

Pass an optional Roboflow API Key to the inference_sdk or API to access additional features enhanced by Roboflow's Cloud platform. When running with an API Key, usage is metered according to Roboflow's pricing tiers.

|                            | Open Access | With API Key (Metered) |
|----------------------------|-------------|------------------------|
| Pre-Trained Models         | ✅          | ✅                     |
| Foundation Models          | ✅          | ✅                     |
| Video Stream Management    | ✅          | ✅                     |
| Dynamic Python Blocks      | ✅          | ✅                     |
| Public Workflows           | ✅          | ✅                     |
| Private Workflows          |             | ✅                     |
| Fine-Tuned Models          |             | ✅                     |
| Universe Models            |             | ✅                     |
| Active Learning            |             | ✅                     |
| Serverless Hosted API      |             | ✅                     |
| Dedicated Deployments      |             | ✅                     |
| Commercial Model Licensing |             | Paid                   |
| Device Management          |             | Enterprise             |
| Model Monitoring           |             | Enterprise             |

🌩️ hosted compute

If you don't want to manage your own infrastructure for self-hosting, Roboflow offers a hosted Inference Server via one-click Dedicated Deployments (CPU and GPU machines) billed hourly, or simple models and Workflows via our serverless Hosted API billed per API-call.

We offer a generous free-tier to get started.

🖥️ run on-prem or self-hosted

Inference is designed to run on a wide range of hardware from beefy cloud servers to tiny edge devices. This lets you easily develop against your local machine or our cloud infrastructure and then seamlessly switch to another device for production deployment.

inference server start attempts to automatically choose the optimal container for your machine (including GPU acceleration via NVIDIA CUDA when available). Special installation notes and performance tips by device are listed below.

⭐️ New: Enterprise Hardware

For manufacturing and logistics use-cases, Roboflow now offers the NVIDIA Jetson-based Flowbox, a ruggedized CV center pre-configured with Inference and optimized for running in secure networks. It has integrated support for machine vision cameras like Basler and Lucid over GigE, supports interfacing with PLCs and HMIs via OPC or MQTT, and enables enterprise device management through a DMZ. It also comes with the support of our team of computer vision experts to ensure your project is a success.

📚 documentation

Visit our documentation to explore comprehensive guides, detailed API references, and a wide array of tutorials designed to help you harness the full potential of the Inference package.

© license

The core of Inference is licensed under Apache 2.0.

Models are subject to licensing which respects the underlying architecture. These licenses are listed in inference/models. Paid Roboflow accounts include a commercial license for some models (see roboflow.com/licensing for details).

Cloud connected functionality (like our model and Workflows registries, dataset management, model monitoring, device management, and managed infrastructure) requires a Roboflow account and API key & is metered based on usage.

Enterprise functionality is source-available in inference/enterprise under an enterprise license and usage in production requires an active Enterprise contract in good standing.

See the "Self Hosting and Edge Deployment" section of the Roboflow Licensing documentation for more information on how Roboflow Inference is licensed.

🏆 contribution

We would love your input to improve Roboflow Inference! Please see our contributing guide to get started. Thank you to all of our contributors! 🙏


Owner

  • Name: Roboflow
  • Login: roboflow
  • Kind: organization
  • Email: hello@roboflow.com
  • Location: United States of America

Citation (CITATION.cff)

cff-version: 1.2.0
title: Roboflow Inference Server
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Roboflow
    email: support@roboflow.com
repository-code: 'https://github.com/roboflow/inference-server'
url: 'https://roboflow.com'
abstract: >-
  An opinionated, easy-to-use inference server for use with
  computer vision models.
keywords:
  - computer vision
  - inference
license: Apache-2.0

Committers

Last synced: 8 months ago

All Time
  • Total Commits: 5,086
  • Total Committers: 67
  • Avg Commits per committer: 75.91
  • Development Distribution Score (DDS): 0.624
Past Year
  • Commits: 3,200
  • Committers: 53
  • Avg Commits per committer: 60.377
  • Development Distribution Score (DDS): 0.707
Top Committers
Name Email Commits
Paweł Pęczek p****l@r****m 1,912
Grzegorz Klimaszewski 1****w 812
Peter Robicheaux p****r@r****m 507
Brad Dwyer b****d@r****m 307
Paul Guerrie p****l@r****m 280
James Gallagher j****g@j****g 159
Thomas Hansen t****n@g****m 89
Rob Miller r****b@r****m 86
Matvezy m****v@t****u 85
Emily Gavrilenko e****e@c****u 69
Sachin Agarwal s****n@b****m 67
SolomonLake l****h@g****m 51
Isaac Robinson i****c@r****m 50
Chandler Supple c****e@g****m 42
PacificDou d****t@g****m 40
Reed Johnson r****s@g****m 38
Nick Herrig n****g@g****m 37
Sam Beran s****n@g****m 36
Shantanu Bala s****u@r****m 36
Piotr Skalski p****2@g****m 31
Balthasar b****r@r****m 30
Alex Norell a****l@r****m 28
Leo Ueno l****o@r****m 25
Daniel Reiff d****2@g****m 18
Skylar Givens s****s@g****m 16
dependabot[bot] 4****] 16
João j****o@r****m 16
Iuri de Silvio i****i@r****m 16
Eddie Ramirez e****e@r****m 16
Chris Doss c****s@g****m 14
and 37 more...

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 176
  • Total pull requests: 2,180
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 4 days
  • Total issue authors: 81
  • Total pull request authors: 80
  • Average comments per issue: 1.77
  • Average comments per pull request: 0.56
  • Merged pull requests: 1,694
  • Bot issues: 0
  • Bot pull requests: 148
Past Year
  • Issues: 59
  • Pull requests: 1,431
  • Average time to close issues: 15 days
  • Average time to close pull requests: 3 days
  • Issue authors: 35
  • Pull request authors: 62
  • Average comments per issue: 1.64
  • Average comments per pull request: 0.45
  • Merged pull requests: 1,047
  • Bot issues: 0
  • Bot pull requests: 133
Top Authors
Issue Authors
  • PawelPeczek-Roboflow (41)
  • grzegorz-roboflow (21)
  • yeldarby (11)
  • EmilyGavrilenko (5)
  • SkalskiP (4)
  • maxvaneck (3)
  • YoungjaeDev (3)
  • probicheaux (3)
  • NickHerrig (3)
  • amankumarchagti (3)
  • sbroberg (2)
  • venkatram-dev (2)
  • pg56714 (2)
  • RossLote (2)
  • dagleaves (2)
Pull Request Authors
  • PawelPeczek-Roboflow (464)
  • grzegorz-roboflow (425)
  • probicheaux (126)
  • yeldarby (105)
  • codeflash-ai[bot] (100)
  • hansent (97)
  • paulguerrie (87)
  • capjamesg (67)
  • bigbitbus (57)
  • dependabot[bot] (48)
  • EmilyGavrilenko (46)
  • robiscoding (43)
  • misrasaurabh1 (41)
  • Matvezy (32)
  • sberan (30)
Top Labels
Issue Labels
bug (65) question (38) enhancement (33) Hacktoberfest 2024 (5) Video Management API issues (4) multiple-contributions-possible (2) technical-difficulties (2) good first issue (2) documentation (1) release 0.19.0 (1)
Pull Request Labels
⚡️ codeflash (100) dependencies (48) codex (32) javascript (30) release 0.21.0 (28) documentation (27) release 0.22.0 (24) release 0.27.0 (24) python (18) release 0.26.0 (18) release 0.20.0 (17) release-branch (16) release 0.19.0 (15) release 0.23.0 (14) release 0.29.0 (14) release 0.28.0 (12) release 0.24.0 (12) release 0.18.0 (12) release 0.34.0 (10) enhancement (8) release 0.17.0 (6) stale (6) sam2 (4) new model (4) release 0.20.1 (2) release 0.35.0 (2) release 0.17.1 (1) help wanted (1) release 0.18.1 (1) bug (1)

Packages

  • Total packages: 9
  • Total downloads:
    • pypi 890,846 last-month
    • npm 13 last-month
  • Total dependent packages: 5
    (may contain duplicates)
  • Total dependent repositories: 12
    (may contain duplicates)
  • Total versions: 1,017
  • Total maintainers: 17
pypi.org: inference

With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.

  • Versions: 148
  • Dependent Packages: 0
  • Dependent Repositories: 11
  • Downloads: 83,972 Last month
Rankings
Stargazers count: 2.4%
Dependent repos count: 4.4%
Downloads: 5.0%
Average: 5.9%
Forks count: 7.7%
Dependent packages count: 10.1%
Last synced: 4 months ago
proxy.golang.org: github.com/roboflow/inference
  • Versions: 129
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 6.4%
Average: 6.6%
Dependent repos count: 6.8%
Last synced: 4 months ago
pypi.org: inference-cli

With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference CLI.

  • Versions: 149
  • Dependent Packages: 3
  • Dependent Repositories: 1
  • Downloads: 299,126 Last month
Rankings
Stargazers count: 2.4%
Forks count: 7.7%
Downloads: 8.5%
Average: 10.0%
Dependent packages count: 10.1%
Dependent repos count: 21.5%
Last synced: 4 months ago
pypi.org: inference-cpu

With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.

  • Versions: 148
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 1,118 Last month
Rankings
Stargazers count: 2.7%
Dependent packages count: 7.5%
Forks count: 9.4%
Downloads: 13.1%
Average: 20.5%
Dependent repos count: 69.8%
Last synced: 4 months ago
npmjs.org: @roboflow/roboflow-red

A visual way to interact with computer vision using Node-RED

  • Versions: 1
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 13 Last month
Rankings
Stargazers count: 3.4%
Forks count: 6.1%
Average: 25.3%
Dependent repos count: 37.5%
Dependent packages count: 54.1%
Last synced: 4 months ago
pypi.org: smart-reid

With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference CLI.

  • Versions: 8
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 45 Last month
Rankings
Dependent packages count: 10.5%
Average: 34.7%
Dependent repos count: 58.9%
Maintainers (1)
Last synced: 4 months ago
pypi.org: inference-sdk

With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.

  • Versions: 138
  • Dependent Packages: 2
  • Dependent Repositories: 0
  • Downloads: 116,240 Last month
Rankings
Dependent packages count: 7.3%
Average: 37.9%
Dependent repos count: 68.5%
Last synced: 4 months ago
pypi.org: inference-gpu

With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.

  • Versions: 148
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 389,385 Last month
Rankings
Dependent packages count: 7.6%
Average: 38.5%
Dependent repos count: 69.4%
Last synced: 4 months ago
pypi.org: inference-core

With no prior knowledge of machine learning or device-specific deployment, you can deploy a computer vision model to a range of devices and environments using Roboflow Inference.

  • Versions: 148
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 960 Last month
Rankings
Dependent packages count: 7.6%
Average: 38.5%
Dependent repos count: 69.4%
Last synced: 4 months ago

Dependencies

requirements/_requirements.txt pypi
  • APScheduler <=3.10.1
  • cython <=3.0.0
  • fastapi <=0.85.1
  • numpy <=1.25.2
  • opencv-python <=4.8.0.76
  • piexif <=1.1.3
  • pillow <=9.5.0
  • prometheus-fastapi-instrumentator <=6.0.0
  • requests <=2.31.0
  • rich <=13.5.2
  • shapely <=2.0.1
requirements/requirements.clip.txt pypi
requirements/requirements.cpu.txt pypi
  • onnxruntime <=1.14.1
requirements/requirements.docs.txt pypi
  • mike ==1.1.2
  • mkdocs ==1.5.2
  • mkdocs-material ==9.2.1
  • mkdocs-swagger-ui-tag ==0.6.3
  • mkdocstrings ==0.22.0
requirements/requirements.gpu.txt pypi
  • onnxruntime-gpu <=1.15.1
requirements/requirements.hosted.txt pypi
  • boto3 <=1.28.23
  • elasticache_auto_discovery <=1.0.0
  • prometheus-fastapi-instrumentator <=6.0.0
  • pymemcache <=4.0.0
requirements/requirements.http.txt pypi
  • fastapi-cprofile <=0.0.2
  • python-multipart <=0.0.6
  • uvicorn <=0.22.0
requirements/requirements.sam.txt pypi
  • rasterio <=1.2.10
  • torch <=2.0.1
  • torchvision <=0.15.2
requirements/requirements.waf.txt pypi
  • metlo *
.github/workflows/docker.cpu.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.gpu.udp.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.gpu.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.jetson.4.5.0.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.jetson.4.6.1.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.jetson.5.1.1.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/publish.pypi.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v2 composite
  • pypa/gh-action-pypi-publish release/v1 composite
examples/inference-client/requirements.txt pypi
  • requests ==2.31.0
  • supervision ==0.13.0
examples/inference-dashboard-example/requirements.txt pypi
  • Requests ==2.31.0
  • matplotlib ==3.7.1
  • opencv_python ==4.7.0.72
  • pandas ==2.0.2
requirements/requirements.gaze.txt pypi
  • mediapipe >=0.9,<0.11
.github/workflows/docker.device_manager.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docs.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/test.cpu.yml actions
  • actions/checkout v3 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/test.jetson_4.5.0.yml actions
  • actions/checkout v3 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/test.jetson_4.6.1.yml actions
  • actions/checkout v3 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/test.jetson_5.1.1.yml actions
  • actions/checkout v3 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/test.nvidia_t4.yml actions
  • actions/checkout v3 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/test.start_nvidia_t4.yml actions
  • google-github-actions/auth v1 composite
  • google-github-actions/setup-gcloud v1 composite
.github/workflows/test.stop_nvidia_t4.yml actions
  • google-github-actions/auth v1 composite
  • google-github-actions/setup-gcloud v1 composite
inference/landing/package-lock.json npm
  • 361 dependencies
inference/landing/package.json npm
  • @headlessui/react ^1.7.17 development
  • @headlessui/tailwindcss ^0.2.0 development
  • @types/node latest development
  • @types/react latest development
  • @types/react-dom latest development
  • @types/react-syntax-highlighter ^15.5.8 development
  • autoprefixer latest development
  • classnames ^2.3.2 development
  • eslint latest development
  • eslint-config-next latest development
  • postcss latest development
  • react-syntax-highlighter ^15.5.0 development
  • sass ^1.68.0 development
  • tailwindcss latest development
  • typescript latest development
  • next latest
  • react latest
  • react-dom latest
examples/clip-search-engine/requirements.txt pypi
  • Pillow *
  • faiss-cpu *
  • flask *
  • requests *
examples/sam-client/requirements.txt pypi
  • opencv-python *
  • supervision *
requirements/requirements.cli.txt pypi
  • docker ==6.1.3
  • requests <=2.31.0
  • typer ==0.9.0
requirements/requirements.device_manager.txt pypi
  • docker ==6.1.3 development
  • uvicorn <=0.22.0 development
requirements/requirements.doctr.txt pypi
  • python-doctr *
  • tf2onnx *
requirements/requirements.groundingdino.txt pypi
  • rf_groundingdino *
requirements/requirements.sdk.http.txt pypi
  • dataclasses-json >=0.6.0
  • numpy >=1.20.0
  • opencv-python >=4.8.0.0
  • pillow >=9.0.0
  • requests >=2.0.0
  • requests >=2.27.0
  • supervision <1.0.0
.github/workflows/docker.cpu.parallel.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.cpu.slim.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.gpu.parallel.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.gpu.slim.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.stream_management_api.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.stream_manager.cpu.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.stream_manager.gpu.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
.github/workflows/docker.stream_manager.jetson.5.1.1.yml actions
  • actions/checkout v3 composite
  • docker/build-push-action v4 composite
  • docker/login-action v2 composite
  • docker/setup-buildx-action v2 composite
  • docker/setup-qemu-action v2 composite
requirements/requirements.cogvlm.txt pypi
  • accelerate <=0.25.0
  • bitsandbytes <=0.41.2.post2
  • einops <=0.7.0
  • sentencepiece <=0.1.99
  • transformers <=4.35.2
  • xformers <=0.0.22
requirements/requirements.parallel.txt pypi
  • celery *
  • gunicorn *
requirements/requirements.test.integration.txt pypi
  • pillow * test
  • pytest * test
  • python-dotenv <=2.0.0 test
  • requests * test
  • requests_toolbelt * test
requirements/requirements.test.unit.txt pypi
  • black * test
  • flake8 * test
  • httpx * test
  • isort * test
  • pillow * test
  • pytest * test
  • pytest-asyncio <=0.21.1 test
  • pytest-timeout >=2.2.0 test
  • python-dotenv <=2.0.0 test
  • requests * test
  • requests-mock ==1.11.0 test
  • requests_toolbelt * test
  • rich * test
  • uvicorn >=0.24.0 test