detection_inference

A high-performance, multi-threaded C++ pipeline for real-time multi-camera object detection using YOLOv8.

https://github.com/henriktrom/detection_inference

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.0%) to scientific vocabulary

Keywords

cpp multithreading object-detection real-time tensorrt-inference yolov8
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: HenrikTrom
  • License: CC0-1.0
  • Language: C++
  • Default Branch: main
  • Homepage:
  • Size: 7.31 MB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 1
Topics
cpp multithreading object-detection real-time tensorrt-inference yolov8
Created 9 months ago · Last pushed 9 months ago
Metadata Files
Readme License Citation

Readme.md

🚀 Detection-Inference

DOI: 10.5281/zenodo.15527774

A high-performance, multi-threaded C++ pipeline for real-time multi-camera object detection using YOLOv8.

Developed as part of my PhD thesis to enable 3D object detection and generate proposals for my keypoint inference pipeline.

This module supports deployment in robotic systems for real-time tracking and perception, and is part of my ROS/ROS2 real-time 3D tracker and its Docker implementation.

System Setup

🧪 Test results

| CPU | GPU | OS | CUDA | TensorRT | OpenCV | Preprocess | NN inference | Postprocess |
|---|---|---|---|---|---|---|---|---|
| Intel Xeon W-2145 @ 3.70 GHz | NVIDIA RTX 2080 Super | Ubuntu 20.04 | 11.8 | 8.6.1.6 | 4.10.0 | ~2 ms | ~7 ms | ~5 ms |
| AMD Ryzen 9 7900X3D @ 4.40 GHz | NVIDIA RTX 4070 Super | Ubuntu 20.04 | 12.4 | 10.9.0.34 | 4.10.0 | <1 ms | ~3 ms | <1 ms |

Both runs use YOLOv8 with a BATCH_SIZE of 5; timings are averaged over 1000 samples.

📑 Citation

If you use this software, please use the GitHub “Cite this repository” button at the top right of this page.

Environment

This repository is designed to run inside the Docker 🐳 container provided here:
OpenCV-TRT-DEV

It includes all necessary dependencies (CUDA, cuDNN, OpenCV, TensorRT, CMake).

Prerequisites

In addition to the libraries installed in the container, this project relies on:

Environment Variables

Set the required variables (usually done via .env or your shell):

```bash
OPENCV_VERSION=4.10.0  # Your installed OpenCV version
N_CAMERAS=5            # Optional: sets system-wide batch size
```

If N_CAMERAS is not set, CMake will default to a batch size of 5.

Use the trt.sh script in ./scripts to convert your .onnx model to a fixed batch size.

Notes

  • The batch size is treated as a hardware constraint, defined by the number of connected cameras.
  • You can change the default batch size in CMakeLists.txt to fit your system.
  • Although this repo is optimized for YOLOv8 models, you can modify the post-processing stage to support any ONNX-compatible detection model.
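To give a sense of what adapting the post-processing stage involves, here is a rough sketch of YOLOv8-style output decoding. This is not this repo's actual code; the tensor layout described in the comments is the typical ultralytics ONNX export and should be verified against your own model.

```cpp
#include <cstddef>
#include <vector>

struct Detection {
    float x1, y1, x2, y2;  // box corners in input-image pixels
    float score;           // best class probability
    int   cls;             // index of the best class
};

// Decode one image's raw YOLOv8 output. Assumed layout (typical for the
// ultralytics ONNX export, but verify against your model): a row-major
// (4 + numClasses) x numAnchors tensor whose first four rows are
// cx, cy, w, h and whose remaining rows are per-class probabilities.
std::vector<Detection> decodeYolo(const float* out, std::size_t numAnchors,
                                  std::size_t numClasses, float confThresh) {
    std::vector<Detection> dets;
    for (std::size_t a = 0; a < numAnchors; ++a) {
        // Find the best-scoring class for this anchor.
        float best = 0.f;
        int bestCls = -1;
        for (std::size_t c = 0; c < numClasses; ++c) {
            float s = out[(4 + c) * numAnchors + a];
            if (s > best) { best = s; bestCls = static_cast<int>(c); }
        }
        if (best < confThresh) continue;
        // Convert center/size to corner coordinates.
        float cx = out[0 * numAnchors + a];
        float cy = out[1 * numAnchors + a];
        float w  = out[2 * numAnchors + a];
        float h  = out[3 * numAnchors + a];
        dets.push_back({cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2,
                        best, bestCls});
    }
    return dets;  // non-maximum suppression would run on this list next
}
```

A real pipeline would follow this with non-maximum suppression (e.g. `cv::dnn::NMSBoxes`) and rescaling of the boxes to the original image size.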

Installation

Run the provided installation script:

```bash
sudo ./build_install.sh
```

This will configure the build system, compile the inference pipeline, and generate the binaries.


🧠 Model Requirements

This repo is designed for trained YOLOv8 .onnx models. The model must be exported with a fixed batch size to match the number of cameras used in your setup.

Adapt the configuration files in the cfg/ folder to reflect your system and model setup.


Executables

Benchmark

After configuring your setup:

```bash
./build/inference_benchmark
```

This runs the inference pipeline, processes multi-camera input, and saves images with overlayed bounding boxes and labels to the inputs/ folder.

Video Inference Export

This executable iterates over a directory of synchronized .mp4 videos and saves the result for each video in a .json file.

This example usage assumes `.mp4` videos in an arbitrary `./test` directory:

```bash
./build/video_inference_export test
```

BBox Overlay

This executable iterates over a directory of synchronized .mp4 videos and exported inference results (from ./build/video_inference_export). It generates new .mp4 videos with detections and a tiled video similar to the .gif in this readme.

This example usage assumes `.mp4` videos and `.json` files in an arbitrary `./test` directory:

```bash
./build/bbox_overlay test
```


📷 Applications

This inference module is optimized for:

  • Real-time multi-camera tracking
  • Robotics & embedded systems
  • Preprocessing for downstream pipelines (e.g. keypoint tracking)

Owner

  • Name: Henrik
  • Login: HenrikTrom
  • Kind: user
  • Company: Göttingen University

👋 Hi, I'm Henrik — PhD researcher with a focus on real-time 3D tracking and software development for human-robot interaction.

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "detection_inference"
version: 1.0.0
doi: 10.5281/zenodo.15527774  
date-released: 2025-05-27
license: CC0-1.0 
url: https://github.com/HenrikTrom/detection_inference
repository-code: https://github.com/HenrikTrom/detection_inference
abstract: "A high-performance, multi-threaded C++ pipeline for real-time multi-camera object detection."
authors:
  - family-names: Trommer
    given-names: Henrik
    orcid: https://orcid.org/0009-0002-3110-0963
    affiliation: University of Göttingen
keywords:
  - real-time
  - object-detection
  - multi-threading
  - yolo
  - c++
  - research software
  - open source

GitHub Events

Total
  • Release event: 1
  • Push event: 3
  • Create event: 3
Last Year
  • Release event: 1
  • Push event: 3
  • Create event: 3