deeplabcut-live

SDK for running DeepLabCut on a live video stream

https://github.com/deeplabcut/deeplabcut-live

Science Score: 77.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: biorxiv.org, nature.com, plos.org
  • Committers with academic emails
    3 of 13 committers (23.1%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.5%) to scientific vocabulary

Keywords

deeplabcut optogenetics pose-estimation real-time tracking-algorithm

Keywords from Contributors

mesh open-science interpretability sequences generic projection interactive optim hacking yolov5
Last synced: 6 months ago

Repository

SDK for running DeepLabCut on a live video stream

Basic Info
Statistics
  • Stars: 210
  • Watchers: 16
  • Forks: 51
  • Open Issues: 9
  • Releases: 7
Topics
deeplabcut optogenetics pose-estimation real-time tracking-algorithm
Created almost 6 years ago · Last pushed 8 months ago
Metadata Files
Readme License Citation

README.md

DeepLabCut-live! SDK


This package contains a DeepLabCut inference pipeline for real-time applications that has minimal (software) dependencies. Thus, it is as easy to install as possible (in particular, on atypical systems like NVIDIA Jetson boards).

Performance: If you would like to see estimates of how your model should perform given different video sizes, network types, and hardware, please see: https://deeplabcut.github.io/DLC-inferencespeed-benchmark/

If you have different hardware, please consider submitting your results too! https://github.com/DeepLabCut/DLC-inferencespeed-benchmark

What this SDK provides: This package provides a DLCLive class that enables online pose estimation to provide feedback. This object loads and prepares a DeepLabCut network for inference, and returns the predicted pose for single images.

To perform processing on poses (such as predicting the future pose of an animal given its current pose, or triggering external hardware, like sending TTL pulses to a laser for optogenetic stimulation), this object takes in a Processor object. Processor objects must contain two methods: process and save.

  • The process method takes in a pose, performs some processing, and returns the processed pose.
  • The save method saves any valuable data created by or used by the processor.

For more details and examples, see documentation here.
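
For illustration, a minimal custom processor might look like the sketch below. The class name, the likelihood threshold, and the pickle-based save format are hypothetical; only the Processor base class and the process/save methods come from the description above.

```python
import pickle

from dlclive import Processor


class TTLTriggerProcessor(Processor):
    """Hypothetical processor: stores poses and could trigger hardware on detection."""

    def __init__(self, lik_thresh=0.5):
        super().__init__()
        self.lik_thresh = lik_thresh
        self.poses = []

    def process(self, pose, **kwargs):
        # pose is an array of keypoints (x, y, likelihood); store it, and if the
        # first keypoint is detected confidently, this is where external hardware
        # (e.g., a TTL pulse to a laser) would be triggered -- hardware-specific, omitted.
        self.poses.append(pose)
        if pose[0, 2] > self.lik_thresh:
            pass
        return pose

    def save(self, filename):
        # persist everything the processor collected
        with open(filename, "wb") as f:
            pickle.dump(self.poses, f)
```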

🔥 Note: on its own, this object does not record video or capture images from a camera; this must be done separately, e.g., see our DeepLabCut-live GUI. 🔥


Installation:

Please see our instruction manual to install on a Windows or Linux machine or on an NVIDIA Jetson Development Board. Note, this code works with TensorFlow (TF) 1 or TF 2 models, but you must import a model with the same TF version used to export it (i.e., export with TF 1.13, then use TF 1.13 with DLC-Live; export with TF 2.3, then use TF 2.3 with DLC-Live).

  • available on pypi as: pip install deeplabcut-live
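
Because the exported model and the runtime must use matching TensorFlow versions (see the note above), it can help to confirm which version your environment provides, e.g.:

```python
import tensorflow as tf

# The major/minor version here should match the version the model was exported
# with (e.g., a model exported with TF 2.3 needs a TF 2.3 runtime).
print(tf.__version__)
```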

Note, you can then test your installation by installing poetry (pip install poetry), then running:

```
poetry run dlc-live-test
```

If installed properly, this script will (i) create a temporary folder, (ii) download the SuperAnimal-Quadruped model from the DeepLabCut Model Zoo, (iii) download a short video clip of a dog, (iv) run inference while displaying keypoints, and (v) remove the temporary folder.


Quick Start: instructions for use:

  1. Initialize Processor (if desired)
  2. Initialize the DLCLive object
  3. Perform pose estimation!

```python
from dlclive import DLCLive, Processor

dlc_proc = Processor()
dlc_live = DLCLive(<path to exported model directory>, processor=dlc_proc)
dlc_live.init_inference(<your image>)
dlc_live.get_pose(<your image>)
```

DLCLive parameters:

  • path = string; full path to the exported DLC model directory
  • model_type = string; the type of model to use for inference. Types include:
    • base = the base DeepLabCut model
    • tensorrt = apply tensor-rt optimizations to model
    • tflite = use tensorflow lite inference (in progress...)
  • cropping = list of int, optional; cropping parameters in pixel number: [x1, x2, y1, y2]
  • dynamic = tuple, optional; defines parameters for dynamic cropping of images
    • index 0 = use dynamic cropping, bool
    • index 1 = detection threshold, float
    • index 2 = margin (in pixels) around identified points, int
  • resize = float, optional; factor by which to resize image (resize=0.5 downsizes both width and height of image by half). Can be used to downsize large images for faster inference
  • processor = dlc pose processor object, optional
  • display = bool, optional; display processed image with DeepLabCut points? Can be used to troubleshoot cropping and resizing parameters, but is very slow

DLCLive inputs:

  • <path to exported model directory> = path to the folder that has the .pb files that you acquire after running deeplabcut.export_model
  • <your image> = a numpy array of each frame
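
Putting the parameters and inputs together, a typical call might look like the sketch below; the model path, the OpenCV camera capture, and the specific parameter values are illustrative placeholders, not prescribed by the package:

```python
import cv2

from dlclive import DLCLive, Processor

dlc_proc = Processor()
dlc_live = DLCLive(
    "/path/to/exported/model",   # directory containing the exported .pb files
    model_type="base",           # or "tensorrt" / "tflite"
    resize=0.5,                  # halve width and height for faster inference
    dynamic=(True, 0.5, 10),     # dynamic cropping: on, 0.5 detection threshold, 10 px margin
    processor=dlc_proc,
    display=False,
)

cap = cv2.VideoCapture(0)            # any source that yields numpy-array frames works
ret, frame = cap.read()
dlc_live.init_inference(frame)       # the first frame initializes the network

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    pose = dlc_live.get_pose(frame)  # array of (x, y, likelihood) per keypoint
cap.release()
```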

Benchmarking/Analyzing your exported DeepLabCut models

DeepLabCut-live offers some analysis tools that allow users to perform the following operations on videos, from Python or from the command line:

  1. Test inference speed across a range of image sizes, downsizing images by specifying the resize or pixels parameter. Using the pixels parameter will resize images to the desired number of pixels, without changing the aspect ratio. Results will be saved (along with system info) to a pickle file if you specify an output directory.

Python:

```python
dlclive.benchmark_videos('/path/to/exported/model', ['/path/to/video1', '/path/to/video2'], output='/path/to/output', resize=[1.0, 0.75, 0.5])
```

Command line:

```
dlc-live-benchmark /path/to/exported/model /path/to/video1 /path/to/video2 -o /path/to/output -r 1.0 0.75 0.5
```
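
For example, the same benchmark can target absolute pixel counts via the pixels parameter instead of resize factors (the values below are arbitrary placeholders):

```python
import dlclive

# Same call as above, but resizing frames toward target pixel counts
# while preserving the aspect ratio.
dlclive.benchmark_videos(
    '/path/to/exported/model',
    ['/path/to/video1', '/path/to/video2'],
    output='/path/to/output',
    pixels=[320000, 160000, 80000],
)
```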

  2. Display keypoints to visually inspect the accuracy of exported models on different image sizes (note, this is slow and only for testing purposes):

Python:

```python
dlclive.benchmark_videos('/path/to/exported/model', '/path/to/video', resize=0.5, display=True, pcutoff=0.5, display_radius=4, cmap='bmy')
```

Command line:

```
dlc-live-benchmark /path/to/exported/model /path/to/video -r 0.5 --display --pcutoff 0.5 --display-radius 4 --cmap bmy
```

  3. Analyze and create a labeled video using the exported model and desired resize parameters. This option functions similarly to deeplabcut.benchmark_videos and deeplabcut.create_labeled_video (note, this is slow and only for testing purposes).

Python:

```python
dlclive.benchmark_videos('/path/to/exported/model', '/path/to/video', resize=[1.0, 0.75, 0.5], pcutoff=0.5, display_radius=4, cmap='bmy', save_poses=True, save_video=True)
```

Command line:

```
dlc-live-benchmark /path/to/exported/model /path/to/video -r 0.5 --pcutoff 0.5 --display-radius 4 --cmap bmy --save-poses --save-video
```

License:

This project is licensed under the GNU AGPLv3. Note that the software is provided "as is", without warranty of any kind, express or implied. If you use the code or data, we ask that you please cite us! This software is available for licensing via the EPFL Technology Transfer Office (https://tto.epfl.ch/, info.tto@epfl.ch).

Community Support, Developers, & Help:

This is an actively developed package and we welcome community development and involvement.

  • If you want to contribute to the code, please read our guide here, which is provided in the main DeepLabCut repository.

  • We are a community partner on the Image.sc forum. Please post help and support questions on the forum with the tag DeepLabCut. Check out their mission statement: Scientific Community Image Forum: A discussion forum for scientific image software.

  • If you encounter a previously unreported bug/code issue, please post here (we encourage you to search issues first): https://github.com/DeepLabCut/DeepLabCut-live/issues

  • For quick discussions, chat with us on Gitter.

Reference:

If you utilize our tool, please cite Kane et al, eLife 2020. The preprint is available here: https://www.biorxiv.org/content/10.1101/2020.08.04.236422v2

```
@Article{Kane2020dlclive,
  author  = {Kane, Gary and Lopes, Gonçalo and Saunders, Jonny and Mathis, Alexander and Mathis, Mackenzie},
  title   = {Real-time, low-latency closed-loop feedback using markerless posture tracking},
  journal = {eLife},
  year    = {2020},
}
```

Owner

  • Name: DeepLabCut
  • Login: DeepLabCut
  • Kind: organization
  • Email: admin@deeplabcut.org
  • Location: EPFL

DeepLabCut, a software package for animal pose estimation. Created by the A. and M.W. Mathis Labs

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: >-
  Real-time, low-latency closed-loop feedback using
  markerless posture tracking
message: >-
  If you utilize our tool, please [cite Kane et al,
  eLife
  2020](https://elifesciences.org/articles/61909).
  The preprint is available here:
  https://www.biorxiv.org/content/10.1101/2020.08.04.236422v2
type: article
authors:
  - given-names: Gary
    name-particle: A
    family-names: Kane
    affiliation: >-
      The Rowland Institute at Harvard, Harvard
      University, Cambridge, United States
  - given-names: Gonçalo
    family-names: Lopes
    affiliation: 'NeuroGEARS Ltd, London, United Kingdom'
  - given-names: Jonny
    name-particle: L
    family-names: Saunders
    affiliation: >-
      Institute of Neuroscience, Department of
      Psychology, University of Oregon, Eugene,
      United States
  - given-names: Alexander
    family-names: Mathis
    affiliation: >-
      The Rowland Institute at Harvard, Harvard
      University, Cambridge, United States; Center
      for Neuroprosthetics, Center for Intelligent
      Systems, & Brain Mind Institute, School of Life
      Sciences, Swiss Federal Institute of Technology
      (EPFL), Lausanne, Switzerland
  - given-names: Mackenzie
    name-particle: W
    family-names: Mathis
    affiliation: >-
      The Rowland Institute at Harvard, Harvard
      University, Cambridge, United States; Center
      for Neuroprosthetics, Center for Intelligent
      Systems, & Brain Mind Institute, School of Life
      Sciences, Swiss Federal Institute of Technology
      (EPFL), Lausanne, Switzerland
    email: mackenzie.mathis@epfl.ch
date-released: 2020-08-05
doi: "10.7554/eLife.61909"
license: "AGPL-3.0-or-later"
version:  "1.0.3"

GitHub Events

Total
  • Issues event: 6
  • Watch event: 18
  • Delete event: 8
  • Issue comment event: 8
  • Push event: 84
  • Pull request review comment event: 2
  • Pull request review event: 5
  • Pull request event: 13
  • Fork event: 4
  • Create event: 12
Last Year
  • Issues event: 6
  • Watch event: 18
  • Delete event: 8
  • Issue comment event: 8
  • Push event: 84
  • Pull request review comment event: 2
  • Pull request review event: 5
  • Pull request event: 13
  • Fork event: 4
  • Create event: 12

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 110
  • Total Committers: 13
  • Avg Commits per committer: 8.462
  • Development Distribution Score (DDS): 0.664
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Mackenzie Mathis m****s@r****u 37
AlexEMG a****r@d****g 30
Gary Kane g****e@r****u 21
Jessy Lauer 3****u 7
dependabot[bot] 4****] 5
Jonny Saunders J****7@g****m 3
yunsang Song s****6@n****m 1
ramonhollands r****s@g****m 1
hausmanns 3****s 1
haruna w****w@y****p 1
Steffen Schneider s****n@b****g 1
Niels Poulsen n****n@e****h 1
Feri73 f****d@g****m 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 67
  • Total pull requests: 71
  • Average time to close issues: 4 months
  • Average time to close pull requests: 3 months
  • Total issue authors: 54
  • Total pull request authors: 17
  • Average comments per issue: 2.52
  • Average comments per pull request: 1.46
  • Merged pull requests: 38
  • Bot issues: 0
  • Bot pull requests: 17
Past Year
  • Issues: 1
  • Pull requests: 19
  • Average time to close issues: N/A
  • Average time to close pull requests: 2 days
  • Issue authors: 1
  • Pull request authors: 4
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.84
  • Merged pull requests: 9
  • Bot issues: 0
  • Bot pull requests: 4
Top Authors
Issue Authors
  • MMathisLab (4)
  • gyillikci (3)
  • Alberto100591 (2)
  • SteveGaukrodger (2)
  • sdiomedi (2)
  • andrewwong09 (2)
  • ReiniertB (2)
  • nishata24 (2)
  • tomsains (2)
  • Elaheh-mot (2)
  • darioringach (1)
  • Ooo0 (1)
  • AsRaNi1 (1)
  • FabianPlum (1)
  • frbouad (1)
Pull Request Authors
  • dependabot[bot] (17)
  • MMathisLab (12)
  • sneakers-the-rat (9)
  • gkane26 (7)
  • maximpavliv (5)
  • AlexEMG (4)
  • n-poulsen (4)
  • eggplants (3)
  • Feri73 (2)
  • urlicht (1)
  • jeylau (1)
  • songys96 (1)
  • LucZot (1)
  • stes (1)
  • TrellixVulnTeam (1)
Top Labels
Issue Labels
bug (3) documentation (3) enhancement (3) installation (3) how_to_use_dlclive (2) help wanted (1) jetson (1) tensorflow (1)
Pull Request Labels
dependencies (19) enhancement (4) python (4) DLC 3.0 🔥 (3) how_to_use_dlclive (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 362 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 1
  • Total versions: 10
  • Total maintainers: 2
pypi.org: deeplabcut-live

Class to load exported DeepLabCut networks and perform pose estimation on single frames (from a camera feed)

  • Versions: 10
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 362 Last month
Rankings
Stargazers count: 5.6%
Forks count: 6.2%
Dependent packages count: 10.0%
Average: 11.1%
Downloads: 12.2%
Dependent repos count: 21.7%
Maintainers (2)
Last synced: 7 months ago

Dependencies

poetry.lock pypi
  • absl-py 1.0.0
  • astunparse 1.6.3
  • cached-property 1.5.2
  • cachetools 5.0.0
  • certifi 2021.10.8
  • charset-normalizer 2.0.12
  • colorama 0.4.4
  • colorcet 3.0.0
  • flatbuffers 2.0
  • gast 0.5.3
  • google-auth 2.6.2
  • google-auth-oauthlib 0.4.6
  • google-pasta 0.2.0
  • grpcio 1.45.0
  • h5py 3.6.0
  • idna 3.3
  • importlib-metadata 4.11.3
  • keras 2.8.0
  • keras-preprocessing 1.1.2
  • libclang 13.0.0
  • markdown 3.3.6
  • numexpr 2.8.1
  • numpy 1.21.5
  • oauthlib 3.2.0
  • opencv-python-headless 4.5.5.64
  • opt-einsum 3.3.0
  • packaging 21.3
  • pandas 1.3.5
  • param 1.12.1
  • pillow 9.0.1
  • protobuf 3.19.4
  • py-cpuinfo 8.0.0
  • pyasn1 0.4.8
  • pyasn1-modules 0.2.8
  • pyct 0.4.8
  • pyparsing 3.0.7
  • python-dateutil 2.8.2
  • pytz 2022.1
  • requests 2.27.1
  • requests-oauthlib 1.3.1
  • rsa 4.8
  • ruamel.yaml 0.17.21
  • ruamel.yaml.clib 0.2.6
  • six 1.16.0
  • tables 3.7.0
  • tensorboard 2.8.0
  • tensorboard-data-server 0.6.1
  • tensorboard-plugin-wit 1.8.1
  • tensorflow 2.8.1
  • tensorflow-estimator 2.8.0
  • tensorflow-io-gcs-filesystem 0.24.0
  • termcolor 1.1.0
  • tqdm 4.63.1
  • typing-extensions 4.1.1
  • urllib3 1.26.9
  • werkzeug 2.1.0
  • wrapt 1.14.0
  • zipp 3.7.0
pyproject.toml pypi
  • Pillow >=8.0.0
  • colorcet ^3.0.0
  • numpy ^1.20
  • opencv-python-headless ^4.5
  • pandas ^1.3
  • py-cpuinfo >=5.0.0
  • python >=3.7.1,<3.11
  • ruamel.yaml ^0.17.20
  • tables ^3.6
  • tensorflow ^2.7.0
  • tqdm ^4.62.3
.github/workflows/python-package.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/testing.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v3 composite