OmniTrax
OmniTrax: A deep learning-driven multi-animal tracking and pose-estimation add-on for Blender - Published in JOSS (2024)
Science Score: 93.0%
This score indicates how likely this project is to be science-related, based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 10 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: links to joss.theoj.org, zenodo.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ✓ JOSS paper metadata: published in Journal of Open Source Software
Repository
Deep learning-driven multi animal tracking and pose estimation add-on for Blender
Basic Info
Statistics
- Stars: 38
- Watchers: 3
- Forks: 4
- Open Issues: 1
- Releases: 14
Metadata Files
README.md
Deep learning-based multi-animal tracking and pose-estimation Blender add-on.
automated multi animal tracking example (trained on synthetic data)
OmniTrax is an open-source Blender Add-on designed for deep learning-driven multi-animal tracking and pose-estimation. It leverages recent advancements in deep-learning-based detection (YOLOv3, YOLOv4) and computationally inexpensive buffer-and-recover tracking techniques. OmniTrax integrates with Blender's internal motion tracking pipeline, making it an excellent tool for annotating and analyzing large video files containing numerous freely moving subjects. Additionally, it integrates DeepLabCut-Live for marker-less pose estimation on arbitrary numbers of animals, using both the DeepLabCut Model Zoo and custom-trained detector and pose estimator networks.
OmniTrax is designed as a plug-and-play toolkit for biologists, facilitating the extraction of kinematic and behavioural data from freely moving animals. It can, for example, be used in population monitoring applications, especially in changing environments where background-subtraction methods may fail. This ability can be amplified by using detection models trained on highly variable, synthetically generated data. OmniTrax also lends itself well to annotating training and validation data for detector and tracker neural networks, or to providing instance and pose data for size classification and unsupervised behavioural clustering tasks.

Pose estimation and skeleton overlay example (trained on synthetic data)
Operating System Support
[!Important] OmniTrax runs on both Windows 10 / 11 and Ubuntu systems. However, the installation steps and CPU vs GPU inference support differ, as does the Blender version that needs to be installed to ensure compatible dependencies.
| Operating System | Blender Version | CPU inference | GPU inference |
|:----------------------:|:---------------:|:-------------:|:-------------:|
| Windows 10 / 11 | 3.3 | X | X |
| Ubuntu 18.04 / 20.04 | 2.92 | X | |
Installation Guide
Requirements / Notes
- OmniTrax GPU inference is currently only supported on Windows 10 / 11. For CPU-only support on Ubuntu, use Blender version 2.92.0 and skip the steps on CUDA installation.
- Download and install Blender LTS 3.3 to match dependencies. If you are planning on running inference on your CPU instead (which is considerably slower), use Blender version 2.92.0.
- As OmniTrax uses TensorFlow 2.7, you will need to install CUDA 11.2 and cuDNN 8.1 to run inference on your GPU. Refer to the official TensorFlow guide for version matching and installation instructions.
- When installing the OmniTrax package, you need to run Blender in administrator mode (on Windows); otherwise, the additionally required Python packages may not be installable.
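The version pairing above can be double-checked programmatically. The helper below is purely illustrative (not part of OmniTrax) and only encodes the TensorFlow 2.7 / CUDA 11.2 / cuDNN 8.1 pairing stated in this guide:

```python
# Hypothetical helper (not an OmniTrax function): look up which CUDA/cuDNN
# versions a given TensorFlow release expects before starting the install.

# TensorFlow major.minor -> (required CUDA, required cuDNN)
TF_GPU_REQUIREMENTS = {
    "2.7": ("11.2", "8.1"),  # the pairing OmniTrax relies on (Windows, GPU)
}

def required_gpu_stack(tf_version: str) -> tuple:
    """Return the (CUDA, cuDNN) versions expected for a TensorFlow release."""
    major_minor = ".".join(tf_version.split(".")[:2])
    try:
        return TF_GPU_REQUIREMENTS[major_minor]
    except KeyError:
        raise ValueError(f"No known CUDA/cuDNN pairing for TensorFlow {tf_version}")

print(required_gpu_stack("2.7.0"))  # ('11.2', '8.1')
```

Installing a CUDA toolkit that deviates from this table is the most common cause of TensorFlow failing to see the GPU.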
Step-by-step installation
- Install Blender LTS 3.3 from the official website. Simply download blender-3.3.1-windows-x64.msi and follow the installation instructions.
[!TIP] If you are new to Blender, have a look at the official Blender docs to learn how to set up a workspace and arrange different types of editor windows.
- Install CUDA 11.2 and cuDNN 8.1.0. Here, we provide a separate CUDA installation guide.
- For advanced users: if you already have a separate CUDA installation on your system, make sure to additionally install CUDA 11.2 and update your PATH environment variable. Conflicting versions may mean that OmniTrax is unable to find your GPU, which can lead to unexpected crashes.
- Download the latest release of OmniTrax. No need to unzip the file! You can install it straight from the Blender > Preferences > Add-ons menu in the next step.
- Open Blender in administrator mode. You only need to do this once, during the installation of OmniTrax. Once everything is up and running, you can open Blender normally in the future.
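If you want to verify that the downloaded archive is a valid Blender add-on without unzipping it, a short stdlib check works: Blender add-ons declare a bl_info dictionary in their __init__.py. The helper below is illustrative only (not part of OmniTrax) and accepts either a file path or a file-like object:

```python
# Hypothetical pre-flight check (not an OmniTrax function): confirm the zip
# contains a package __init__.py that declares bl_info, which is what Blender
# looks for when installing an add-on from a zip archive.
import zipfile

def looks_like_blender_addon(zip_source) -> bool:
    """True if any __init__.py inside the zip mentions bl_info."""
    with zipfile.ZipFile(zip_source) as zf:
        for name in zf.namelist():
            if name.endswith("__init__.py") and b"bl_info" in zf.read(name):
                return True
    return False

# Usage (assumed release asset name): looks_like_blender_addon("omni_trax.zip")
```

This only checks the archive layout; the actual installation still happens through the Blender Preferences menu described below.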

- Open the Blender system console to see the installation progress and display information.
[!TIP] On Ubuntu this option is missing. To display this information, launch Blender directly from a terminal; that terminal will show the equivalent output while you use Blender.

- Next, open (1) Edit > (2) Preferences... and under Add-ons click on (3) Install.... Then, locate the downloaded (4) omni_trax.zip file, select it, and click on (5) Install Add-on.

- The omni_trax Add-on should now be listed. Enabling the Add-on will then start the installation process of all required Python dependencies.

The installation will take quite a while, so have a look at the System Console to see the progress. Grab a cup of coffee (or tea) in the meantime.

There may be a few warnings displayed throughout the installation process; however, as long as no errors occur, all should be good. If the installation is successful, a check mark will be displayed next to the Add-on and the console should let you know that "[...] all looks good here!". Once the installation is complete, you can launch Blender with regular user privileges.

A quick test drive (Detection & Tracking)
For a more detailed guide, refer to the Tracking and Pose-Estimation docs.
1. In Blender, with the OmniTrax Addon enabled, create a new Workspace from the VFX > Motion_Tracking tab.

2. Next, select your compute device. If you have a CUDA-supported GPU (and the CUDA installation went as planned...), make sure your GPU is selected here before running any of the inference functions, as the compute device cannot be changed at runtime. By default, assuming your computer has one supported GPU, OmniTrax will select it as GPU_0.

3. Now it's time to load a trained YOLO network. In this example, we are going to use a single-class ant detector trained on synthetically generated data. The YOLOv4 network can be downloaded here.
By clicking on the folder icon next to each cell, select the respective .cfg and .weights files. Here, we are using a network input resolution of 480 x 480. The same weights file can be used for all input resolutions.

[!IMPORTANT] OmniTrax versions 0.2.x and later no longer require .data and .names files, making their provision optional. For more info on when you would need those files, refer to the extended Tracking tutorial.
Here, you only need to set the paths for:
- .cfg
- .weights
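To confirm that a .cfg file matches the network input resolution you intend to use (480 x 480 in this example), you can read the [net] section of the Darknet config directly, since Darknet stores the resolution as width/height key-value pairs there. A minimal, purely illustrative sketch (not an OmniTrax function):

```python
# Hypothetical sanity check (not an OmniTrax function): extract the network
# input resolution from a Darknet .cfg file's [net] section.

def cfg_input_resolution(cfg_text: str) -> tuple:
    """Return (width, height) parsed from the [net] section of a Darknet config."""
    in_net = False
    width = height = None
    for line in cfg_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if line.startswith("["):
            in_net = (line == "[net]")     # only read keys inside [net]
        elif in_net and "=" in line:
            key, _, value = line.partition("=")
            if key.strip() == "width":
                width = int(value)
            elif key.strip() == "height":
                height = int(value)
    return width, height
```

Note that, as stated above, the same .weights file can be used with any input resolution; only the .cfg needs to change.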
[!TIP] After setting up your workspace, consider saving your project by pressing CTRL + S. Saving your project also saves your workspace, so in the future you can use this file to begin tracking right away!
4. Next, load a video you wish to analyse from your drive by clicking on Open (see image above). In this example we are using exampleantrecording.mp4.
5. Click on RESTART Track (or TRACK to continue tracking from a specific frame in the video). If you wish to stop the tracking process early, click on the video (which will open in a separate window) and press q to terminate the process.
OmniTrax will continue to track your video until it has either reached its last frame, or the End Frame (by default 250) which can be set in the Detection (YOLO) >> Processing settings.

[!NOTE] The ideal settings for the Detector and Tracker will always depend on your footage, especially on the relative animal size and movement speed. Remember: GIGO (Garbage In, Garbage Out), so ensuring your recordings are evenly lit and free from noise, flickering, and motion blur will go a long way towards improving inference quality. Refer to the full Tracking tutorial for an in-depth explanation of each setting.
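The GIGO advice can be made quantitative: a common heuristic for flagging blurred frames is the variance of the image Laplacian, where low values suggest defocus or motion blur. A minimal NumPy-only sketch (hypothetical helper, not part of OmniTrax):

```python
# Hypothetical footage-quality check (not an OmniTrax function): estimate the
# sharpness of a greyscale frame via the variance of its 4-neighbour Laplacian.
import numpy as np

def laplacian_variance(frame: np.ndarray) -> float:
    """Variance of the 4-neighbour Laplacian of a 2D greyscale frame.

    Higher values indicate more fine detail (sharper footage); values near
    zero indicate flat or heavily blurred frames.
    """
    f = frame.astype(np.float64)
    lap = (-4.0 * f[1:-1, 1:-1]
           + f[:-2, 1:-1] + f[2:, 1:-1]    # vertical neighbours
           + f[1:-1, :-2] + f[1:-1, 2:])   # horizontal neighbours
    return float(lap.var())
```

Thresholds are footage-dependent, so comparing values across frames of the same recording is more informative than any absolute cutoff.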
User guides
- CUDA installation instructions
- Tutorial : Multi-Animal Tracking
- Tutorial : Single- & Multi-Animal Pose-Estimation
Trained networks and config files
We provide a number of trained YOLOv4 and DeepLabCut networks to get started with OmniTrax: trained_networks
Example Video Footage
Additionally, you can download a few of our video examples to get started with OmniTrax: example_footage
Upcoming feature additions
- add option to exclude last N frames from tracking, so interpolated tracks do not influence further analysis
- add bounding box stabilisation for YOLO detections (using moving averages for corner positions)
- add option to exit pose estimation completely while running inference (important when the number of tracks is large)
- add a progress bar for all tasks
Updates:
- 27/01/2025 - Added release version 1.0.1 with improved package installation
- 14/03/2024 - Added release version 1.0.0, the official release accompanying the software paper.
- 05/12/2023 - Added release version 0.3.1 with improved exception handling and stability.
- 11/10/2023 - Added release version 0.3.0 with minor fixes and major Ubuntu support! (well, on CPU at least)
- 02/07/2023 - Added release version 0.2.3, fixing prior issues relating to masking and YOLO path handling.
- 26/03/2023 - Added release version 0.2.2 which adds support for footage masking and advanced sample export (see tutorial-tracking for details).
- 28/11/2022 - Added release version 0.2.1 with updated YOLO and DLC-live model handling to accommodate different file structures.
- 09/11/2022 - Added release version 0.2.0 with improved DLC-live pose estimation for single and multi-animal applications.
- 02/11/2022 - Added release version 0.1.3 which includes improved tracking from previous states, faster and more robust track transfer, building skeletons from DLC config files, improved package installation and start-up checks, a few bug fixes, and GPU compatibility with the latest release of Blender LTS 3.3! For CPU-only inference, continue to use Blender 2.92.0.
- 06/10/2022 - Added release version 0.1.2 with GPU support for latest Blender LTS 3.3! For CPU-only inference, continue to use Blender 2.92.0.
- 19/02/2022 - Added release version 0.1.1! Things run a lot faster now and I have added support for devices without dedicated GPUs.
- 06/12/2021 - Added the first release version 0.1! Lots of small improvements and mammal fixes. Now, it no longer feels like a pre-release and we can all give this a try. Happy Tracking!
- 29/11/2021 - Added pre-release version 0.0.2, with DeepLabCut-Live support, tested for Blender 2.92.0 only
- 20/11/2021 - Added pre-release version 0.0.1, tested for Blender 2.92.0 only
References
When using OmniTrax and/or our other projects in your work, please make sure to cite them:
@article{Plum2024,
  doi = {10.21105/joss.05549},
  url = {https://doi.org/10.21105/joss.05549},
  year = {2024},
  publisher = {The Open Journal},
  volume = {9},
  number = {95},
  pages = {5549},
  author = {Fabian Plum},
  title = {OmniTrax: A deep learning-driven multi-animal tracking and pose-estimation add-on for Blender},
  journal = {Journal of Open Source Software}
}
@article{Plum2023a,
  title = {replicAnt: a pipeline for generating annotated images of animals in complex environments using Unreal Engine},
  author = {Plum, Fabian and Bulla, René and Beck, Hendrik K and Imirzian, Natalie and Labonte, David},
  doi = {10.1038/s41467-023-42898-9},
  issn = {2041-1723},
  journal = {Nature Communications},
  url = {https://doi.org/10.1038/s41467-023-42898-9},
  volume = {14},
  year = {2023}
}
License
© Fabian Plum, 2023. MIT License.
Owner
- Name: Fabian Plum
- Login: FabianPlum
- Kind: user
- Repositories: 2
- Profile: https://github.com/FabianPlum
JOSS Publication
OmniTrax: A deep learning-driven multi-animal tracking and pose-estimation add-on for Blender
Tags
Blender, multi-object tracking, pose-estimation, deep learning
GitHub Events
Total
- Create event: 2
- Release event: 1
- Issues event: 5
- Watch event: 8
- Issue comment event: 12
- Push event: 4
- Pull request event: 4
Last Year
- Create event: 2
- Release event: 1
- Issues event: 5
- Watch event: 8
- Issue comment event: 12
- Push event: 4
- Pull request event: 4
Committers
Last synced: 5 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Fabian Plum | f****m@w****e | 360 |
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 43
- Total pull requests: 2
- Average time to close issues: 7 days
- Average time to close pull requests: 13 days
- Total issue authors: 9
- Total pull request authors: 1
- Average comments per issue: 1.72
- Average comments per pull request: 1.0
- Merged pull requests: 2
- Bot issues: 9
- Bot pull requests: 0
Past Year
- Issues: 3
- Pull requests: 2
- Average time to close issues: 4 days
- Average time to close pull requests: 13 days
- Issue authors: 2
- Pull request authors: 1
- Average comments per issue: 0.33
- Average comments per pull request: 1.0
- Merged pull requests: 2
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- sfmig (11)
- github-actions[bot] (10)
- FabianPlum (7)
- rizarae-p (6)
- Seda145 (2)
- linab8 (2)
- NeighNeighNeigh (1)
- lucasmiranda42 (1)
- ekswathi (1)
Pull Request Authors
- FabianPlum (2)
Dependencies
- actions/checkout v2 composite
- actions/upload-artifact v1 composite
- openjournals/openjournals-draft-action master composite

