action
Automated Camera Trapping Identification and Organization Network (ACTION)
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (16.8%) to scientific vocabulary
Repository
Automated Camera Trapping Identification and Organization Network (ACTION)
Basic Info
Statistics
- Stars: 16
- Watchers: 2
- Forks: 0
- Open Issues: 4
- Releases: 4
Metadata Files
README.md
Automated Camera Trapping Identification and Organization Network (ACTION)
Overview
ACTION is a Python-based tool designed to bring the power of AI computer vision models to camera trap video analysis. ACTION lets you process hours of raw footage into positive detection clips where animals appear. Whether you're monitoring aquatic life with underwater cameras or tracking terrestrial wildlife, ACTION can save you the time and tedious labour of reviewing footage manually.
How it Works
ACTION takes one or more video files as input, along with several optional parameters to customize the process. Depending on the environment specified by the user, an appropriate object detection model is used: YOLO-Fish v4 for aquatic videos, or Megadetector v5 for terrestrial. Input videos are processed using the AI model, and a clip is created whenever terrestrial animals or fish are detected. At the end of the process, a filename_clips directory will include all the detections from the raw footage.
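To make that pipeline concrete, here is a minimal Python sketch of the sample-detect-clip flow described above. This is an illustration, not ACTION's actual source: the `detect` callable is a hypothetical stand-in for running the YOLO-Fish or Megadetector ONNX model on a frame, and only OpenCV video decoding is assumed.

```python
import cv2  # OpenCV, assumed here for video decoding

def find_detection_spans(video_path, detect, skip_frames=15,
                         confidence=0.50, buffer_s=1.0):
    """Scan a video and return (start, end) times in seconds where
    animals appear. `detect(frame)` is a hypothetical stand-in that
    runs the chosen ONNX model on one frame and returns the highest
    confidence score found in it."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    spans, start, end = [], None, 0.0
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Only run the model on every (skip_frames + 1)-th frame.
        if frame_idx % (skip_frames + 1) == 0:
            t = frame_idx / fps
            if detect(frame) >= confidence:
                if start is None:
                    start = max(0.0, t - buffer_s)  # pad the clip start
                end = t + buffer_s                  # extend the clip end
            elif start is not None and t > end:
                spans.append((start, end))          # detection event over
                start = None
        frame_idx += 1
    cap.release()
    if start is not None:
        spans.append((start, end))  # close a span still open at EOF
    return spans
```

Cutting the returned `(start, end)` spans out of the source video into the `filename_clips` directory would then be a separate encoding step, e.g. with FFmpeg.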
Setup
ACTION is written in Python and requires a number of dependencies and large machine learning models (~778MB) to be installed and downloaded.
The easiest way to use it is with the pixi package manager. Pixi installs everything you need into a local .pixi folder (i.e., at the root of the project), without needing to modify your system.
Installation Steps
- Download the Source code (`zip` or `tar.gz`) from the Releases page, or use Git to clone this repo using `git clone https://github.com/humphrem/action.git`
- Install pixi using the instructions for your operating system. NOTE: on Windows, if using the `iwr` command, make sure you are using PowerShell vs. cmd.exe/Terminal, or use the MSI Windows Installer.
- Start a terminal and navigate to the root of the ACTION project folder you just downloaded or cloned, `cd action`
- Enter the command `pixi run setup` to install dependencies and to download the AI models and sample videos (NOTE: the models are large, ~778MB, and will take some time to download).
```sh
git clone https://github.com/humphrem/action.git
cd action
pixi run setup
```
When setup is complete, you will have two additional directories:
- `video` - sample video files you can use to test
- `models` - the ONNX models needed to do the detections
Using the Pixi Shell Environment
Each time you want to use ACTION, open a terminal, navigate to the ACTION folder, and start a shell with pixi:
```sh
pixi shell
```
This will make all of the dependencies installed with `pixi run setup` available.
When you are done, you can exit the pixi shell by using:
```sh
exit
```
Running ACTION
With all dependencies installed and the models downloaded to the models/ directory, you can now run ACTION:
```sh
$ pixi shell
$ python3 action.py
usage: action.py [-h] [-e {terrestrial,aquatic}] [-b BUFFER] [-c CONFIDENCE]
                 [-m MIN_DURATION] [-f SKIP_FRAMES] [-d] [-o OUTPUT_DIR] [-s]
                 [-i] [--log-level {DEBUG,INFO,WARNING,ERROR}]
                 filename [filename ...]
action.py: error: the following arguments are required: filename
```
> [!NOTE]
> On Unix systems, you can also use `./action.py` without `python3`.
Options
ACTION can be configured to run in different ways using various arguments and flags.
| Option | Description | Example |
| --- | --- | --- |
| `filename` | Path to a video file, multiple video files, or a glob pattern. | `./video/*.mov` |
| `-e`, `--environment` | Type of camera environment, either aquatic or terrestrial. Defaults to aquatic. | `--environment terrestrial` |
| `-b`, `--buffer` | Number of seconds to add before and after detection. Cannot be negative. Defaults to 1.0 for aquatic and 5.0 for terrestrial. | `--buffer 1.0` |
| `-c`, `--confidence` | Confidence threshold for detection. Must be greater than 0.0 and less than 1.0. Defaults to 0.50. | `--confidence 0.45` |
| `-m`, `--minimum-duration` | Minimum duration for clips in seconds. Must be greater than 0.0. Defaults to 3.0 for aquatic and 10.0 for terrestrial. | `--minimum-duration 2.0` |
| `-f`, `--frames-to-skip` | Number of frames to skip when detecting. Cannot be negative. Defaults to half the frame rate. | `--frames-to-skip 15` |
| `-d`, `--delete-previous-clips` | Whether to delete clips from previous interrupted or old runs before processing a video again. | `--delete-previous-clips` |
| `-o`, `--output-dir` | Output directory to use for all clips. | `--output-dir ./output` |
| `-s`, `--show-detections` | Whether to visually show detection frames with bounding boxes. | `--show-detections` |
| `-i`, `--include-bbox-images` | Whether to include the bounding box images for the frames that trigger or extend each detection event, along with the videos in the clips directory. | `--include-bbox-images` |
| `--log-level` | Logging level. Can be DEBUG, INFO, WARNING, or ERROR. Defaults to INFO. | `--log-level DEBUG` |
> [!NOTE]
> The options with `-` or `--` are optional, while `filename` is a required argument.
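To see how these options interact, here is a small hypothetical sketch (not ACTION's actual rules, which may differ) of how `--buffer` and `--minimum-duration` could combine into a clip's final bounds, plus the documented `--frames-to-skip` default:

```python
def clip_bounds(detect_start_s, detect_end_s, buffer_s, min_duration_s):
    """Enforce the minimum duration on the detection span, then pad
    both ends with the buffer. A hypothetical sketch, not ACTION's
    actual code, but consistent with the README arithmetic below."""
    if detect_end_s - detect_start_s < min_duration_s:
        detect_end_s = detect_start_s + min_duration_s
    return max(0.0, detect_start_s - buffer_s), detect_end_s + buffer_s

def default_skip_frames(fps):
    """Per the table above, -f defaults to half the frame rate."""
    return int(fps // 2)

# A brief detection at 10.0s-11.0s with -b 1.0 and -m 3.0 becomes a
# 5.0 second clip (1.0 + 3.0 + 1.0), matching the aquatic example below.
print(clip_bounds(10.0, 11.0, buffer_s=1.0, min_duration_s=3.0))  # (9.0, 14.0)
print(default_skip_frames(30))  # 15: at 30 fps, sample ~2 frames/sec
```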
Examples
To process a video named recording.mov using default settings, specify only the filename:
```sh
python3 action.py recording.mov
```
You can also include multiple filenames:
```sh
python3 action.py recording1.mov recording2.mov recording3.mov
```
Or use a file pattern:
```sh
python3 action.py ./videos/*.avi
```
Many other options can be altered (see above) to process videos in specific ways. For example:
```sh
python3 action.py ./video/aquatic.mov -c 0.60 -m 3.0 -s -b 1.0 -d -e aquatic
```
This would process the file `./video/aquatic.mov`, delete clips from a previous run (e.g. to re-analyze the same video with new settings), use the YOLO-Fish model, set a confidence threshold of 0.60 (i.e. include fish detections with confidence 0.60 and higher), make all clips 3.0 seconds minimum with a 1.0 second buffer added to the start and end of each clip (i.e. 1.0 + 3.0 + 1.0 = 5.0 seconds), and visually show each initial detection using bounding boxes on the video frame.
```sh
python3 action.py ./video/terrestrial.mov -c 0.45 -m 8.0 -b 2.0 -e terrestrial -f 25
```
This would process the file `./video/terrestrial.mov`, use the Megadetector model, set a confidence threshold of 0.45 (i.e. include animal detections with confidence 0.45 and higher), make all clips 8.0 seconds minimum with a 2.0 second buffer added to the start and end of each clip (i.e. 2.0 + 8.0 + 2.0 = 12.0 seconds), and run detections on every 25th frame in the video.
Example Bounding Box Images
If either the `-s`/`--show-detections` or `-i`/`--include-bbox-images` flag is included, bounding boxes and confidence scores are displayed on screen (with `-s`) or written to the clips directory (with `-i`). These can be helpful when trying to understand what triggered a detection event, or what caused it to be extended.
Here are some examples of aquatic and terrestrial bounding box images.
Aquatic Examples

Terrestrial Examples

Owner
- Name: Morgan Humphrey
- Login: humphrem
- Kind: user
- Repositories: 1
- Profile: https://github.com/humphrem
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
title: 'Automated Camera Trapping Identification and Organization Network (ACTION)'
message: 'If you use this software, please cite it as below.'
type: software
authors:
  - family-names: Humphrey
    given-names: Morgan
  - family-names: Humphrey
    given-names: David
repository-code: 'https://github.com/humphrem/action'
url: 'https://github.com/humphrem/action/blob/main/README.md'
repository-artifact: 'https://github.com/humphrem/action/releases'
keywords:
  - Camera Trap
  - Conservation
  - Machine Learning
license: Apache-2.0
commit: v1.0.1
version: 1.0.1
date-released: 2023-10-30
doi: 10.5281/zenodo.10056500
```
GitHub Events
Total
- Watch event: 3
- Delete event: 3
- Issue comment event: 1
- Push event: 4
- Pull request event: 5
- Create event: 3
Last Year
- Watch event: 3
- Delete event: 3
- Issue comment event: 1
- Push event: 4
- Pull request event: 5
- Create event: 3
Committers
Last synced: about 2 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| David Humphrey | d****y@g****m | 30 |
| humphrem | m****y@g****m | 16 |
Issues and Pull Requests
Last synced: about 2 years ago
All Time
- Total issues: 6
- Total pull requests: 18
- Average time to close issues: 1 day
- Average time to close pull requests: about 4 hours
- Total issue authors: 2
- Total pull request authors: 1
- Average comments per issue: 0.5
- Average comments per pull request: 0.17
- Merged pull requests: 18
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 6
- Pull requests: 18
- Average time to close issues: 1 day
- Average time to close pull requests: about 4 hours
- Issue authors: 2
- Pull request authors: 1
- Average comments per issue: 0.5
- Average comments per pull request: 0.17
- Merged pull requests: 18
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- humphrem (5)
- humphd (3)
Pull Request Authors
- humphd (20)