aerial_wildlife_detection

Tools for detecting wildlife in aerial images using active learning

https://github.com/microsoft/aerial_wildlife_detection

Science Score: 46.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, ieee.org
  • Committers with academic emails
    2 of 14 committers (14.3%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.5%) to scientific vocabulary

Keywords

active-learning aerial-imagery aiforearth conservation wildlife

Keywords from Contributors

cameratrap ecology megadetector camera-traps
Last synced: 6 months ago

Repository

Tools for detecting wildlife in aerial images using active learning

Basic Info
  • Host: GitHub
  • Owner: microsoft
  • License: mit
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 33.5 MB
Statistics
  • Stars: 238
  • Watchers: 23
  • Forks: 59
  • Open Issues: 28
  • Releases: 4
Topics
active-learning aerial-imagery aiforearth conservation wildlife
Created over 6 years ago · Last pushed over 1 year ago
Metadata Files
Readme · License · Code of conduct · Security

readme.md

AIDE: Annotation Interface for Data-driven Ecology

AIDE is two things in one: a tool for manually annotating images and a tool for training and running machine (deep) learning models. Those two things are coupled in an active learning loop: the human annotates a few images, the system trains a model, that model is used to make predictions and to select more images for the human to annotate, etc.
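To make that loop concrete, here is a self-contained toy sketch in Python. None of these names are part of AIDE's actual code or API; they are placeholders that only illustrate the annotate / train / predict-and-select cycle described above.

```python
# Toy sketch of the human-in-the-loop cycle described above (illustrative only;
# none of these names are part of AIDE's actual API).
import random

def train_model(labeled):
    # Stand-in for model training: remember the mean label of what was annotated.
    labels = [y for _, y in labeled]
    return sum(labels) / len(labels)

def uncertainty(model, image):
    # Stand-in for an Active Learning criterion, e.g. prediction uncertainty.
    return random.random()

def human_annotates(image):
    # Stand-in for the manual annotation step in the web front-end.
    return random.choice([0, 1])

unlabeled_pool = list(range(200))      # pretend these are image IDs
labeled, model = [], None
for round_idx in range(5):
    # 1. Human annotates a small batch of images
    batch, unlabeled_pool = unlabeled_pool[:20], unlabeled_pool[20:]
    labeled += [(img, human_annotates(img)) for img in batch]
    # 2. (Re-)train the model on everything labeled so far
    model = train_model(labeled)
    # 3. Score the remaining pool and move the most informative images to the front
    unlabeled_pool.sort(key=lambda img: uncertainty(model, img), reverse=True)
```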

More generally, AIDE is a modular web framework for labeling image datasets with AI assistance. AIDE is configurable for a variety of tasks, but it is particularly intended for ecological applications, such as accelerating wildlife surveys that use aerial images.

AIDE is primarily developed by Benjamin Kellenberger, supported by the Microsoft AI for Earth program.

Highlights

  • Powerful: AIDE explicitly integrates humans and AI models in an annotation loop.
  • Fast: AIDE has been designed with speed in mind, both in terms of computations and workflow.
  • Flexible: The framework allows full customizability, from hyperparameters to models to annotation types to libraries. It provides:
    • Support for image classification, point annotations, and bounding boxes (object detection)
    • Many deep learning-based AI models and Active Learning criteria built-in
    • Interfaces for custom AI models and criteria, using any framework or library you want (see how to write your own model).
  • Fully featured: Beyond image labeling and model training, AIDE has management and graphical user/machine performance evaluation tools built-in, right in the web browser, allowing for advanced, manual label quality checks.
  • Modular: AIDE is separated into individual modules, each of which can be run on separate machines for scalability. It even supports on-the-fly addition of computational workers for computationally intensive model training!

AIDE highlights

News

AIDE v2.1 is out

AIDE v2.1 is out! This includes a new interactive installer for Debian/Ubuntu systems as well as a plethora of bug fixes.

Older news

Demo

A demo of AIDE v2 can be accessed here.

This demo allows exploring the annotation front-end with a number of example datasets, including:
  • Image labels on the Snapshot Serengeti camera traps dataset
  • Points on the VGG Penguins dataset
  • Bounding boxes on the NOAA Arctic Seals aerial imagery
  • Semantic segmentation on the Chesapeake Land Cover satellite imagery

Installation and launching AIDE

See here.

AI models in AIDE

Built-in AI models

AIDE ships with a set of built-in models that can be configured and customized:

| Label type | AI model | Model variants / backbones | More info |
|---|---|---|---|
| Image labels | AlexNet | AlexNet | paper |
| | DenseNet | DenseNet-161 | paper |
| | MNASNet | MNASNet | paper |
| | MobileNet | MobileNet V2 | paper |
| | ResNet | ResNet-18; ResNet-34; ResNet-50; ResNet-101; ResNet-152 | paper |
| | ResNeXt | ResNeXt-50; ResNeXt-101 | paper |
| | ShuffleNet | ShuffleNet V2 | paper |
| | SqueezeNet | SqueezeNet | paper |
| | VGG | VGG-16 | paper |
| | Wide ResNet | Wide ResNet-50; Wide ResNet-101 | info |
| Bounding boxes | Faster R-CNN | with ResNet-50 (PASCAL VOC); with ResNet-50 (MS-COCO); with ResNeXt-101 FPN (MS-COCO) | paper, implementation details |
| | RetinaNet | with ResNet-50 FPN (MS-COCO); with ResNet-101 FPN (MS-COCO) | paper, implementation details |
| | TridentNet | with ResNet-50; ResNet-101 (MS-COCO) | paper, implementation details |
| Segmentation masks | DeepLabV3+ | with modified ResNet-101 (Cityscapes) | paper, implementation details |

All models can be configured in various ways through the AI model settings page in the web browser. They are all pre-trained on ImageNet unless specified otherwise. To use one of the built-in models, simply import the desired one into your project through the Model Marketplace in the web browser and start training/predicting!
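For orientation only: the classification backbones in the table correspond to standard torchvision architectures. A minimal sketch of loading one of them with ImageNet weights outside of AIDE follows; this is plain torchvision usage (version 0.10, as pinned in the project's requirements), not AIDE's internal model-loading code.

```python
# Loading one of the listed classification backbones with ImageNet weights,
# using plain torchvision (0.10, as pinned in requirements.txt).
import torch
import torchvision

model = torchvision.models.resnet50(pretrained=True)  # ImageNet pre-trained ResNet-50
model.eval()

with torch.no_grad():
    dummy = torch.zeros(1, 3, 224, 224)  # one RGB image, 224x224
    logits = model(dummy)                # shape (1, 1000): ImageNet class scores
print(logits.shape)
```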

Writing your own AI model

AIDE is fully modular and supports custom AI models, as long as they provide a Python interface and can handle at least one of the different annotation and prediction types appropriately. We greatly welcome contributions and are happy to help in the implementation of your custom models!

See here for instructions on implementing custom models into AIDE.
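As a rough orientation of what such a Python interface tends to look like, here is a hypothetical skeleton. The class name, method names, and signatures below are assumptions made for illustration; the authoritative interface is specified in the instructions linked above.

```python
# Hypothetical skeleton of a custom model wrapper (names and signatures are
# illustrative assumptions; consult the linked instructions for the real interface).

class MyCustomModel:
    def __init__(self, config, **kwargs):
        # Project and model settings handed in by the framework (assumed).
        self.config = config

    def train(self, state, data):
        # Fit the underlying model on the annotated images in `data`
        # and return an updated, serializable model state.
        updated_state = state  # ... run your training code here ...
        return updated_state

    def inference(self, state, data):
        # Run the model on unlabeled images and return predictions in one of
        # the supported types (labels, points, bounding boxes, segmentation).
        predictions = {}       # ... run your prediction code here ...
        return predictions
```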

Publications and References

Please cite the following paper if you use AIDE in your work:

Kellenberger, Benjamin, Devis Tuia, and Dan Morris. "AIDE: Accelerating image-based ecological surveys with interactive machine learning." Methods in Ecology and Evolution 11(12) (2020): 1716-1727. DOI: 10.1111/2041-210X.13489.

BibTeX:

@article{kellenberger2020aide,
  title     = {AIDE: Accelerating image-based ecological surveys with interactive machine learning},
  author    = {Kellenberger, Benjamin and Tuia, Devis and Morris, Dan},
  journal   = {Methods in Ecology and Evolution},
  volume    = {11},
  number    = {12},
  pages     = {1716--1727},
  year      = {2020},
  publisher = {Wiley Online Library}
}

If you use AIDE, we would be happy to hear from you! Please send us an e-mail with a little bit of info about your use case; besides getting to know the user base of our software, this also enables us to provide more tailored support if needed. Thank you very much.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Owner

  • Name: Microsoft
  • Login: microsoft
  • Kind: organization
  • Email: opensource@microsoft.com
  • Location: Redmond, WA

Open source projects and samples from Microsoft

GitHub Events

Total
  • Watch event: 9
  • Fork event: 2
Last Year
  • Watch event: 9
  • Fork event: 2

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 744
  • Total Committers: 14
  • Avg Commits per committer: 53.143
  • Development Distribution Score (DDS): 0.536
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name · Email · Commits
  • bkellenb · k****u@k****t · 345
  • bkellenb · p****n@p****t · 159
  • bkellenb · b****r@e****h · 142
  • = · = · 47
  • Dan Morris · d****s@c****u · 18
  • szjarek · s****k@o****m · 15
  • amritagupta · g****0@g****m · 9
  • = · b****r@w****l · 2
  • Jaroslaw Szczegielniak · j****k@o****k · 2
  • bkellenb · r****t@k****t · 1
  • Francesco Frassinelli · f****i@n****o · 1
  • Matthew Skiffington · 4****f · 1
  • junxnone · j****2@g****m · 1
  • microsoft-github-policy-service[bot] · 7****] · 1
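The Development Distribution Score reported above appears consistent with the common definition of one minus the top committer's share of all commits; a quick check under that assumption:

```python
# DDS sanity check, assuming DDS = 1 - (top committer's commits / total commits).
total_commits = 744
top_committer_commits = 345   # largest single committer identity listed above
dds = 1 - top_committer_commits / total_commits
print(round(dds, 3))          # 0.536, matching the reported all-time score
```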

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 47
  • Total pull requests: 19
  • Average time to close issues: 23 days
  • Average time to close pull requests: 3 months
  • Total issue authors: 21
  • Total pull request authors: 12
  • Average comments per issue: 2.36
  • Average comments per pull request: 0.74
  • Merged pull requests: 6
  • Bot issues: 0
  • Bot pull requests: 2
Past Year
  • Issues: 0
  • Pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 1
Top Authors
Issue Authors
  • ctorney (9)
  • frafra (7)
  • mikeyEcology (3)
  • simbamangu (3)
  • AllenDun (3)
  • MattSkiff (3)
  • VLucet (2)
  • scfulford (2)
  • YangZhangMizzou (2)
  • bkellenb (2)
  • robmarkcole (1)
  • stewartmacdonald (1)
  • valentinitnelav (1)
  • bw4sz (1)
  • marireeves (1)
Pull Request Authors
  • frafra (5)
  • dependabot[bot] (2)
  • Infinite-Blue-1042 (2)
  • MattSkiff (2)
  • szjarek (2)
  • microsoft-github-policy-service[bot] (1)
  • kushalsingh-00 (1)
  • bkellenb (1)
  • junxnone (1)
  • Forchapeatl (1)
  • ctorney (1)
  • agentmorris (1)
Top Labels
Issue Labels: (none)
Pull Request Labels: dependencies (2)

Dependencies

docker/requirements.txt pypi
  • Pillow >=2.2.1
  • bcrypt >=3.1.6
  • bottle >=0.12
  • celery *
  • detectron2 *
  • gunicorn >=19.9.0
  • netifaces >=0.10.9
  • numpy *
  • opencv-python *
  • psycopg2 >=2.8.2
  • python-dateutil *
  • requests *
  • torch ==1.9.0
  • torchvision ==0.10.0
  • tqdm >=4.32.1
requirements.txt pypi
  • Pillow >=2.2.1
  • bcrypt >=3.1.6
  • bottle >=0.12
  • celery *
  • detectron2 *
  • gunicorn >=19.9.0
  • netifaces >=0.10.9
  • numpy *
  • opencv-python *
  • psycopg2-binary >=2.8.2
  • python-dateutil *
  • requests *
  • torch ==1.9.0
  • torchvision ==0.10.0
  • tqdm >=4.32.1
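Both requirements files pin the deep learning stack exactly (torch 1.9.0, torchvision 0.10.0). If an environment was built from them, a quick check like the following, purely illustrative, should report the pinned versions:

```python
# Illustrative check that the installed stack matches the pins above.
import torch
import torchvision

print(torch.__version__)          # expected to start with "1.9.0"
print(torchvision.__version__)    # expected to start with "0.10.0"
print(torch.cuda.is_available())  # True if a CUDA runtime is present
                                  # (the Docker base image listed below targets CUDA 11.1)
```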
docker/Dockerfile docker
  • pytorch/pytorch 1.9.0-cuda11.1-cudnn8-devel build
docker/docker-compose.yml docker
  • aide_app latest
setup.py pypi