cameratrapai

AI models trained by Google to classify species in images from motion-triggered wildlife cameras.

https://github.com/google/cameratrapai

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.2%) to scientific vocabulary
Last synced: 6 months ago

Repository

AI models trained by Google to classify species in images from motion-triggered wildlife cameras.

Basic Info
  • Host: GitHub
  • Owner: google
  • License: apache-2.0
  • Language: Jupyter Notebook
  • Default Branch: main
  • Homepage:
  • Size: 11.5 MB
Statistics
  • Stars: 334
  • Watchers: 7
  • Forks: 39
  • Open Issues: 3
  • Releases: 0
Created over 1 year ago · Last pushed 7 months ago
Metadata Files
Readme Contributing License Citation

README.md

SpeciesNet

An ensemble of AI models for classifying wildlife in camera trap images.


Overview

Effective wildlife monitoring relies heavily on motion-triggered wildlife cameras, or “camera traps”, which generate vast quantities of image data. Manual processing of these images is a significant bottleneck. AI can accelerate that processing, helping conservation practitioners spend more time on conservation, and less time reviewing images.

This repository hosts code for running an ensemble of two AI models: (1) an object detector that finds objects of interest in wildlife camera images, and (2) an image classifier that classifies those objects to the species level. This ensemble is used for species recognition in the Wildlife Insights platform.

The object detector used in this ensemble is MegaDetector, which finds animals, humans, and vehicles in camera trap images, but does not classify animals to species level.

The species classifier (SpeciesNet) was trained at Google using a large dataset of camera trap images and an EfficientNet V2 M architecture. It is designed to classify images into one of more than 2000 labels, covering diverse animal species, higher-level taxa (like "mammalia" or "felidae"), and non-animal classes ("blank", "vehicle"). SpeciesNet has been trained on a geographically diverse dataset of over 65M images, including curated images from the Wildlife Insights user community, as well as images from publicly-available repositories.

The SpeciesNet ensemble combines these two models using a set of heuristics and, optionally, geographic information to assign each image to a single category. See the "ensemble decision-making" section for more information about how the ensemble combines information for each image to make a single prediction.

The full details of the models and the ensemble process are discussed in this research paper:

Gadot T, Istrate Ș, Kim H, Morris D, Beery S, Birch T, Ahumada J. To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images. IET Computer Vision. 2024 Dec;18(8):1193-208.

Running SpeciesNet

Setting up your Python environment

The instructions on this page will assume that you have a Python virtual environment set up. If you have not installed Python, or you are not familiar with Python virtual environments, start with our installing Python page. If you see a prompt that looks something like the following, you're all set to proceed to the next step:

[screenshot: speciesnet conda prompt]

Installing the SpeciesNet Python package

You can install the SpeciesNet Python package via:

pip install speciesnet

If you are on a Mac, and you receive an error during this step, add the "--use-pep517" option, like this:

pip install speciesnet --use-pep517

To confirm that the package has been installed, you can run:

python -m speciesnet.scripts.run_model --help

You should see help text related to the main script you'll use to run SpeciesNet.

Running the models

The easiest way to run the ensemble is via the "run_model" script, like this:

python -m speciesnet.scripts.run_model --folders "c:\your\image\folder" --predictions_json "c:\your\output\file.json"

Change c:\your\image\folder to the root folder where your images live, and change c:\your\output\file.json to the location where you want to put the output file containing the SpeciesNet results.

This will automatically download and run the detector and the classifier. The command periodically writes results to the output file, and if it doesn't finish (e.g. you have to cancel or reboot), you can just run the same command again, and it will pick up where it left off.

These commands produce an output file in .json format; for details about this format, and information about converting it to other formats, see the "output format" section below.

You can also run the three steps (detector, classifier, ensemble) separately; see the "running each component separately" section for more information.

In the above example, we didn't tell the ensemble what part of the world your images came from, so it may, for example, predict a kangaroo for an image from England. If you want to let our ensemble filter predictions geographically, add, for example:

--country GBR

You can use any ISO 3166-1 alpha-3 three-letter country code.

If your images are from the USA, you can also specify a state name using the two-letter state abbreviation, by adding, for example:

--admin1_region CA
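If you are driving run_model from a Python script, the same invocation can be assembled programmatically; a minimal sketch (the build_run_model_command helper is hypothetical, but the flags are the ones documented above):

```python
import subprocess

def build_run_model_command(image_folder, output_json, country=None, admin1_region=None):
    """Assemble a run_model command line (illustrative helper)."""
    cmd = [
        "python", "-m", "speciesnet.scripts.run_model",
        "--folders", image_folder,
        "--predictions_json", output_json,
    ]
    if country:
        cmd += ["--country", country]  # ISO 3166-1 alpha-3, e.g. "GBR"
    if admin1_region:
        cmd += ["--admin1_region", admin1_region]  # two-letter US state code
    return cmd

cmd = build_run_model_command("my_images", "results.json",
                              country="USA", admin1_region="CA")
# subprocess.run(cmd, check=True)  # uncomment to actually run the model
```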

Using GPUs

If you don't have an NVIDIA GPU, you can ignore this section.

If you have an NVIDIA GPU, SpeciesNet should use it. If SpeciesNet is using your GPU, when you start run_model, in the output, you will see something like this:

Loaded SpeciesNetClassifier in 0.96 seconds on CUDA.
Loaded SpeciesNetDetector in 0.7 seconds on CUDA

"CUDA" is good news; it means the models are running on your GPU.

If SpeciesNet is not using your GPU, you will see something like this instead:

Loaded SpeciesNetClassifier in 9.45 seconds on CPU
Loaded SpeciesNetDetector in 0.57 seconds on CPU

You can also directly check whether SpeciesNet can see your GPU by running:

python -m speciesnet.scripts.gpu_test

99% of the time, after you install SpeciesNet on Linux, it will correctly see your GPU right away. On Windows, you will likely need to take at least one more step:

  1. Install the GPU version of PyTorch, by activating your speciesnet Python environment (e.g. by running "conda activate speciesnet"), then running:

pip install torch torchvision --upgrade --force-reinstall --index-url https://download.pytorch.org/whl/cu118

  2. If the GPU doesn't work immediately after that step, update your GPU driver, then reboot. Really, don't skip the reboot: most problems related to GPU access can be fixed by upgrading your driver and rebooting.
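You can also check GPU visibility from Python, since SpeciesNet runs on PyTorch; a minimal sketch that falls back gracefully if PyTorch is missing:

```python
try:
    import torch  # PyTorch is a SpeciesNet dependency
    device = "CUDA" if torch.cuda.is_available() else "CPU"
except ImportError:
    device = "unknown (PyTorch not installed)"
print(f"Models would load on: {device}")
```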

Running each component separately

Rather than running everything at once, you may want to run the detection, classification, and ensemble steps separately. You can do that like this:

  • Run the detector:

python -m speciesnet.scripts.run_model --detector_only --folders "c:\your\image\folder" --predictions_json "c:\your_detector_output_file.json"

  • Run the classifier, passing the file that you just created, which contains detection results:

python -m speciesnet.scripts.run_model --classifier_only --folders "c:\your\image\folder" --predictions_json "c:\your_classifier_output_file.json" --detections_json "c:\your_detector_output_file.json"

  • Run the ensemble step, passing both the files that you just created, which contain the detection and classification results:

python -m speciesnet.scripts.run_model --ensemble_only --folders "c:\your\image\folder" --predictions_json "c:\your_ensemble_output_file.json" --detections_json "c:\your_detector_output_file.json" --classifications_json "c:\your_classifier_output_file.json" --country CAN

Note that in this example, we have specified the country code only for the ensemble step; the geofencing is part of the ensemble component, so the country code is only relevant for this step.
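If you script this three-step pipeline, each stage's output file feeds the next stage; a minimal sketch that builds the three command lines (folder and file names are placeholders):

```python
def step_command(flag, predictions_json, extra=()):
    """Build the command line for one stage of the pipeline (sketch)."""
    return ["python", "-m", "speciesnet.scripts.run_model", flag,
            "--folders", "images",
            "--predictions_json", predictions_json, *extra]

# Detector -> classifier -> ensemble, wiring each step's output into the next:
commands = [
    step_command("--detector_only", "detections.json"),
    step_command("--classifier_only", "classifications.json",
                 ["--detections_json", "detections.json"]),
    step_command("--ensemble_only", "ensemble.json",
                 ["--detections_json", "detections.json",
                  "--classifications_json", "classifications.json",
                  "--country", "CAN"]),
]
# Each command would then be run in order, e.g. subprocess.run(cmd, check=True).
```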

Downloading SpeciesNet model weights directly

The run_model.py script recommended above will download model weights automatically. If you want to use the SpeciesNet model weights outside of our script, or if you plan to be offline when you first run the script, you can download model weights directly from Kaggle. Running our ensemble also requires MegaDetector, so in this list of links, we also include a direct link to the MegaDetector model weights.

Contacting us

If you have issues or questions, either file an issue or email us at cameratraps@google.com.

Citing SpeciesNet

If you use this model, please cite:

@article{gadot2024crop,
  title={To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images},
  author={Gadot, Tomer and Istrate, Ștefan and Kim, Hyungwon and Morris, Dan and Beery, Sara and Birch, Tanya and Ahumada, Jorge},
  journal={IET Computer Vision},
  year={2024},
  publisher={Wiley Online Library}
}

Alternative installation variants

Depending on how you plan to run SpeciesNet, you may want to install additional dependencies:

  • Minimal requirements:

pip install speciesnet

  • Minimal + notebook requirements:

pip install speciesnet[notebooks]

  • Minimal + server requirements:

pip install speciesnet[server]

  • Minimal + cloud requirements (az / gs / s3), e.g.:

pip install speciesnet[gs]

  • Any combination of the above requirements, e.g.:

pip install speciesnet[notebooks,server]

Supported models

There are two variants of the SpeciesNet classifier, which lend themselves to different ensemble strategies:

  • v4.0.1a (default): Always-crop model, i.e. we run the detector first and crop the image to the top detection bounding box before feeding it to the species classifier.
  • v4.0.1b: Full-image model, i.e. we run both the detector and the species classifier on the full image, independently.

run_model.py defaults to v4.0.1a, but you can specify one model or the other using the --model option, for example:

  • --model kaggle:google/speciesnet/pyTorch/v4.0.1a
  • --model kaggle:google/speciesnet/pyTorch/v4.0.1b

If you are a DIY type and you plan to run the models outside of our ensemble, a couple of notes:

  • The crop classifier (v4.0.1a) expects images to be cropped tightly to animals, then resized to 480x480px.
  • The whole-image classifier (v4.0.1b) expects images to have been cropped vertically to remove some pixels from the top and bottom, then resized to 480x480px.

See classifier.py to see how preprocessing is implemented for both classifiers.
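If you are preprocessing images yourself for the crop classifier, note that MegaDetector bounding boxes are normalized; a small sketch of converting a normalized (xmin, ymin, width, height) box to pixel coordinates before cropping (the 480x480 resize itself is left to your image library):

```python
def bbox_to_pixels(bbox, image_width, image_height):
    """Convert a normalized MegaDetector-style (xmin, ymin, width, height) box
    to an integer (left, top, right, bottom) pixel box suitable for cropping."""
    xmin, ymin, w, h = bbox
    left = round(xmin * image_width)
    top = round(ymin * image_height)
    right = round((xmin + w) * image_width)
    bottom = round((ymin + h) * image_height)
    return left, top, right, bottom

# A detection covering the center half of a 2000x1500 image:
box = bbox_to_pixels((0.25, 0.25, 0.5, 0.5), 2000, 1500)
# box == (500, 375, 1500, 1125); this crop would then be resized to 480x480
```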

Input format

In the above examples, we demonstrate calling run_model.py using the --folders option to point to your images, and optionally using the --country options to tell the ensemble what country your images came from. run_model.py can also load a list of images from a .json file in the following format; this is particularly useful if you want to specify different countries/states for different subsets of your images.

When you call the model, you can either prepare your requests to match this format, or, in some cases, rely on other supported formats being converted to it automatically.

{
  "instances": [
    {
      "filepath": str => Image filepath
      "country": str (optional) => 3-letter country code (ISO 3166-1 Alpha-3) for the location where the image was taken
      "admin1_region": str (optional) => First-level administrative division (in ISO 3166-2 format) within the country above
      "latitude": float (optional) => Latitude where the image was taken
      "longitude": float (optional) => Longitude where the image was taken
    },
    ... => A request can contain multiple instances in the format above.
  ]
}

admin1_region is currently only supported in the US, where valid values for admin1_region are two-letter state codes.

Latitude and longitude are only used to determine admin1_region, so if you are specifying a state code, you don't need to specify latitude and longitude.
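For example, an instances file for images from two different locations can be written like this (file and folder names are placeholders; see run_model --help for the option that takes this file):

```python
import json

# Two images from different locations; fields follow the instances format above.
request = {
    "instances": [
        {"filepath": "images/site1/img001.jpg", "country": "GBR"},
        {"filepath": "images/site2/img002.jpg", "country": "USA",
         "admin1_region": "CA"},
    ]
}
with open("instances.json", "w") as f:
    json.dump(request, f, indent=2)
```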

Output format

run_model.py produces output in .json format, containing an array called "predictions", with one element per image. We provide a script to convert this format to the format used by MegaDetector, which can be imported into Timelapse; see speciesnet_to_md.py.

Each element always contains a field called "filepath"; the exact content of each element will vary depending on which components of the ensemble you ran.

Full ensemble

In the full ensemble output, the "classifications" field contains raw classifier output, before geofencing is applied. So even if you specify a country code, you may see taxa in the "classifications" field that are not found in the country you specified. The "prediction" field is the result of integrating the classification, detection, and geofencing information; if you specify a country code, the "prediction" field should only contain taxa that are found in the country you specified.

{
  "predictions": [
    {
      "filepath": str => Image filepath.
      "failures": list[str] (optional) => List of internal components that failed during prediction (e.g. "CLASSIFIER", "DETECTOR", "GEOLOCATION"). If absent, the prediction was successful.
      "country": str (optional) => 3-letter country code (ISO 3166-1 Alpha-3) for the location where the image was taken. It can be overwritten if the country from the request doesn't match the country of (latitude, longitude).
      "admin1_region": str (optional) => First-level administrative division (in ISO 3166-2 format) within the country above. If not provided in the request, it can be computed from (latitude, longitude) when those coordinates are specified. Included in the response only for some countries that are used in geofencing (e.g. "USA").
      "latitude": float (optional) => Latitude where the image was taken; included only if (latitude, longitude) were present in the request.
      "longitude": float (optional) => Longitude where the image was taken; included only if (latitude, longitude) were present in the request.
      "classifications": { => dict (optional) => Top-5 classifications. Included only if "CLASSIFIER" is not part of the "failures" field.
        "classes": list[str] => Top-5 classes predicted by the classifier, in decreasing order of their scores below.
        "scores": list[float] => Scores corresponding to the top-5 classes, in decreasing order.
        "target_classes": list[str] (optional) => List of target classes; present only if target classes are passed as arguments.
        "target_logits": list[float] (optional) => Raw confidence scores (logits) of the target classes; present only if target classes are passed as arguments.
      },
      "detections": [ => list (optional) => List of detections with confidence scores > 0.01, in decreasing order of their scores. Included only if "DETECTOR" is not part of the "failures" field.
        {
          "category": str => Detection class "1" (= animal), "2" (= human) or "3" (= vehicle) from MegaDetector's raw output.
          "label": str => Detection class "animal", "human" or "vehicle", matching the "category" field above. Added for readability purposes.
          "conf": float => Confidence score of the current detection.
          "bbox": list[float] => Bounding box coordinates, in (xmin, ymin, width, height) format, of the current detection. Coordinates are normalized to the [0.0, 1.0] range, relative to the image dimensions.
        },
        ... => A prediction can contain zero or multiple detections.
      ],
      "prediction": str (optional) => Final prediction of the SpeciesNet ensemble. Included only if "CLASSIFIER" and "DETECTOR" are not part of the "failures" field.
      "prediction_score": float (optional) => Final prediction score of the SpeciesNet ensemble. Included only if the "prediction" field above is included.
      "prediction_source": str (optional) => Internal component that produced the final prediction. Used to collect information about which parts of the SpeciesNet ensemble fired. Included only if the "prediction" field above is included.
      "model_version": str => Version of the model that produced the current prediction.
    },
    ... => A response will contain one prediction for each instance in the request.
  ]
}
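Once you have a predictions file, summarizing it is straightforward; a sketch that tallies final predictions, using an inline stand-in for the file contents (the labels here are simplified placeholders, not real SpeciesNet taxonomy strings):

```python
from collections import Counter

# In practice you would load the file, e.g. json.load(open("results.json"));
# an inline stand-in with simplified labels is used here instead.
results = {
    "predictions": [
        {"filepath": "a.jpg", "prediction": "deer", "prediction_score": 0.91},
        {"filepath": "b.jpg", "prediction": "blank", "prediction_score": 0.99},
        {"filepath": "c.jpg", "failures": ["CLASSIFIER"]},
    ]
}
# Failed predictions have no "prediction" field, so tally those separately.
counts = Counter(p.get("prediction", "FAILED") for p in results["predictions"])
# counts == Counter({'deer': 1, 'blank': 1, 'FAILED': 1})
```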

Classifier-only inference

{
  "predictions": [
    {
      "filepath": str => Image filepath.
      "failures": list[str] (optional) => List of internal components that failed during prediction (in this case, only "CLASSIFIER" can be in that list). If absent, the prediction was successful.
      "classifications": { => dict (optional) => Top-5 classifications. Included only if "CLASSIFIER" is not part of the "failures" field.
        "classes": list[str] => Top-5 classes predicted by the classifier, in decreasing order of their scores below.
        "scores": list[float] => Scores corresponding to the top-5 classes, in decreasing order.
        "target_classes": list[str] (optional) => List of target classes; present only if target classes are passed as arguments.
        "target_logits": list[float] (optional) => Raw confidence scores (logits) of the target classes; present only if target classes are passed as arguments.
      }
    },
    ... => A response will contain one prediction for each instance in the request.
  ]
}

Detector-only inference

{
  "predictions": [
    {
      "filepath": str => Image filepath.
      "failures": list[str] (optional) => List of internal components that failed during prediction (in this case, only "DETECTOR" can be in that list). If absent, the prediction was successful.
      "detections": [ => list (optional) => List of detections with confidence scores > 0.01, in decreasing order of their scores. Included only if "DETECTOR" is not part of the "failures" field.
        {
          "category": str => Detection class "1" (= animal), "2" (= human) or "3" (= vehicle) from MegaDetector's raw output.
          "label": str => Detection class "animal", "human" or "vehicle", matching the "category" field above. Added for readability purposes.
          "conf": float => Confidence score of the current detection.
          "bbox": list[float] => Bounding box coordinates, in (xmin, ymin, width, height) format, of the current detection. Coordinates are normalized to the [0.0, 1.0] range, relative to the image dimensions.
        },
        ... => A prediction can contain zero or multiple detections.
      ]
    },
    ... => A response will contain one prediction for each instance in the request.
  ]
}

Visualizing SpeciesNet output

As per above, many users will work with SpeciesNet results in open-source tools like Timelapse, which support the file format used by MegaDetector (the format is described here). Consequently, we provide a speciesnet_to_md script to convert from the SpeciesNet output format to this format.

However, if you want to use the command line or Python code to visualize SpeciesNet results, we recommend using the visualization tools provided in the megadetector-utils Python package. For example, if you just ran SpeciesNet on some images like this:

IMAGE_DIR=/path/to/your/images
python -m speciesnet.scripts.run_model --folders ${IMAGE_DIR} --predictions_json ${IMAGE_DIR}/speciesnet-results.json

You can use the visualize_detector_output script from the megadetector-utils package, like this:

PREVIEW_DIR=/wherever/you/want/the/output
pip install megadetector-utils
python -m megadetector.visualization.visualize_detector_output ${IMAGE_DIR}/speciesnet-results.json ${PREVIEW_DIR}

That will produce a folder of images with SpeciesNet results visualized on each image. A typical use of this script would also use the --sample argument (to render a random subset of images, if what you want is to quickly grok how SpeciesNet did on a large dataset), and often the --html_output_file argument, to wrap the results in an HTML page that makes it quick to scroll through them. Putting those together will give you pages like these:

To see all the options, run:

python -m megadetector.visualization.visualize_detector_output --help

The other relevant script is postprocess_batch_results, which also renders sample images; but instead of just putting them in a flat folder, this script is designed to let you quickly see samples of detections/non-detections, and samples broken out by species. So, for example, you can do:

python -m megadetector.postprocessing.postprocess_batch_results ${IMAGE_DIR}/speciesnet-results.json ${PREVIEW_DIR}

...to get pages like these:

To see all the options, run:

python -m megadetector.postprocessing.postprocess_batch_results --help

Both of these modules can also be called from Python code instead of from the command line.

Ensemble decision-making

The SpeciesNet ensemble uses multiple steps to predict a single category for each image, combining the strengths of the detector and the classifier.

The ensembling strategy was primarily optimized for minimizing the human effort required to review collections of images. To do that, the guiding principles are:

  • Help users to quickly filter out unwanted images (e.g., blanks): identify as many blank images as possible while minimizing missed animals, which can be more costly than misclassifying a non-blank image as one of the possible animal classes.
  • Provide high-confidence predictions for frequent classes (e.g., deer).
  • Make predictions at the lowest taxonomic level possible, while balancing precision: if the ensemble is not confident enough all the way to the species level, we would rather return a prediction we are confident about at a higher taxonomic level (e.g., family, or sometimes even "animal"), instead of risking an incorrect prediction at the species level.

Here is a breakdown of the different steps:

  1. Input processing: Raw images are preprocessed and passed to both the object detector (MegaDetector) and the image classifier. The type of preprocessing will depend on the selected model. For "always crop" models, images are first processed by the object detector and then cropped based on the detection bounding box before being fed to the classifier. For "full image" models, images are preprocessed independently for both models.

  2. Object detection: The detector identifies potential objects (animals, humans, or vehicles) in the image, providing their bounding box coordinates and confidence scores.

  3. Species classification: The species classifier analyzes the (potentially cropped) image to identify the most likely species present. It provides a list of top-5 species classifications, each with a confidence score. The species classifier is a fully supervised model that classifies images into a fixed set of animal species, higher taxa, and non-animal labels.

  4. Detection-based human/vehicle decisions: If the detector is highly confident about the presence of a human or vehicle, that label will be returned as the final prediction regardless of what the classifier predicts. If the detection is less confident and the classifier also returns human or vehicle as a top-5 prediction, with a reasonable score, that top prediction will be returned. This step prevents high-confidence detector predictions from being overridden by lower-confidence classifier predictions.

  5. Blank decisions: If the classifier predicts "blank" with a high confidence score, and the detector has very low confidence about the presence of an animal (or is absent), that "blank" label is returned as a final prediction. Similarly, if a classification is "blank" with extra-high confidence (above 0.99), that label is returned as a final prediction regardless of the detector's output. This enables the model to filter out images with high confidence in being blank.

  6. Geofencing: If the most likely species is an animal and a location (country and optional admin1 region) is provided for the image, a geofencing rule is applied. If that species is explicitly disallowed for that region based on the available geofencing rules, the prediction will be rolled up (as explained below) to a higher taxonomic level that is on the allow list.

  7. Label rollup: If all of the previous steps do not yield a final prediction, a "rollup" is applied when there is a good classification score for an animal. "Rollup" is the process of propagating the classification predictions to the first matching ancestor in the taxonomy, provided there is a good score at that level. This means the model may assign classifications at the genus, family, order, class, or kingdom level, if those scores are higher than the score at the species level. This is a common strategy for handling the long-tail distributions typical of wildlife datasets.

  8. Detection-based animal decisions: If the detector has a reasonable confidence animal prediction, animal will be returned along with the detector confidence.

  9. Unknown: If no other rule applies, the unknown class is returned as the final prediction, to avoid making low-confidence predictions.

  10. Prediction source: At each step of the prediction workflow, a prediction_source is stored. This will be included in the final results to help diagnose which parts of the overall SpeciesNet ensemble were actually used.
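The steps above can be sketched as a decision cascade; the thresholds, helper names, and ordering details below are illustrative, not the actual values used by the ensemble:

```python
def ensemble_decision(classifications, detections, geofence_allows, rollup):
    """Illustrative cascade mirroring steps 4-9 above.
    classifications/detections: (label, score) pairs, best first;
    geofence_allows/rollup are stand-ins for the real geofencing/taxonomy logic.
    All thresholds below are made up for illustration."""
    top_label, top_score = classifications[0]
    det_label, det_score = detections[0] if detections else ("none", 0.0)

    # 4. Detection-based human/vehicle decisions
    if det_label in ("human", "vehicle") and det_score > 0.8:
        return det_label
    # 5. Blank decisions
    if top_label == "blank" and (top_score > 0.99 or det_score < 0.2):
        return "blank"
    # 6-7. Geofencing, with rollup to an allowed higher taxon if needed
    if top_score > 0.65:
        return top_label if geofence_allows(top_label) else rollup(top_label)
    # 8. Detection-based animal decisions
    if det_label == "animal" and det_score > 0.5:
        return "animal"
    # 9. Unknown
    return "unknown"

# Example: confident classifier prediction, allowed by geofencing
label = ensemble_decision([("puma", 0.90)], [("animal", 0.70)],
                          geofence_allows=lambda s: True,
                          rollup=lambda s: "felidae")
# label == "puma"
```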

Contributing code

If you're interested in contributing to our repo, rather than installing via pip, we recommend cloning the repo, then creating the Python virtual environment for development using the following commands:

python -m venv .env
source .env/bin/activate
pip install -e .[dev]

We use the following tools for testing and validating code:

  • pytest for running tests:

    pytest -vv

  • black for formatting code:

    black .

  • isort for sorting Python imports consistently:

    isort .

  • pylint for linting Python code and flagging various issues:

    pylint . --recursive=yes

  • pyright for static type checking:

    pyright

  • pymarkdown for linting Markdown files:

    pymarkdown scan **/*.md

If you submit a PR to contribute your code back to this repo, you will be asked to sign a contributor license agreement; see CONTRIBUTING.md for more information.

Animal picture

It would be unfortunate if this whole README about camera trap images didn't show you a single camera trap image, so...

giant armadillo

Image credit University of Minnesota, from the Orinoquía Camera Traps dataset.

Build status

Python tests Python style checks Markdown style checks

Owner

  • Name: Google
  • Login: google
  • Kind: organization
  • Email: opensource@google.com
  • Location: United States of America

Google ❤️ Open Source

Citation (citation.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it using the metadata in this file."
authors:
  - family-names: Gadot
    given-names: Tomer
  - family-names: Istrate
    given-names: Ștefan
  - family-names: Kim
    given-names: Hyungwon
  - family-names: Morris
    given-names: Dan
  - family-names: Beery
    given-names: Sara
  - family-names: Birch
    given-names: Tanya
  - family-names: Ahumada
    given-names: Jorge
title: "To crop or not to crop: Comparing whole-image and cropped classification on a large dataset of camera trap images"
version: "1.0.0"
date-released: "2024-11-24"
publisher: "Wiley Online Library"
journal: "IET Computer Vision"
volume: "18"
issue: "8"
pages: "1193--1208"
year: "2024"
doi: "10.1049/cvi2.12318" 
type: software
keywords:
  - Camera traps
  - Conservation
  - Computer vision

GitHub Events

Total
  • Create event: 17
  • Issues event: 38
  • Watch event: 277
  • Delete event: 13
  • Member event: 2
  • Issue comment event: 93
  • Push event: 84
  • Public event: 1
  • Pull request review comment event: 6
  • Pull request review event: 6
  • Pull request event: 22
  • Fork event: 31
Last Year
  • Create event: 17
  • Issues event: 38
  • Watch event: 277
  • Delete event: 13
  • Member event: 2
  • Issue comment event: 93
  • Push event: 84
  • Public event: 1
  • Pull request review comment event: 6
  • Pull request review event: 6
  • Pull request event: 22
  • Fork event: 31

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 129
  • Total Committers: 7
  • Avg Commits per committer: 18.429
  • Development Distribution Score (DDS): 0.512
Past Year
  • Commits: 129
  • Committers: 7
  • Avg Commits per committer: 18.429
  • Development Distribution Score (DDS): 0.512
Top Committers
Name Email Commits
Ștefan Istrate s****e@g****m 63
Dan Morris a****s@g****m 53
Tomer Gadot t****g@g****m 4
Tanya Birch 4****h 4
Val. Lucet V****t 2
Timm Haucke t****m@h****z 2
CharlesCNorton 1****n 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 7 months ago

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 1,123 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 7
  • Total maintainers: 4
pypi.org: speciesnet

Tools for classifying species in images from motion-triggered wildlife cameras.

  • Homepage: https://github.com/google/cameratrapai
  • Documentation: https://speciesnet.readthedocs.io/
  • License: Apache License 2.0 (http://www.apache.org/licenses/)
Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. 
To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
  • Latest release: 5.0.1
    published 7 months ago
  • Versions: 7
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 1,123 last month
Rankings
  • Dependent packages count: 9.8%
  • Dependent repos count: 54.9%
  • Average: 32.4%
Last synced: 7 months ago

Dependencies

.github/workflows/markdown_style_checks.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
.github/workflows/python_style_checks.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
.github/workflows/python_tests.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
pyproject.toml pypi
  • absl-py *
  • cloudpathlib *
  • huggingface_hub *
  • humanfriendly *
  • kagglehub *
  • matplotlib *
  • numpy *
  • pandas *
  • pillow *
  • requests *
  • reverse_geocoder *
  • tensorflow >= 2.12, < 2.16 ; sys_platform != 'darwin' or platform_machine != 'arm64'
  • tensorflow-macos >= 2.12, < 2.15 ; sys_platform == 'darwin' and platform_machine == 'arm64'
  • tensorflow-metal ; sys_platform == 'darwin' and platform_machine == 'arm64'
  • torch >= 2.0
  • tqdm *
  • yolov5 >= 7.0.8, < 7.0.12
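The TensorFlow entries above are gated by PEP 508 environment markers, so pip resolves `tensorflow-macos` and `tensorflow-metal` only on Apple-silicon macOS and the standard `tensorflow` wheel everywhere else. A minimal sketch of how those markers evaluate, using the third-party `packaging` library (the same library pip uses internally; the example environment dict is illustrative, not taken from this repo):

```python
# Evaluate the PEP 508 environment markers that gate the TensorFlow
# dependencies in pyproject.toml for a hypothetical target platform.
from packaging.markers import Marker

# Markers copied from the dependency list above.
tf_marker = Marker("sys_platform != 'darwin' or platform_machine != 'arm64'")
macos_marker = Marker("sys_platform == 'darwin' and platform_machine == 'arm64'")

# On Apple-silicon macOS, only the tensorflow-macos/tensorflow-metal
# markers are satisfied; the plain tensorflow marker is not.
apple_silicon = {"sys_platform": "darwin", "platform_machine": "arm64"}
print(tf_marker.evaluate(apple_silicon))     # False
print(macos_marker.evaluate(apple_silicon))  # True

# On a typical Linux x86_64 host, the situation is reversed.
linux = {"sys_platform": "linux", "platform_machine": "x86_64"}
print(tf_marker.evaluate(linux))     # True
print(macos_marker.evaluate(linux))  # False
```

Because the two markers are logical negations of each other, exactly one TensorFlow distribution is installed on any given platform.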