project-classify-fish-sounds

An archive repo with a collection of data, scripts, notebooks, and a model to detect fish sounds from spectrograms.

https://github.com/axiom-data-science/project-classify-fish-sounds

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.1%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: axiom-data-science
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Size: 22 MB
Statistics
  • Stars: 3
  • Watchers: 4
  • Forks: 1
  • Open Issues: 0
  • Releases: 1
Created over 3 years ago · Last pushed about 2 years ago
Metadata Files
Readme · License · Citation · Zenodo

README.md

Fish Sound Detector

DOI

This repo contains a collection of data, a Python library for working with Raven annotation files and generating spectrograms, and a model that detects fish vocalizations in spectrograms generated from hydrophone recordings. It was created as part of a collaboration between Mote Marine Laboratory & Aquarium, the Southeast Coastal Ocean Observing Regional Association, and Axiom Data Science.

Components

Data

Included in the repo is a training set of spectrograms created from an annotated dataset of fish vocalizations labeled by domain experts and volunteers. The provided annotation files are located in data/acoustic-data-annotations and are summarized in data/acoustic-data-annotations/mote-samples.csv. See data/README.md for more information.
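The columns of mote-samples.csv aren't documented in this README, so as a rough sketch (the `species` label column below is hypothetical; substitute whatever label column the summary actually uses), the class balance of such an annotation summary can be inspected with only the standard library:

```python
import csv
from collections import Counter

def label_counts(csv_path, label_column="species"):
    """Count annotations per class in a summary CSV.

    `label_column` is a hypothetical column name; replace it with
    the actual label column used in mote-samples.csv.
    """
    with open(csv_path, newline="") as f:
        reader = csv.DictReader(f)
        return Counter(row[label_column] for row in reader)
```

A heavily skewed count here is exactly the situation the training notebook addresses by under- and oversampling classes.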

Helper scripts / library

A Python package composed of various helpful scripts was created to reorganize and standardize the provided raw data and to generate training sets from which detector models could be trained. The package can be installed via pip, e.g.

src/acoustic-tools> pip install -e .
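The package's actual API isn't shown in this README, but conceptually, turning a hydrophone waveform into a spectrogram is a short-time Fourier transform. A minimal numpy sketch of that operation (not the package's real code):

```python
import numpy as np

def spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a short-time Fourier transform.

    Slides a Hann-windowed FFT along the signal and returns an
    array of shape (n_fft // 2 + 1, n_frames).
    """
    window = np.hanning(n_fft)
    frames = [
        np.fft.rfft(window * signal[start:start + n_fft])
        for start in range(0, len(signal) - n_fft + 1, hop)
    ]
    return np.abs(np.array(frames)).T
```

Libraries such as librosa (listed in the project's requirements) provide the same operation with mel scaling and dB conversion built in.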

Notebooks

A Jupyter notebook, train-resetnet101-fastai.ipynb, demonstrates how to train a neural network (a ResNet101 implemented with fast.ai in the example) to detect fish sounds using the provided labeled data. The model reaches an accuracy of ~0.875 after training for 25 epochs on a selected subset of the annotated data that undersamples classes with many examples and oversamples classes with few examples.
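The notebook's exact resampling code isn't reproduced in this README; the under/oversampling idea it describes can be sketched in plain Python (the `target` count per class is an arbitrary choice here):

```python
import random
from collections import defaultdict

def balance_samples(items, labels, target, seed=0):
    """Resample so every class contributes exactly `target` examples.

    Classes with more than `target` items are randomly undersampled;
    classes with fewer are oversampled by drawing with replacement.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for item, label in zip(items, labels):
        by_class[label].append(item)
    balanced = []
    for label, group in by_class.items():
        if len(group) >= target:
            chosen = rng.sample(group, target)     # undersample
        else:
            chosen = rng.choices(group, k=target)  # oversample
        balanced.extend((item, label) for item in chosen)
    return balanced
```

Training on a set balanced this way keeps abundant call types from dominating the loss while still exposing the model to rare ones.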

Models

The model created in the notebook train-resetnet101-fastai.ipynb is saved in models and is available from Huggingface Hub (models/axds/classify-fish-sounds).

Demo

A running demo of the model is available on Huggingface Spaces (src/classify-fish-sounds).

Owner

  • Name: Axiom Data Science
  • Login: axiom-data-science
  • Kind: organization
  • Location: United States

Citation (CITATION.cff)

cff-version: 1.1.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Lopez
    given-names: Jesse
    orcid: https://orcid.org/0000-0002-6450-6209
title: axiom-data-science/project-classify-fish-sounds
date-released: 2022-06-24

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Dependencies

src/acoustic-tools/requirements.txt pypi
  • click *
  • fastai *
  • librosa *
  • matplotlib *
  • pandas *
  • pydub *
  • torchaudio *
src/Dockerfile docker
  • debian bullseye-slim build
src/acoustic-tools/setup.py pypi