opensoundscape

Open source, scalable software for the analysis of bioacoustic recordings

https://github.com/kitzeslab/opensoundscape

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
  • Committers with academic emails
    6 of 17 committers (35.3%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.5%) to scientific vocabulary
Last synced: 8 months ago

Repository

Open source, scalable software for the analysis of bioacoustic recordings

Basic Info
  • Host: GitHub
  • Owner: kitzeslab
  • License: mit
  • Language: Python
  • Default Branch: master
  • Homepage: http://opensoundscape.org
  • Size: 456 MB
Statistics
  • Stars: 178
  • Watchers: 8
  • Forks: 23
  • Open Issues: 107
  • Releases: 16
Created over 7 years ago · Last pushed 8 months ago
Metadata Files
Readme License

README.md

OpenSoundscape

CI Status Documentation Status

OpenSoundscape (OPSO) is a free and open source Python utility library for analyzing bioacoustic data.

OpenSoundscape includes utilities which can be strung together to create data analysis pipelines, including functions to:

  • load and manipulate audio files
  • create and manipulate spectrograms
  • train deep learning models to recognize sounds
  • run pre-trained CNNs to detect vocalizations
  • tune pre-trained CNNs to custom classification tasks
  • detect periodic vocalizations with RIBBIT
  • load and manipulate Raven annotations
  • estimate the location of sound sources from synchronized recordings
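RIBBIT, listed above, detects periodic vocalizations by scoring how strongly a clip's band-limited amplitude envelope pulses within a plausible pulse-rate range. A minimal, self-contained sketch of that idea (this is not the library's implementation; `pulse_rate_score` and its parameters are illustrative):

```python
import numpy as np

def pulse_rate_score(envelope, sr, pulse_rate_range):
    """Find the dominant pulse rate of an amplitude envelope within a band.

    envelope: 1D array of band-limited amplitude over time
    sr: samples per second of the envelope
    pulse_rate_range: (low_hz, high_hz) of plausible pulse rates
    """
    spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), d=1 / sr)
    in_band = (freqs >= pulse_rate_range[0]) & (freqs <= pulse_rate_range[1])
    peak = np.argmax(np.where(in_band, spectrum, 0.0))
    # return the strongest in-band pulse rate and its share of total energy
    return freqs[peak], spectrum[peak] / (spectrum.sum() + 1e-12)

# synthetic envelope: bursts repeating 10 times per second for 2 seconds
t = np.arange(0, 2, 1 / 100)
env = (np.sin(2 * np.pi * 10 * t) > 0.9).astype(float)
rate, score = pulse_rate_score(env, sr=100, pulse_rate_range=(5, 20))
```

The library's RIBBIT function operates on spectrograms and supports noise bands and clip-wise scoring; see the documentation for the real API.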

OpenSoundscape's documentation can be found on OpenSoundscape.org.

Show me the code!

For examples of how to use OpenSoundscape, see the Quick Start Guide below.

For full API documentation and tutorials on how to use OpenSoundscape to work with audio and spectrograms, train machine learning models, apply trained machine learning models to acoustic data, and detect periodic vocalizations using RIBBIT, see the documentation.

Contact & Citation

OpenSoundscape is developed and maintained by the Kitzes Lab at the University of Pittsburgh and is in active development. If you find a bug, please submit an issue on the GitHub repository. For other questions about OpenSoundscape, please use the [OpenSoundscape Discussions board](https://github.com/kitzeslab/opensoundscape/discussions) or email Sam Lapp (sam.lapp at pitt.edu).

Suggested citation:

Lapp, Sam; Rhinehart, Tessa; Freeland-Haynes, Louis; 
Khilnani, Jatin; Syunkova, Alexandra; Kitzes, Justin. 
“OpenSoundscape: An Open-Source Bioacoustics Analysis Package for Python.” 
Methods in Ecology and Evolution 2023. https://doi.org/10.1111/2041-210X.14196.

Quick Start Guide

A guide to the most commonly used features of OpenSoundscape.

Installation

Details about installation are available on the OpenSoundscape documentation at OpenSoundscape.org. FAQs:

How do I install OpenSoundscape?

  • Most users should install OpenSoundscape via pip, preferably within a virtual environment: pip install opensoundscape==0.12.1.
  • To use OpenSoundscape in Jupyter Notebooks (e.g. for tutorials), follow the installation instructions for your operating system, then follow the "Jupyter" instructions.
  • Contributors and advanced users can also use Poetry to install OpenSoundscape using the "Contributor" instructions

Will OpenSoundscape work on my machine?

  • OpenSoundscape can be installed on Windows, Mac, and Linux machines.
  • For Windows users, we strongly recommend using WSL2, which provides a Linux environment for a smoother installation and development experience
  • We support Python 3.10, 3.11, 3.12, and 3.13 (but current GitHub runners only test on Python 3.13)
  • Most computer cluster users should follow the Linux installation instructions
  • For older Macs (Intel chip), use this workaround since newer PyTorch versions are not found by pip (replace NAME with the desired name of your environment):

```
conda create -n NAME python=3.11
conda activate NAME
conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 -c conda-forge
pip install opensoundscape==0.12.1
```

Use Audio and Spectrogram classes to inspect audio data

```python
from opensoundscape import Audio, Spectrogram

# load an audio file and trim out a 5 second clip
my_audio = Audio.from_file("/path/to/audio.wav")
clip_5s = my_audio.trim(0, 5)

# create a spectrogram and plot it
my_spec = Spectrogram.from_audio(clip_5s)
my_spec.plot()
```

Load audio starting at a real-world timestamp

```python
from datetime import datetime
import pytz
from opensoundscape import Audio

start_time = pytz.timezone('UTC').localize(datetime(2020, 4, 4, 10, 25))
audio_length = 5  # seconds
path = '/path/to/audiomoth_file.WAV'  # an AudioMoth recording

Audio.from_file(path, start_timestamp=start_time, duration=audio_length)
```

Load and use a model from the Bioacoustics Model Zoo

The Bioacoustics Model Zoo hosts models that are compatible with OpenSoundscape and can be installed as a Python package. To install, use pip install bioacoustics-model-zoo==0.12.0

Load up a model and apply it to your own audio right away:

```python
import bioacoustics_model_zoo as bmz

# list available models
print(bmz.utils.list_models())

# assume `files` is a list of audio file paths

# generate class predictions and embedding vectors with Perch
perch = bmz.Perch()
scores = perch.predict(files)
embeddings = perch.generate_embeddings(files)

# ...or BirdNET
birdnet = bmz.BirdNET()
scores = birdnet.predict(files)
embeddings = birdnet.generate_embeddings(files)
```

See the tutorial notebooks for examples of training and fine-tuning models from the model zoo with your own annotations.

Load a pre-trained CNN from a local file, and make predictions on long audio files

```python
from glob import glob
from opensoundscape import load_model

# get list of audio files
files = glob('./dir/*.WAV')

# generate predictions with a model
model = load_model('/path/to/saved.model')
scores = model.predict(files)

# scores is a dataframe with MultiIndex: file, start_time, end_time,
# containing inference scores for each class and each audio window
```
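The scores frame described above can be filtered with ordinary pandas operations. A sketch with a toy frame shaped like that output (the file names and score values below are fabricated for illustration):

```python
import pandas as pd

# rows indexed by (file, start_time, end_time), one score column per class
idx = pd.MultiIndex.from_tuples(
    [("a.WAV", 0.0, 2.0), ("a.WAV", 2.0, 4.0), ("b.WAV", 0.0, 2.0)],
    names=["file", "start_time", "end_time"],
)
scores = pd.DataFrame({"IBWO": [0.1, 0.9, 0.3], "BLJA": [0.2, 0.1, 0.8]}, index=idx)

# keep only the windows where a class's score exceeds a threshold
ibwo_detections = scores[scores["IBWO"] > 0.5]
```

The surviving index tuples identify which audio windows to review.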

Train a CNN using audio files and Raven annotations

```python
from sklearn.model_selection import train_test_split
from opensoundscape import BoxedAnnotations, CNN

# assume we have a list of Raven annotation files and corresponding audio files
# load the annotations into OpenSoundscape
all_annotations = BoxedAnnotations.from_raven_files(raven_file_paths, audio_file_paths)

# pick classes to train the model on. These should occur in the annotated data
class_list = ['IBWO', 'BLJA']

# create labels for fixed-duration (2 second) clips
labels = all_annotations.clip_labels(
    clip_duration=2,
    clip_overlap=0,
    min_label_overlap=0.25,
    class_subset=class_list,
)

# split the labels into training and validation sets
train_df, validation_df = train_test_split(labels, test_size=0.3)

# create a CNN and train on the labeled data
model = CNN(architecture='resnet18', sample_duration=2, classes=class_list)

# train the model to recognize the classes of interest in audio data
model.train(train_df, validation_df, epochs=20, num_workers=8, batch_size=256)
```

Train a custom classifier on BirdNET or Perch embeddings

Make sure you've installed the model zoo in your Python environment:

pip install bioacoustics-model-zoo==0.12.0

```python
import bioacoustics_model_zoo as bmz

# load a model from the model zoo
model = bmz.BirdNET()  # or bmz.Perch()

# define classes for your custom classifier
model.change_classes(train_df.columns)

# fit the trainable PyTorch classifier on your labels
model.train(train_df, val_df, num_augmentation_variants=4, batch_size=64)

# run inference using your custom classifier on audio data
model.predict(audio_files)

# save and load customized models
model.save(save_path)
reloaded_model = bmz.BirdNET.load(save_path)
```

Owner

  • Name: kitzeslab
  • Login: kitzeslab
  • Kind: organization

GitHub Events

Total
  • Create event: 15
  • Release event: 2
  • Issues event: 80
  • Watch event: 45
  • Delete event: 11
  • Issue comment event: 53
  • Push event: 66
  • Pull request review comment event: 6
  • Pull request review event: 2
  • Pull request event: 30
  • Fork event: 4
Last Year
  • Create event: 15
  • Release event: 2
  • Issues event: 80
  • Watch event: 45
  • Delete event: 11
  • Issue comment event: 53
  • Push event: 66
  • Pull request review comment event: 6
  • Pull request review event: 2
  • Pull request event: 30
  • Fork event: 4

Committers

Last synced: 8 months ago

All Time
  • Total Commits: 1,922
  • Total Committers: 17
  • Avg Commits per committer: 113.059
  • Development Distribution Score (DDS): 0.416
Past Year
  • Commits: 213
  • Committers: 3
  • Avg Commits per committer: 71.0
  • Development Distribution Score (DDS): 0.08
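The Development Distribution Score reported above is conventionally computed as one minus the top committer's share of commits (this definition is an assumption about the metric, but it reproduces the all-time figure from the tables here):

```python
# Development Distribution Score (DDS): 1 minus the share of commits made by
# the single most prolific committer; higher means less concentrated work.
def dds(top_committer_commits, total_commits):
    return 1 - top_committer_commits / total_commits

# all-time figures from the table below: 1,123 of 1,922 commits by the top committer
all_time_dds = round(dds(1123, 1922), 3)
```

This matches the reported all-time score of 0.416.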
Top Committers
Name Email Commits
sammlapp s****p@g****m 1,123
rhine3 t****t@g****m 353
Louis Freeland-Haynes 6****h 144
Barry Moore c****l@g****m 104
Santiago Ruiz Guzman s****1@p****u 50
syunkova s****a@g****m 33
Lapp s****1@r****u 31
Jatin Khilnani j****3@n****u 30
Justin Kitzes j****s@p****u 17
LeonardoViotti l****i@g****m 17
Alexandra Syunkova s****h@A****l 6
Zohar j****2@p****u 4
Jatin Khilnani jk@n****l 3
Freeland-Haynes L****9@F****l 2
sar541 s****z@g****m 2
Lapp S****1@d****t 2
ter38 t****8@l****u 1

Issues and Pull Requests

Last synced: 8 months ago

All Time
  • Total issues: 337
  • Total pull requests: 193
  • Average time to close issues: 7 months
  • Average time to close pull requests: 29 days
  • Total issue authors: 26
  • Total pull request authors: 10
  • Average comments per issue: 1.18
  • Average comments per pull request: 0.27
  • Merged pull requests: 125
  • Bot issues: 0
  • Bot pull requests: 40
Past Year
  • Issues: 68
  • Pull requests: 40
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 8 days
  • Issue authors: 13
  • Pull request authors: 4
  • Average comments per issue: 0.34
  • Average comments per pull request: 0.15
  • Merged pull requests: 15
  • Bot issues: 0
  • Bot pull requests: 16
Top Authors
Issue Authors
  • sammlapp (225)
  • louisfh (55)
  • rhine3 (17)
  • syunkova (8)
  • paulpeyret-biophonia (5)
  • jatinkhilnani (3)
  • lmc150 (3)
  • Maxime-Bru (2)
  • lydiakatsis (2)
  • fascimare (1)
  • w-out (1)
  • jhuus (1)
  • smholmes3 (1)
  • Mgallimore88 (1)
  • AdamVarley30 (1)
Pull Request Authors
  • sammlapp (104)
  • dependabot[bot] (40)
  • louisfh (22)
  • syunkova (8)
  • LeonardoViotti (6)
  • sanruizguz (6)
  • rhine3 (3)
  • bmford (2)
  • jkitzes (1)
  • indranil1 (1)
Top Labels
Issue Labels
resolved_in_develop (149) feature request (54) bug (46) docs (31) module:ml (22) module:localization (19) discuss (16) high priority (15) module:preprocessing (14) module:annotation (10) deprecation (9) resolved_in_branch (7) dependencies (6) performance (5) in_progress (4) rename (4) testing (3) module:wandb (2) system:mps (2) good first issue (1) blocked (1) wontfix (1)
Pull Request Labels
dependencies (40) python (16) resolved_in_develop (9)

Packages

  • Total packages: 1
  • Total downloads: unknown
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 23
proxy.golang.org: github.com/kitzeslab/opensoundscape
  • Versions: 23
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.4%
Average: 5.6%
Dependent repos count: 5.8%
Last synced: 8 months ago

Dependencies

docs/requirements.txt pypi
  • docutils <0.18
  • ipykernel *
  • m2r *
  • nbsphinx *
  • sphinx >=1.4
.github/workflows/docker.yml actions
  • actions/checkout v2 composite
.github/workflows/poetry.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
Dockerfile docker
  • python 3.7-slim build