zff_vad
Unsupervised Voice Activity Detection by Modeling Source and System Information using Zero Frequency Filtering
Science Score: 57.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: 1 DOI reference(s) found in README
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.1%) to scientific vocabulary
Keywords
audio-processing
machine-learning
noise-robust
signal-processing
speech-activity-detection
voice-activity-detection
Last synced: 6 months ago
Repository
Statistics
- Stars: 19
- Watchers: 6
- Forks: 1
- Open Issues: 0
- Releases: 1
- Created: about 3 years ago
- Last pushed: over 2 years ago
Metadata Files
- Readme
- License
- Citation
README.rst
================================================================================================================
ZFF VAD
================================================================================================================
[Paper_] [Poster_] [Video_] [Slides_]
|License| |OpenSource| |BlackFormat| |BanditSecurity| |iSortImports|
.. image:: img/figure.jpg
   :alt: Pipeline
Unsupervised Voice Activity Detection by Modeling Source and System Information using Zero Frequency Filtering
---------------------------------------------------------------------------------------------------------------
This repository contains the code developed for the paper accepted at Interspeech 2022: `Unsupervised Voice Activity Detection by Modeling Source and System Information using Zero Frequency Filtering`__ by E. Sarkar, R. Prasad, and M. Magimai Doss (2022).
Please cite the original authors in any publication that uses this work:
.. code:: bib

   @inproceedings{sarkar22_interspeech,
     author    = {Eklavya Sarkar and RaviShankar Prasad and Mathew Magimai Doss},
     title     = {{Unsupervised Voice Activity Detection by Modeling Source and System Information using Zero Frequency Filtering}},
     year      = {2022},
     booktitle = {Proc. Interspeech 2022},
     pages     = {4626--4630},
     doi       = {10.21437/Interspeech.2022-10535}
   }
Approach
---------
We jointly model voice source and vocal tract system information using the zero-frequency filtering (ZFF) technique for the purpose of voice activity detection. The ZFF filter outputs are combined into a composite signal that carries salient source and system information, such as the fundamental frequency :math:`f_0` and the formants :math:`F_1` and :math:`F_2`; a dynamic threshold is then applied after spectral entropy-based weighting. Our approach operates purely in the time domain, is robust across a range of SNRs, and is considerably more computationally efficient than neural methods.
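The filtering idea above can be sketched in a few lines of NumPy. This is only an illustrative approximation under stated assumptions, not the package's implementation: the function name, window length, and mean-subtraction trend removal are our choices.

```python
import numpy as np

def zff_sketch(x, sr, win_ms=10.0):
    """Illustrative zero-frequency filtering (not the package's code).

    Passes the differenced signal twice through a resonator with a
    double pole at 0 Hz (equivalent to repeated cumulative summation),
    then removes the slowly varying trend with a local-mean subtraction.
    """
    # Difference the signal to remove any DC bias in the recording
    d = np.diff(x, prepend=x[0])
    # Each zero-frequency resonator pass is a double integration;
    # two passes -> four cumulative sums.
    y = d
    for _ in range(2):
        y = np.cumsum(np.cumsum(y))
    # Trend removal over a short window (roughly 1-2 pitch periods)
    n = int(win_ms * 1e-3 * sr) | 1          # odd window length
    trend = np.convolve(y, np.ones(n) / n, mode="same")
    return y - trend

sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 120 * t)          # synthetic "voiced" tone
zfs = zff_sketch(audio, sr)
print(zfs.shape)                             # same length as the input
```

The repeated integration is why the trend-removal step is essential: without it the output is dominated by a polynomial drift rather than the source information.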
Installation
------------
This package has very few requirements.
To create a new conda/mamba environment, install conda_ and then mamba_, and follow these steps:
.. code:: bash

   mamba env create -f environment.yml  # Create environment
   conda activate zff                   # Activate environment
   make install clean                   # Install packages
Command-line Usage
-------------------
To segment a single audio file into a .csv file:
.. code:: bash

   segment -w path/to/audio.wav -o path/to/save/segments
To segment a folder of audio files:
.. code:: bash

   segment -f path/to/folder/of/audio/files -o path/to/save/segments
For more options check:
.. code:: bash

   segment -h
*Note*: depending on the conditions of the given data, it may be necessary to tune the smoothing and theta parameters.
Python Usage
-------------
To compute VAD on a given audio file:
.. code:: python

   from zff import utils
   from zff.zff import zff_vad

   # Read audio at native sampling rate
   sr, audio = utils.load_audio("audio.wav")

   # Get segments
   boundary = zff_vad(audio, sr)

   # Smooth
   boundary = utils.smooth_decision(boundary, sr)

   # Convert from sample to time domain
   segments = utils.sample2time(audio, sr, boundary)

   # Save as .csv file
   utils.save_segments("segments", "audio", segments)
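For reference, the sample-to-time conversion step can be emulated with plain NumPy. This is a hypothetical stand-in for ``utils.sample2time``; the package's actual signature and return format may differ.

```python
import numpy as np

def boundary_to_segments(boundary, sr):
    """Convert a per-sample 0/1 voice-activity decision into a list of
    (start_sec, end_sec) segments. Illustrative stand-in only."""
    b = np.asarray(boundary, dtype=int)
    # Pad with zeros so edges at the very start/end are detected too
    edges = np.flatnonzero(np.diff(np.concatenate(([0], b, [0]))))
    starts, ends = edges[0::2], edges[1::2]
    return [(s / sr, e / sr) for s, e in zip(starts, ends)]

print(boundary_to_segments([0, 1, 1, 0, 0, 1, 1, 1], sr=4))
# → [(0.25, 0.75), (1.25, 2.0)]
```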
To extract the composite signal from a given audio file:
.. code:: python

   from zff import utils
   from zff.zff import zff_cs

   # Read audio at native sampling rate
   sr, audio = utils.load_audio("audio.mp3")

   # Get composite signal
   composite = zff_cs(audio, sr)

   # Get all signals
   composite, y0, y1, y2, gcis = zff_cs(audio, sr, verbose=True)
Repository Structure
-----------------------------
.. code:: bash

   .
   ├── environment.yml      # Environment
   ├── img                  # Images
   ├── LICENSE              # License
   ├── Makefile             # Setup
   ├── MANIFEST.in          # Setup
   ├── pyproject.toml       # Setup
   ├── README.rst           # README
   ├── requirements.txt     # Setup
   ├── setup.py             # Setup
   ├── version.txt          # Version
   └── zff                  # Source code folder
       ├── arguments.py     # Arguments parser
       ├── segment.py       # Main method
       ├── utils.py         # Utility methods
       └── zff.py           # ZFF methods
Contact
-------
For questions or to report issues with this software package, please contact the first author_.
.. _author: mailto:eklavya.sarkar@idiap.ch
.. _Paper: https://www.isca-speech.org/archive/interspeech_2022/sarkar22_interspeech.html
.. _Poster: https://eklavyafcb.github.io/docs/Sarkar_Interspeech_2022_Poster_Landscape.pdf
.. _Video: https://youtu.be/hIHLu_7ESfM
.. _Slides: https://eklavyafcb.github.io/docs/Sarkar_Interspeech_2022_Presentation.pdf
.. _conda: https://conda.io
.. _mamba: https://mamba.readthedocs.io/en/latest/installation.html#existing-conda-install
__ https://www.isca-speech.org/archive/interspeech_2022/sarkar22_interspeech.html
.. |License| image:: https://img.shields.io/badge/License-GPLv3-blue.svg
   :target: https://github.com/idiap/ZFF_VAD/blob/master/LICENSE
   :alt: License

.. |OpenSource| image:: https://img.shields.io/badge/GitHub-Open%20source-green
   :target: https://github.com/idiap/ZFF_VAD/
   :alt: Open-Source

.. |BlackFormat| image:: https://img.shields.io/badge/code%20style-black-000000.svg
   :target: https://github.com/psf/black
   :alt: Style

.. |BanditSecurity| image:: https://img.shields.io/badge/security-bandit-yellow.svg
   :target: https://github.com/PyCQA/bandit
   :alt: Security

.. |iSortImports| image:: https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336
   :target: https://pycqa.github.io/isort
   :alt: Imports
Owner
- Name: Idiap Research Institute
- Login: idiap
- Kind: organization
- Location: Centre du Parc, Martigny, Switzerland
- Website: http://www.idiap.ch
- Repositories: 73
- Profile: https://github.com/idiap
Citation (CITATION.cff)

cff-version: 1.1.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Sarkar
    given-names: Eklavya
  - family-names: Prasad
    given-names: RaviShankar
  - family-names: Magimai.-Doss
    given-names: Mathew
title: Unsupervised Voice Activity Detection by Modeling Source and System Information using Zero Frequency Filtering
doi:
version: v0.1.0
date-released: 2023-10-16
GitHub Events
Total
- Watch event: 2
Last Year
- Watch event: 2
Issues and Pull Requests
Last synced: over 1 year ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0