sslh

Deep Semi-Supervised Learning with Holistic methods for audio classification.

https://github.com/labbeti/sslh

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 5 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.9%) to scientific vocabulary

Keywords

audio-classification deep-learning machine-learning pytorch pytorch-lightning semi-supervised
Last synced: 6 months ago

Repository

Deep Semi-Supervised Learning with Holistic methods for audio classification.

Basic Info
Statistics
  • Stars: 10
  • Watchers: 2
  • Forks: 1
  • Open Issues: 0
  • Releases: 19
Topics
audio-classification deep-learning machine-learning pytorch pytorch-lightning semi-supervised
Created over 5 years ago · Last pushed about 1 year ago
Metadata Files
Readme · Changelog · License · Citation

README.md

# Deep Semi-Supervised Learning with Holistic methods (SSLH)

Unofficial PyTorch and PyTorch-Lightning implementations of Deep Semi-Supervised Learning methods for audio tagging.

There are 4 SSL methods:

- FixMatch (FM) [1]
- MixMatch (MM) [2]
- ReMixMatch (RMM) [3]
- Unsupervised Data Augmentation (UDA) [4]

For the following datasets:

- CIFAR-10 (CIFAR10)
- ESC-10 (ESC10)
- Google Speech Commands (GSC)
- Primate Vocalization Corpus (PVC)
- UrbanSound8k (UBS8K)

With 3 models:

- WideResNet28 (WRN28)
- MobileNetV1 (MNV1)
- MobileNetV2 (MNV2)

IMPORTANT NOTE: The implementations of Mean Teacher (MT), Deep Co-Training (DCT) and Pseudo-Labeling (PL) are present in this repository but not fully tested.

You can find a more stable version of MT and DCT at https://github.com/Labbeti/semi-supervised. The datasets AudioSet and FSD50K are not officially supported.

If you encounter problems running the experiments, you can contact me at labbeti.pub@gmail.com.

Installation

Download & setup

```bash
git clone https://github.com/Labbeti/SSLH
conda env create -n env_sslh -f environment.yaml
conda activate env_sslh
pip install -e SSLH --no-dependencies
```

Alternatives

- As a Python package:

  ```bash
  pip install git+https://github.com/Labbeti/SSLH
  ```

  The dependencies will be installed automatically with pip instead of conda, which means the build versions can differ slightly.

The project also contains an environment.yaml and a requirements.txt for installing the packages with conda or pip, respectively.

- With the conda environment file:

  ```bash
  conda env create -n env_sslh -f environment.yaml
  conda activate env_sslh
  pip install -e . --no-dependencies
  ```

- With the pip requirements file:

  ```bash
  pip install -r requirements.txt
  pip install -e . --no-dependencies
  ```

Datasets

CIFAR10, ESC10, GoogleSpeechCommands and FSD50K can be downloaded and installed automatically. For UrbanSound8k, please read the README of leocances, section "Prepare the dataset". AudioSet (ADS) and the Primate Vocalize Corpus (PVC) cannot currently be installed automatically.

To download a dataset, you can use the data.dm.download=true option, as in the sketch below.
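A minimal sketch, assuming the sup_esc10 config and the sslh.supervised entry point shown in the Usage section below:

```bash
# Download ESC10 (if missing) and run a supervised training on it
python -m sslh.supervised data=sup_esc10 data.dm.download=true
```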


Usage

This code uses Hydra for parsing arguments. The syntax for setting an argument is "name=value" instead of "--name value".

Example 1: MixMatch on ESC10

```bash
python -m sslh.mixmatch data=ssl_esc10 data.dm.download=true
```

Example 2: Supervised+Weak on GSC

```bash
python -m sslh.supervised data=sup_gsc aug@train_aug=weak data.dm.bsize=256 epochs=300 data.dm.download=true
```

Example 3: FixMatch+MixUp on UBS8K

```bash
python -m sslh.fixmatch data=ssl_ubs8K pl=fixmatch_mixup data.dm.bsize_s=128 data.dm.bsize_u=128 epochs=300 data.dm.download=true
```

Example 4: ReMixMatch on CIFAR-10

```bash
python -m sslh.remixmatch data=ssl_cifar10 model.n_input_channels=3 aug@weak_aug=img_weak aug@strong_aug=img_strong data.dm.download=true
```

List of main arguments

| Name | Description | Values | Default |
| --- | --- | --- | --- |
| data | Dataset used | (sup\|ssl)_(ads\|cifar10\|esc10\|fsd50k\|gsc\|pvc\|ubs8k) | (sup\|ssl)_esc10 |
| pl | PyTorch Lightning training method (experiment) used | (depends on the Python script; see the filenames in the config/pl/ folder) | (depends on the Python script) |
| model | PyTorch model to use | mobilenetv1, mobilenetv2, vgg, wideresnet28 | wideresnet28 |
| optim | Optimizer used | adam, sgd | adam |
| sched | Learning rate scheduler | cosine, softcosine, none | softcosine |
| epochs | Number of training epochs | int | 1 |
| bsize | Batch size in SUP methods | int | 60 |
| ratio | Ratio of the training data used in SUP methods | float in [0, 1] | 1.0 |
| bsize_s | Batch size of the supervised part in SSL methods | int | 30 |
| bsize_u | Batch size of the unsupervised part in SSL methods | int | 30 |
| ratio_s | Ratio of the supervised training data used in SSL methods | float in [0, 1] | 0.1 |
| ratio_u | Ratio of the unsupervised training data used in SSL methods | float in [0, 1] | 0.9 |
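These options compose in a single command. A hedged sketch combining arguments from the table, assuming the batch-size keys take the data.dm. prefix used in the usage examples above:

```bash
# Hypothetical run: supervised GSC training with MobileNetV2, SGD and a cosine schedule
python -m sslh.supervised data=sup_gsc model=mobilenetv2 optim=sgd sched=cosine epochs=100 data.dm.bsize=128
```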

SSLH Package overview

```
sslh
├── callbacks
├── datamodules
│   ├── supervised
│   └── semi_supervised
├── datasets
├── pl_modules
│   ├── deep_co_training
│   ├── fixmatch
│   ├── mean_teacher
│   ├── mixmatch
│   ├── mixup
│   ├── pseudo_labeling
│   ├── remixmatch
│   ├── supervised
│   └── uda
├── metrics
├── models
├── transforms
│   ├── get
│   ├── image
│   ├── other
│   ├── pools
│   ├── self_transforms
│   ├── spectrogram
│   └── waveform
└── utils
```
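In code, each of these directories is an importable sub-package. A minimal sketch; the comments paraphrase the tree above, and the exact symbols exported by each module are assumptions:

```python
# Top-level sub-packages of sslh (see the tree above).
from sslh import callbacks    # training callbacks
from sslh import datamodules  # supervised / semi-supervised data modules
from sslh import models       # WideResNet28, MobileNetV1/V2 architectures
from sslh import pl_modules   # one PyTorch-Lightning module per training method
from sslh import transforms   # waveform, spectrogram and image augmentations
```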

Authors

This repository was created by Etienne Labbé (Labbeti on GitHub).

It also contains code from the following authors:

- Léo Cancès (leocances on GitHub): base code for the AudioSet, ESC10, GSC, PVC and UBS8K datasets.
- Qiuqiang Kong (qiuqiangkong on GitHub): the MobileNetV1 & V2 model implementations from PANN.

Additional notes

  • This project was developed with Ubuntu 20.04 and Python 3.8.5.

Glossary

| Acronym | Description |
| --- | --- |
| activation | Activation function |
| ADS | AudioSet |
| aug, augm, augment | Augmentation |
| ce | Cross-Entropy |
| expt | Experiment |
| fm | FixMatch |
| fn, func | Function |
| GSC | Google Speech Commands dataset (with 35 classes) |
| GSC12 | Google Speech Commands dataset (with 10 classes from GSC, 1 unknown class and 1 silence class) |
| hparams | Hyperparameters |
| js | Jensen-Shannon |
| kl | Kullback-Leibler |
| loc | Localisation |
| lr | Learning Rate |
| mm | MixMatch |
| mse | Mean Squared Error |
| pred | Prediction |
| PVC | Primate Vocalize Corpus dataset |
| rmm | ReMixMatch |
| _s | Supervised |
| sched | Scheduler |
| SSL | Semi-Supervised Learning |
| SUP | Supervised Learning |
| _u | Unsupervised |
| UBS8K | UrbanSound8K dataset |

References

[1] K. Sohn, D. Berthelot, C.-L. Li, Z. Zhang, N. Carlini, E. D. Cubuk, A. Kurakin, H. Zhang, and C. Raffel, “FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence,” Jan. 2020, arXiv:2001.07685 [cs, stat]. [Online]. Available: http://arxiv.org/abs/2001.07685

[2] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. Raffel, “MixMatch: A Holistic Approach to Semi-Supervised Learning,” Oct. 2019, arXiv:1905.02249 [cs, stat]. [Online]. Available: http://arxiv.org/abs/1905.02249

[3] D. Berthelot, N. Carlini, E. D. Cubuk, A. Kurakin, K. Sohn, H. Zhang, and C. Raffel, “ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring,” Feb. 2020, arXiv:1911.09785 [cs, stat]. [Online]. Available: http://arxiv.org/abs/1911.09785

[4] Q. Xie, Z. Dai, E. Hovy, M.-T. Luong, and Q. V. Le, “Unsupervised Data Augmentation for Consistency Training,” Nov. 2020, arXiv:1904.12848 [cs, stat]. [Online]. Available: http://arxiv.org/abs/1904.12848

Cite this repository

If you use this code, you can cite the associated paper:

```bibtex
@article{cances_comparison_2022,
    title = {Comparison of semi-supervised deep learning algorithms for audio classification},
    author = {Cances, Léo and Labbé, Etienne and Pellegrini, Thomas},
    year = 2022,
    month = sep,
    journal = {EURASIP Journal on Audio, Speech, and Music Processing},
    volume = 2022,
    number = 1,
    pages = 23,
    doi = {10.1186/s13636-022-00255-6},
    issn = {1687-4722},
    url = {https://doi.org/10.1186/s13636-022-00255-6},
    abstract = {In this article, we adapted five recent SSL methods to the task of audio
        classification. The first two methods, namely Deep Co-Training (DCT) and Mean
        Teacher (MT), involve two collaborative neural networks. The three other
        algorithms, called MixMatch (MM), ReMixMatch (RMM), and FixMatch (FM), are
        single-model methods that rely primarily on data augmentation strategies. Using
        the Wide-ResNet-28-2 architecture in all our experiments, 10\% of labeled data
        and the remaining 90\% as unlabeled data for training, we first compare the
        error rates of the five methods on three standard benchmark audio datasets:
        Environmental Sound Classification (ESC-10), UrbanSound8K (UBS8K), and Google
        Speech Commands (GSC). In all but one cases, MM, RMM, and FM outperformed MT
        and DCT significantly, MM and RMM being the best methods in most experiments.
        On UBS8K and GSC, MM achieved 18.02\% and 3.25\% error rate (ER), respectively,
        outperforming models trained with 100\% of the available labeled data, which
        reached 23.29\% and 4.94\%, respectively. RMM achieved the best results on
        ESC-10 (12.00\% ER), followed by FM which reached 13.33\%. Second, we explored
        adding the mixup augmentation, used in MM and RMM, to DCT, MT, and FM. In
        almost all cases, mixup brought consistent gains. For instance, on GSC, FM
        reached 4.44\% and 3.31\% ER without and with mixup. Our PyTorch code will be
        made available upon paper acceptance at https://github.com/Labbeti/SSLH.}
}
```

Contact

Maintainer:

- Etienne Labbé "Labbeti": labbeti.pub@gmail.com

Owner

  • Name: Labbeti
  • Login: Labbeti
  • Kind: user
  • Location: Toulouse, France
  • Company: IRIT

PhD student at IRIT (Institut de Recherche en Informatique de Toulouse), working mainly on Automated Audio Captioning.

Citation (CITATION.cff)

# -*- coding: utf-8 -*-

cff-version: 1.2.0
message: If you use this software, please cite it as below.
title: SSLH
authors:
  - given-names: Etienne
    family-names: Labbé
    affiliation: IRIT
    orcid: 'https://orcid.org/0000-0002-7219-5463'
url: https://github.com/Labbeti/SSLH

preferred-citation:
  authors:
    - family-names: Cances
      given-names: Léo
    - family-names: Labbé
      given-names: Etienne
      affiliation: IRIT
      orcid: 'https://orcid.org/0000-0002-7219-5463'
    - family-names: Pellegrini
      given-names: Thomas
      affiliation: IRIT
      orcid: 'https://orcid.org/0000-0001-8984-1399'
  doi: "10.1186/s13636-022-00255-6"
  end: 23
  issue: 1
  journal: "EURASIP Journal on Audio, Speech, and Music Processing"
  start: 1
  month: 9
  title: "Comparison of semi-supervised deep learning algorithms for audio classification"
  type: newspaper-article
  url: "https://doi.org/10.1186/s13636-022-00255-6"
  volume: 2022
  year: 2022

GitHub Events

Total
  • Release event: 1
  • Watch event: 1
  • Delete event: 1
  • Create event: 2
Last Year
  • Release event: 1
  • Watch event: 1
  • Delete event: 1
  • Create event: 2

Committers

Last synced: about 1 year ago

All Time
  • Total Commits: 40
  • Total Committers: 1
  • Avg Commits per committer: 40.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 1
  • Committers: 1
  • Avg Commits per committer: 1.0
  • Development Distribution Score (DDS): 0.0
Top Committers
| Name | Email | Commits |
| --- | --- | --- |
| Labbeti | e****1@g****m | 40 |

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 1
  • Total pull requests: 2
  • Average time to close issues: about 1 hour
  • Average time to close pull requests: 5 months
  • Total issue authors: 1
  • Total pull request authors: 1
  • Average comments per issue: 1.0
  • Average comments per pull request: 0.5
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • topel (1)
Pull Request Authors
  • topel (2)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

requirements.txt pypi
  • advertorch ==0.2.3
  • black ==21.12b0
  • click ==8.0.4
  • h5py ==3.6.0
  • hydra-colorlog ==1.2.0
  • hydra-core ==1.1.2
  • librosa ==0.9.1
  • matplotlib ==3.5.2
  • numpy ==1.22.4
  • pandas ==1.4.2
  • pytorch-lightning ==1.2.10
  • pyyaml ==6.0
  • soundfile ==0.10.3.post1
  • tensorboard ==2.9.0
  • torch ==1.7.1
  • torchaudio ==0.7.2
  • torchvision ==0.8.2
  • tqdm ==4.64.0
setup.py pypi
.github/workflows/test.yaml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
environment.yaml pypi
  • advertorch ==0.2.3
  • hydra-colorlog ==1.2.0
  • hydra-core ==1.1.2