Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 2 of 2 committers (100.0%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (5.7%) to scientific vocabulary
Keywords
Repository
Differentiable dynamic range controller in PyTorch.
Basic Info
Statistics
- Stars: 51
- Watchers: 1
- Forks: 2
- Open Issues: 0
- Releases: 3
Topics
Metadata Files
README.md
TorchComp
Differentiable dynamic range controller in PyTorch.
Installation
```bash
pip install torchcomp
```
Compressor/Expander gain function
This function calculates the gain reduction $g[n]$ for a compressor/expander. It takes the RMS of the input signal $x[n]$ and the compressor/expander parameters as input. The function returns the gain $g[n]$ in linear scale. To use it as a regular compressor/expander, multiply the result $g[n]$ with the signal $x[n]$.
Function signature
```python
def compexp_gain(
    x_rms: torch.Tensor,
    comp_thresh: Union[torch.Tensor, float],
    comp_ratio: Union[torch.Tensor, float],
    exp_thresh: Union[torch.Tensor, float],
    exp_ratio: Union[torch.Tensor, float],
    at: Union[torch.Tensor, float],
    rt: Union[torch.Tensor, float],
) -> torch.Tensor:
    """Compressor-Expander gain function.

    Args:
        x_rms (torch.Tensor): Input signal RMS.
        comp_thresh (torch.Tensor): Compressor threshold in dB.
        comp_ratio (torch.Tensor): Compressor ratio.
        exp_thresh (torch.Tensor): Expander threshold in dB.
        exp_ratio (torch.Tensor): Expander ratio.
        at (torch.Tensor): Attack time.
        rt (torch.Tensor): Release time.

    Shape:
        - x_rms: :math:`(B, T)` where :math:`B` is the batch size and :math:`T` is the number of samples.
        - comp_thresh: :math:`(B,)` or a scalar.
        - comp_ratio: :math:`(B,)` or a scalar.
        - exp_thresh: :math:`(B,)` or a scalar.
        - exp_ratio: :math:`(B,)` or a scalar.
        - at: :math:`(B,)` or a scalar.
        - rt: :math:`(B,)` or a scalar.
    """
```
Note:
x_rms should be non-negative.
You can calculate it using $\sqrt{x^2[n]}$ and smooth it with avg.
Equations
$$ x_{\rm log}[n] = 20 \log_{10} x_{\rm rms}[n] $$
$$ g_{\rm log}[n] = \min\left(0, \left(1 - \frac{1}{CR}\right)\left(CT - x_{\rm log}[n]\right), \left(1 - \frac{1}{ER}\right)\left(ET - x_{\rm log}[n]\right)\right) $$
$$ g[n] = 10^{g_{\rm log}[n] / 20} $$
$$ \hat{g}[n] = \begin{cases} \alpha_{\rm at} g[n] + (1 - \alpha_{\rm at}) \hat{g}[n-1] & \text{if } g[n] < \hat{g}[n-1] \\ \alpha_{\rm rt} g[n] + (1 - \alpha_{\rm rt}) \hat{g}[n-1] & \text{otherwise} \end{cases} $$
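The equations above can be sketched per-sample in plain Python. This is an illustration of the math only, not the library's batched, differentiable PyTorch implementation; the function names (`static_gain`, `smooth`) and the parameter values below are made up for the example:

```python
import math

def static_gain(x_rms, comp_thresh, comp_ratio, exp_thresh, exp_ratio):
    """Static gain g[n] (linear scale) from the min() equation above."""
    x_log = 20 * math.log10(max(x_rms, 1e-12))  # avoid log(0)
    g_log = min(
        0.0,
        (1 - 1 / comp_ratio) * (comp_thresh - x_log),
        (1 - 1 / exp_ratio) * (exp_thresh - x_log),
    )
    return 10 ** (g_log / 20)

def smooth(g, g_prev, alpha_at, alpha_rt):
    """One step of the attack/release one-pole smoother for g-hat[n]."""
    alpha = alpha_at if g < g_prev else alpha_rt
    return alpha * g + (1 - alpha) * g_prev

# A -10 dB input, compressed above -20 dB at 4:1, gets
# (1 - 1/4) * (-20 - (-10)) = -7.5 dB of gain reduction.
g = static_gain(10 ** (-10 / 20), -20.0, 4.0, -40.0, 0.5)
```

Note that an expander ratio below 1 makes the expander term negative below `exp_thresh`, so the `min` attenuates quiet signals as intended.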
Block diagram
```mermaid
graph TB
    input((x))
    output((g))
    amp2db[amp2db]
    db2amp[db2amp]
    min[Min]
    delay[z^-1]
    zero( 0 )
input --> amp2db --> neg["*(-1)"] --> plusCT["+CT"] & plusET["+ET"]
plusCT --> multCS["*(1 - 1/CR)"]
plusET --> multES["*(1 - 1/ER)"]
zero & multCS & multES --> min --> db2amp
db2amp & delay --> ifelse{<}
output --> delay --> multATT["*(1 - AT)"] & multRTT["*(1 - RT)"]
subgraph Compressor
ifelse -->|yes| multAT["*AT"]
subgraph Attack
multAT & multATT --> plus1(("+"))
end
ifelse -->|no| multRT["*RT"]
subgraph Release
multRT & multRTT --> plus2(("+"))
end
end
plus1 & plus2 --> output
```
Limiter gain function
This function calculates the gain reduction $g[n]$ for a limiter. To use it as a regular limiter, multiply the result $g[n]$ with the input signal $x[n]$.
Function signature
```python
def limiter_gain(
    x: torch.Tensor,
    threshold: torch.Tensor,
    at: torch.Tensor,
    rt: torch.Tensor,
) -> torch.Tensor:
    """Limiter gain function.

    This implementation uses the same attack and release times for level
    detection and gain smoothing.

    Args:
        x (torch.Tensor): Input signal.
        threshold (torch.Tensor): Limiter threshold in dB.
        at (torch.Tensor): Attack time.
        rt (torch.Tensor): Release time.

    Shape:
        - x: :math:`(B, T)` where :math:`B` is the batch size and :math:`T` is the number of samples.
        - threshold: :math:`(B,)` or a scalar.
        - at: :math:`(B,)` or a scalar.
        - rt: :math:`(B,)` or a scalar.
    """
```
Equations
$$ x_{\rm peak}[n] = \begin{cases} \alpha_{\rm at} |x[n]| + (1 - \alpha_{\rm at}) x_{\rm peak}[n-1] & \text{if } |x[n]| > x_{\rm peak}[n-1] \\ \alpha_{\rm rt} |x[n]| + (1 - \alpha_{\rm rt}) x_{\rm peak}[n-1] & \text{otherwise} \end{cases} $$
$$ g[n] = \min\left(1, \frac{10^{T/20}}{x_{\rm peak}[n]}\right) $$
$$ \hat{g}[n] = \begin{cases} \alpha_{\rm at} g[n] + (1 - \alpha_{\rm at}) \hat{g}[n-1] & \text{if } g[n] < \hat{g}[n-1] \\ \alpha_{\rm rt} g[n] + (1 - \alpha_{\rm rt}) \hat{g}[n-1] & \text{otherwise} \end{cases} $$
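One time step of the limiter equations can be sketched in plain Python as follows. This is a minimal per-sample illustration of the math, not the library's implementation; the function name `limiter_gain_sample` and the chosen coefficients are hypothetical:

```python
def limiter_gain_sample(x, x_peak_prev, g_prev, threshold_db, alpha_at, alpha_rt):
    """One step of the limiter: peak detection, static gain, gain smoothing."""
    ax = abs(x)
    # Peak detector: attack coefficient when the level rises, release otherwise.
    alpha = alpha_at if ax > x_peak_prev else alpha_rt
    x_peak = alpha * ax + (1 - alpha) * x_peak_prev
    # Static gain: clamp the peak to the threshold, never boost (g <= 1).
    g = min(1.0, 10 ** (threshold_db / 20) / max(x_peak, 1e-12))
    # Gain smoothing with the same attack/release coefficients.
    alpha_g = alpha_at if g < g_prev else alpha_rt
    g_hat = alpha_g * g + (1 - alpha_g) * g_prev
    return x_peak, g_hat

# A full-scale sample against a -6 dB threshold pulls the gain
# toward 10^(-6/20) ~ 0.5.
peak, gain = limiter_gain_sample(1.0, 0.0, 1.0, -6.0, 1.0, 0.5)
```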
Block diagram
```mermaid
graph TB
    input((x))
    output((g))
    peak((x_peak))
    abs[abs]
    delay[z^-1]
    zero( 0 )
ifelse1{>}
ifelse2{<}
input --> abs --> ifelse1
subgraph Peak detector
ifelse1 -->|yes| multAT["*AT"]
subgraph at1 [Attack]
multAT & multATT --> plus1(("+"))
end
ifelse1 -->|no| multRT["*RT"]
subgraph rt1 [Release]
multRT & multRTT --> plus2(("+"))
end
end
plus1 & plus2 --> peak
peak --> delay --> multATT["*(1 - AT)"] & multRTT["*(1 - RT)"] & ifelse1
peak --> amp2db[amp2db] --> neg["*(-1)"] --> plusT["+T"]
zero & plusT --> min[Min] --> db2amp[db2amp] --> ifelse2{<}
subgraph gain smoothing
ifelse2 -->|yes| multAT2["*AT"]
subgraph at2 [Attack]
multAT2 & multATT2 --> plus3(("+"))
end
ifelse2 -->|no| multRT2["*RT"]
subgraph rt2 [Release]
multRT2 & multRTT2 --> plus4(("+"))
end
end
output --> delay2[z^-1] --> multATT2["*(1 - AT)"] & multRTT2["*(1 - RT)"] & ifelse2
plus3 & plus4 --> output
```
Average filter
Function signature
```python
def avg(rms: torch.Tensor, avg_coef: Union[torch.Tensor, float]):
    """Compute the running average of a signal.

    Args:
        rms (torch.Tensor): Input signal.
        avg_coef (torch.Tensor): Coefficient for the average RMS.

    Shape:
        - rms: :math:`(B, T)` where :math:`B` is the batch size and :math:`T` is the number of samples.
        - avg_coef: :math:`(B,)` or a scalar.
    """
```
Equations
$$ \hat{x}_{\rm rms}[n] = \alpha_{\rm avg} x_{\rm rms}[n] + (1 - \alpha_{\rm avg}) \hat{x}_{\rm rms}[n-1] $$
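The recursion above is a one-pole lowpass filter. A plain-Python sketch (the name `running_avg` is made up for illustration; the library's `avg` operates on batched tensors) also shows how smoothing $\sqrt{x^2[n]} = |x[n]|$ yields the kind of level signal usable as `x_rms`:

```python
def running_avg(xs, avg_coef, y0=0.0):
    """y[n] = a * x[n] + (1 - a) * y[n-1], per the equation above."""
    out, y = [], y0
    for x in xs:
        y = avg_coef * x + (1 - avg_coef) * y
        out.append(y)
    return out

# Smoothing |x[n]| gives a simple level detector for the gain functions.
levels = running_avg([abs(x) for x in [1.0, -1.0, 1.0]], 0.5)
```

Larger `avg_coef` values track the input faster; smaller values give a smoother, slower estimate.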
TODO
- [x] CUDA acceleration in Numba
- [ ] PyTorch CPP extension
- [ ] Native CUDA extension
- [x] Forward mode autograd
- [ ] Examples
Citation
If you find this repository useful in your research, please cite our work with the following BibTeX entry:

```bibtex
@inproceedings{ycy2024diffapf,
    title={Differentiable All-pole Filters for Time-varying Audio Systems},
    author={Chin-Yun Yu and Christopher Mitcheltree and Alistair Carson and Stefan Bilbao and Joshua D. Reiss and György Fazekas},
    booktitle={International Conference on Digital Audio Effects (DAFx)},
    year={2024},
    pages={345--352},
}
```
Owner
- Name: DiffAPF
- Login: DiffAPF
- Kind: organization
- Location: United Kingdom
- Website: https://diffapf.github.io/web/
- Repositories: 5
- Profile: https://github.com/DiffAPF
Supplementary materials for the paper "Differentiable All-pole Filters for Time-varying Audio Systems".
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
title: "TorchComp: fast, efficient, and differentiable dynamic range control in PyTorch"
message: "If you use this software, please cite it using the metadata from this file."
authors:
  - family-names: Yu
    given-names: Chin-Yun
    orcid: "https://orcid.org/0000-0003-3782-2713"
url: "https://github.com/yoyololicon/torchcomp"
keywords:
  - differentiable DSP
  - dynamic range control
license: MIT
version: 0.1
date-released: 2024-04-11
preferred-citation:
  type: generic
  title: "Differentiable All-pole Filters for Time-varying Audio Systems"
  authors:
    - given-names: Chin-Yun
      family-names: Yu
      email: chin-yun.yu@qmul.ac.uk
      affiliation: Queen Mary University of London
      orcid: 'https://orcid.org/0000-0003-3782-2713'
    - given-names: Christopher
      family-names: Mitcheltree
      email: c.mitcheltree@qmul.ac.uk
      affiliation: Queen Mary University of London
    - given-names: Alistair
      family-names: Carson
      email: alistair.carson@ed.ac.uk
      affiliation: University of Edinburgh
    - given-names: Stefan
      family-names: Bilbao
      email: sbilbao@ed.ac.uk
      affiliation: University of Edinburgh
    - given-names: Joshua D.
      family-names: Reiss
      email: joshua.reiss@qmul.ac.uk
      affiliation: Queen Mary University of London
    - given-names: György
      family-names: Fazekas
      email: george.fazekas@qmul.ac.uk
      affiliation: Queen Mary University of London
  status: preprint
  month: 4
  year: 2024
  identifiers:
    - type: other
      value: "arXiv:2404.07970"
      description: The arXiv preprint of the paper
  # repository-code: "https://github.com/DiffAPF"
  url: "https://diffapf.github.io/web/"
```
GitHub Events
Total
- Watch event: 7
- Delete event: 1
- Push event: 2
- Pull request event: 2
- Create event: 1
Last Year
- Watch event: 7
- Delete event: 1
- Push event: 2
- Pull request event: 2
- Create event: 1
Committers
Last synced: over 1 year ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Chin-Yun Yu | c****u@q****k | 53 |
| Chin-Yun Yu | c****6@b****k | 5 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 8 months ago
All Time
- Total issues: 0
- Total pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: 1 minute
- Total issue authors: 0
- Total pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: 1 minute
- Issue authors: 0
- Pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
- yoyolicoris (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- actions/checkout v3 composite
- actions/setup-python v3 composite
- numba *
- numpy *
- torch *
- torchaudio *
- torchlpc *
- torch *
- actions/checkout v3 composite
- actions/setup-python v3 composite
- pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite