torchlpc

Fast and differentiable time domain all-pole filter in PyTorch.

https://github.com/diffapf/torchlpc

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.9%) to scientific vocabulary

Keywords

ddsp linear-predictive-coding speech-synthesis time-varying-filter time-varying-systems
Last synced: 6 months ago

Repository

Fast and differentiable time domain all-pole filter in PyTorch.

Basic Info
  • Host: GitHub
  • Owner: DiffAPF
  • License: mit
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 101 KB
Statistics
  • Stars: 65
  • Watchers: 4
  • Forks: 5
  • Open Issues: 6
  • Releases: 1
Topics
ddsp linear-predictive-coding speech-synthesis time-varying-filter time-varying-systems
Created over 2 years ago · Last pushed 7 months ago
Metadata Files
Readme License Citation

README.md

TorchLPC


torchlpc provides a PyTorch implementation of the Linear Predictive Coding (LPC) filter, also known as an all-pole filter. It's fast, differentiable, and supports batched inputs with time-varying filter coefficients.

Given an input signal $\mathbf{x} \in \mathbb{R}^T$ and time-varying LPC coefficients $\mathbf{A} \in \mathbb{R}^{T \times N}$ with an order of $N$, the LPC filter is defined as:

$$ y_t = x_t - \sum_{i=1}^{N} A_{t,i} y_{t-i}. $$
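For intuition, this recursion can be sketched as a plain-Python reference loop. This is a deliberately slow illustration with a made-up name, not torchlpc's optimized implementation; it assumes zero initial conditions (outputs before $t=0$ are zero):

```python
def allpole_reference(x, A):
    """Direct evaluation of y[t] = x[t] - sum_{i=1..N} A[t][i-1] * y[t-i].

    x: list of T input samples; A: T lists of N coefficients.
    Outputs before t = 0 are taken to be zero (zero initial conditions).
    """
    T, N = len(x), len(A[0])
    y = []
    for t in range(T):
        acc = x[t]
        for i in range(1, N + 1):
            if t - i >= 0:
                acc -= A[t][i - 1] * y[t - i]
        y.append(acc)
    return y

# First-order example: y[t] = x[t] - 0.5 * y[t-1] driven by an impulse
print(allpole_reference([1.0, 0.0, 0.0], [[0.5]] * 3))  # → [1.0, -0.5, 0.25]
```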

Usage

```python
import torch
from torchlpc import sample_wise_lpc

# Create a batch of 10 signals, each with 100 time steps
x = torch.randn(10, 100)

# Create a batch of 10 sets of LPC coefficients, each with 100 time steps and an order of 3
A = torch.randn(10, 100, 3)

# Apply LPC filtering
y = sample_wise_lpc(x, A)

# Optionally, you can provide initial values for the output signal (default is 0)
zi = torch.randn(10, 3)
y = sample_wise_lpc(x, A, zi=zi)

# Return the delay values similar to scipy.signal.lfilter
y, zf = sample_wise_lpc(x, A, zi=zi, return_zf=True)
```

Installation

```bash
pip install torchlpc
```

or from source

```bash
pip install git+https://github.com/DiffAPF/torchlpc.git
```

To run it on an NVIDIA GPU, make sure the CUDA toolkit is installed and that its version is compatible with your PyTorch installation.

MacOS

To compile with OpenMP support on MacOS, you need to install libomp via Homebrew. Also, use llvm@15 as the C++ compiler to ensure compatibility with OpenMP.

```bash
brew install libomp
export CXX=$(brew --prefix llvm@15)/bin/clang++
export LDFLAGS="-L/usr/local/opt/libomp/lib"
export CPPFLAGS="-I/usr/local/opt/libomp/include"
```

After performing the above steps, you can install torchlpc as usual.

Derivation of the gradients of the LPC filter

The details of the derivation can be found in our preprints[^1][^2]. We show that, given the instantaneous gradient $\frac{\partial \mathcal{L}}{\partial y_t}$ where $\mathcal{L}$ is the loss function, the gradients of the LPC filter with respect to the input signal $\bf x$ and the filter coefficients $\bf A$ can also be expressed through a time-varying filter:

$$ \frac{\partial \mathcal{L}}{\partial x_t} = \frac{\partial \mathcal{L}}{\partial y_t} - \sum_{i=1}^{N} A_{t+i,i} \frac{\partial \mathcal{L}}{\partial x_{t+i}} $$

$$ \frac{\partial \mathcal{L}}{\partial \bf A} = -\begin{vmatrix} \frac{\partial \mathcal{L}}{\partial x_1} & 0 & \dots & 0 \\ 0 & \frac{\partial \mathcal{L}}{\partial x_2} & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \frac{\partial \mathcal{L}}{\partial x_T} \end{vmatrix} \begin{vmatrix} y_0 & y_{-1} & \dots & y_{-N+1} \\ y_1 & y_0 & \dots & y_{-N+2} \\ \vdots & \vdots & \ddots & \vdots \\ y_{T-1} & y_{T-2} & \dots & y_{T-N} \end{vmatrix}. $$
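The first of these recursions runs backwards in time: it is the same all-pole filter applied to $\frac{\partial \mathcal{L}}{\partial \mathbf{y}}$ in reverse, with time-shifted coefficients. A plain-Python sketch (hypothetical name, zero initial conditions; terms with $t+i$ past the end of the signal are zero):

```python
def allpole_backward(grad_y, A):
    """dL/dx[t] = dL/dy[t] - sum_{i=1..N} A[t+i][i-1] * dL/dx[t+i],
    evaluated for t = T-1 down to 0 -- the forward filter run in reverse
    with shifted coefficients."""
    T, N = len(grad_y), len(A[0])
    grad_x = [0.0] * T
    for t in reversed(range(T)):
        acc = grad_y[t]
        for i in range(1, N + 1):
            if t + i < T:
                acc -= A[t + i][i - 1] * grad_x[t + i]
        grad_x[t] = acc
    return grad_x

# First-order check with constant coefficient 0.5 and loss L = y[2]:
# the forward impulse response is [1, -0.5, 0.25], and dL/dx mirrors it.
print(allpole_backward([0.0, 0.0, 1.0], [[0.5]] * 3))  # → [0.25, -0.5, 1.0]
```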

Gradients for the initial condition $y_t|_{t \leq 0}$

The initial conditions provide an entry point at $t=1$ for filtering, as we cannot evaluate $t=-\infty$. Let us assume $A_{t, :}|_{t \leq 0} = 0$ so $y_t|_{t \leq 0} = x_t|_{t \leq 0}$, which also means $\frac{\partial \mathcal{L}}{\partial y_t}|_{t \leq 0} = \frac{\partial \mathcal{L}}{\partial x_t}|_{t \leq 0}$. Thus, the initial condition gradients are

$$ \frac{\partial \mathcal{L}}{\partial y_t} = \frac{\partial \mathcal{L}}{\partial x_t} = -\sum_{i=1-t}^{N} A_{t+i,i} \frac{\partial \mathcal{L}}{\partial x_{t+i}} \quad \text{for } -N < t \leq 0. $$

In practice, we pad $N$ and $N \times N$ zeros to the beginning of $\frac{\partial \mathcal{L}}{\partial \bf y}$ and $\mathbf{A}$ before evaluating $\frac{\partial \mathcal{L}}{\partial \bf x}$. The first $N$ outputs are the gradients with respect to $y_t|_{t \leq 0}$ and the rest are with respect to $x_t|_{t > 0}$.

Time-invariant filtering

In the time-invariant setting, $A_{t, i} = A_{1, i} \; \forall t \in [1, T]$, and the filter simplifies to

$$ y_t = x_t - \sum_{i=1}^{N} a_i y_{t-i}, \quad \mathbf{a} = A_{1,:}. $$

The gradients $\frac{\partial \mathcal{L}}{\partial \mathbf{x}}$ are obtained by filtering $\frac{\partial \mathcal{L}}{\partial \mathbf{y}}$ with $\mathbf{a}$ backwards in time, just as in the time-varying case, and $\frac{\partial \mathcal{L}}{\partial \mathbf{a}}$ is simply a vector-matrix multiplication:

$$ \frac{\partial \mathcal{L}}{\partial \mathbf{a}^T} = -\frac{\partial \mathcal{L}}{\partial \mathbf{x}^T} \begin{vmatrix} y_0 & y_{-1} & \dots & y_{-N+1} \\ y_1 & y_0 & \dots & y_{-N+2} \\ \vdots & \vdots & \ddots & \vdots \\ y_{T-1} & y_{T-2} & \dots & y_{T-N} \end{vmatrix}. $$

This algorithm is more efficient than [^3] because it only needs one pass of filtering to get the two gradients while the latter needs two.
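The vector-matrix product above amounts to correlating $\frac{\partial \mathcal{L}}{\partial \mathbf{x}}$ with delayed copies of the output. A small numeric sketch under the zero-initial-condition assumption (illustrative name, not the library's API):

```python
def grad_coeffs(grad_x, y, N):
    """dL/da[i] = -sum_t dL/dx[t] * y[t-i] for i = 1..N (y[t] = 0 for t < 0)."""
    T = len(y)
    return [-sum(grad_x[t] * y[t - i] for t in range(i, T)) for i in range(1, N + 1)]

# First-order check: y[t] = x[t] - a*y[t-1] with a = 0.5 and an impulse input
# gives y = [1, -0.5, 0.25]; with L = y[2] = a^2, backpropagating through the
# filter yields dL/dx = [0.25, -0.5, 1.0], and the exact derivative dL/da = 2a = 1.
print(grad_coeffs([0.25, -0.5, 1.0], [1.0, -0.5, 0.25], 1))  # → [1.0]
```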

[^1]: Differentiable All-pole Filters for Time-varying Audio Systems.
[^2]: Differentiable Time-Varying Linear Prediction in the Context of End-to-End Analysis-by-Synthesis.
[^3]: Singing Voice Synthesis Using Differentiable LPC and Glottal-Flow-Inspired Wavetables.

TODO

  • [x] Use PyTorch C++ extension for faster computation.
  • [x] Use native CUDA kernels for GPU computation.
  • [ ] Support Metal for MacOS.
  • [ ] Add examples.

Related Projects

  • torchcomp: differentiable compressors that use torchlpc for differentiable backpropagation.
  • jaxpole: equivalent implementation in JAX by @rodrigodzf.

Citation

If you find this repository useful in your research, please cite our work with the following BibTeX entries:

```bibtex
@inproceedings{ycy2024diffapf,
  title = {Differentiable All-pole Filters for Time-varying Audio Systems},
  author = {Chin-Yun Yu and Christopher Mitcheltree and Alistair Carson and Stefan Bilbao and Joshua D. Reiss and György Fazekas},
  booktitle = {International Conference on Digital Audio Effects (DAFx)},
  year = {2024},
  pages = {345--352},
}

@inproceedings{ycy2024golf,
  title = {Differentiable Time-Varying Linear Prediction in the Context of End-to-End Analysis-by-Synthesis},
  author = {Chin-Yun Yu and György Fazekas},
  year = {2024},
  booktitle = {Proc. Interspeech},
  pages = {1820--1824},
  doi = {10.21437/Interspeech.2024-1187},
}
```

Owner

  • Name: DiffAPF
  • Login: DiffAPF
  • Kind: organization
  • Location: United Kingdom

Supplementary materials for the paper "Differentiable All-pole Filters for Time-varying Audio Systems".

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Yu"
    given-names: "Chin-Yun"
    orcid: "https://orcid.org/0000-0003-3782-2713"
title: "TorchLPC: fast, efficient, and differentiable time-varying LPC filtering in PyTorch"
version: 0.3.1
date-released: 2023-07-09
url: "https://github.com/DiffAPF/torchlpc"
keywords:
  - differentiable DSP
  - all-pole filters
  - linear prediction
license: MIT
preferred-citation:
  type: generic
  title: "Differentiable All-pole Filters for Time-varying Audio Systems"
  authors:
  - given-names: Chin-Yun
    family-names: Yu
    email: chin-yun.yu@qmul.ac.uk
    affiliation: Queen Mary University of London
    orcid: 'https://orcid.org/0000-0003-3782-2713'
  - given-names: Christopher
    family-names: Mitcheltree
    email: c.mitcheltree@qmul.ac.uk
    affiliation: Queen Mary University of London
  - given-names: Alistair
    family-names: Carson
    email: alistair.carson@ed.ac.uk
    affiliation: University of Edinburgh
  - given-names: Stefan
    family-names: Bilbao
    email: sbilbao@ed.ac.uk
    affiliation: University of Edinburgh
  - given-names: Joshua D.
    family-names: Reiss
    email: joshua.reiss@qmul.ac.uk
    affiliation: Queen Mary University of London
  - given-names: György
    family-names: Fazekas
    email: george.fazekas@qmul.ac.uk
    affiliation: Queen Mary University of London
  status: preprint
  month: 4
  year: 2024
  identifiers:
    - type: other
      value: "arXiv:2404.07970"
      description: The ArXiv preprint of the paper
  url: "https://diffapf.github.io/web/"

GitHub Events

Total
  • Create event: 10
  • Release event: 1
  • Issues event: 10
  • Watch event: 10
  • Delete event: 8
  • Issue comment event: 10
  • Push event: 52
  • Pull request review event: 22
  • Pull request review comment event: 21
  • Pull request event: 14
  • Fork event: 2
Last Year
  • Create event: 10
  • Release event: 1
  • Issues event: 10
  • Watch event: 10
  • Delete event: 8
  • Issue comment event: 10
  • Push event: 52
  • Pull request review event: 22
  • Pull request review comment event: 21
  • Pull request event: 14
  • Fork event: 2

Committers

Last synced: 10 months ago

All Time
  • Total Commits: 83
  • Total Committers: 1
  • Avg Commits per committer: 83.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 15
  • Committers: 1
  • Avg Commits per committer: 15.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Chin-Yun Yu y****1@g****m 83

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 7
  • Total pull requests: 14
  • Average time to close issues: about 2 months
  • Average time to close pull requests: 27 days
  • Total issue authors: 3
  • Total pull request authors: 2
  • Average comments per issue: 0.43
  • Average comments per pull request: 0.43
  • Merged pull requests: 6
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 6
  • Pull requests: 13
  • Average time to close issues: about 2 months
  • Average time to close pull requests: 1 day
  • Issue authors: 3
  • Pull request authors: 1
  • Average comments per issue: 0.33
  • Average comments per pull request: 0.46
  • Merged pull requests: 6
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • yoyololicon (4)
  • philgzl (2)
  • ybourdin (1)
Pull Request Authors
  • yoyolicoris (13)
  • christhetree (1)
Top Labels
Issue Labels
bug (1)
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 506 last-month
  • Total dependent packages: 2
  • Total dependent repositories: 1
  • Total versions: 11
  • Total maintainers: 1
pypi.org: torchlpc

Fast, efficient, and differentiable time-varying LPC filtering in PyTorch.

  • Versions: 11
  • Dependent Packages: 2
  • Dependent Repositories: 1
  • Downloads: 506 Last month
Rankings
Dependent packages count: 10.0%
Downloads: 14.7%
Stargazers count: 14.8%
Average: 18.2%
Dependent repos count: 21.7%
Forks count: 29.8%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/black.yml actions
  • actions/checkout v2 composite
  • psf/black stable composite
.github/workflows/python-package.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
.github/workflows/python-publish.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite
requirements.txt pypi
  • numba *
  • numpy *
  • torch *
setup.py pypi
  • torch *