fft-conv-pytorch
Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch. Much faster than direct convolutions for large kernel sizes.
Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ✓ Committers with academic emails: 1 of 8 committers (12.5%) from academic institutions
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (8.1%) to scientific vocabulary
Basic Info
Statistics
- Stars: 502
- Watchers: 7
- Forks: 61
- Open Issues: 7
- Releases: 8
Metadata Files
README.md
fft-conv-pytorch
Implementation of 1D, 2D, and 3D FFT convolutions in PyTorch.
* Faster than direct convolution for large kernels.
* Much slower than direct convolution for small kernels.
* In my local tests, FFT convolution is faster when the kernel has more than ~100 elements.
    * The exact crossover depends on your machine and PyTorch version.
    * See also the benchmarks below.
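The speedup comes from the convolution theorem: pointwise multiplication in the frequency domain replaces the O(N·K) sliding-window sum with O(N log N) FFTs. Here is a minimal NumPy sketch of the idea (illustrative only, not this library's implementation):

```python
import numpy as np

def fft_conv1d(signal, kernel):
    """Linear convolution via the convolution theorem:
    conv(a, b) = IFFT(FFT(a) * FFT(b)).
    Both inputs are zero-padded to the full output length
    N + K - 1 to avoid circular wrap-around."""
    n = len(signal) + len(kernel) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

signal = np.random.randn(1024)
kernel = np.random.randn(128)

direct = np.convolve(signal, kernel)   # O(N * K) sliding window
via_fft = fft_conv1d(signal, kernel)   # O(N log N) FFTs

print(np.allclose(direct, via_fft))  # True (up to floating-point error)
```

Note that `torch.fft` exposes the same `rfft`/`irfft` primitives, which is what makes a PyTorch implementation of this trick natural.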
Install
Using pip:
```bash
pip install fft-conv-pytorch
```
From source:
```bash
git clone https://github.com/fkodom/fft-conv-pytorch.git
cd fft-conv-pytorch
pip install .
```
Example Usage
```python
import torch
from fft_conv_pytorch import fft_conv, FFTConv1d

# Create dummy data.
#     Data shape:   (batch, channels, length)
#     Kernel shape: (out_channels, in_channels, kernel_size)
#     Bias shape:   (out_channels, )
# For ordinary 1D convolution, simply set batch=1.
signal = torch.randn(3, 3, 1024 * 1024)
kernel = torch.randn(2, 3, 128)
bias = torch.randn(2)

# Functional execution. (Easiest for generic use cases.)
out = fft_conv(signal, kernel, bias=bias)

# Object-oriented execution. (Requires some extra work, since the
# defined classes were designed for use in neural networks.)
fftconv = FFTConv1d(3, 2, 128, bias=True)
fftconv.weight = torch.nn.Parameter(kernel)
fftconv.bias = torch.nn.Parameter(bias)
out = fftconv(signal)
```
Benchmarks
Benchmarking FFT convolution against the direct convolution from PyTorch in 1D, 2D, and 3D. The exact times are heavily dependent on your local machine, but relative scaling with kernel size is always the same.
| Dimensions | Input Size   | Input Channels | Output Channels | Bias | Padding | Stride | Dilation |
|------------|--------------|----------------|-----------------|------|---------|--------|----------|
| 1          | (4096)       | 4              | 4               | True | 0       | 1      | 1        |
| 2          | (512, 512)   | 4              | 4               | True | 0       | 1      | 1        |
| 3          | (64, 64, 64) | 4              | 4               | True | 0       | 1      | 1        |
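As a rough illustration of the crossover behavior these benchmarks measure, here is a hypothetical 1D timing harness in NumPy (the `fft_conv1d` helper and kernel sizes are my own, not the project's benchmark code):

```python
import time
import numpy as np

def fft_conv1d(signal, kernel):
    # Linear convolution via FFT, zero-padded to the full output length.
    n = len(signal) + len(kernel) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(kernel, n), n)

signal = np.random.randn(4096)
for k in (8, 128, 1024):
    kernel = np.random.randn(k)

    t0 = time.perf_counter()
    direct = np.convolve(signal, kernel)
    t_direct = time.perf_counter() - t0

    t0 = time.perf_counter()
    via_fft = fft_conv1d(signal, kernel)
    t_fft = time.perf_counter() - t0

    # Both methods compute the same linear convolution.
    assert np.allclose(direct, via_fft)
    print(f"kernel={k:4d}  direct={t_direct * 1e3:7.3f} ms  fft={t_fft * 1e3:7.3f} ms")
```

The FFT cost is nearly flat in kernel size, while the direct cost grows linearly with it, which is why FFT convolution wins only for sufficiently large kernels.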

Owner
- Name: Frank Odom
- Login: fkodom
- Kind: user
- Location: Huntsville, AL
- Company: Plainsight
- Repositories: 7
- Profile: https://github.com/fkodom
Director of Innovation at Plainsight (@sixgill). I like neural nets, and neural nets like me.
Citation (CITATION.cff)
```yaml
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: fkodom/fft-conv-pytorch
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Frank Odom
    name-particle: Frank
    family-names: Odom
    name-suffix: III
    email: frank.odom.iii@gmail.com
    affiliation: Plainsight
```
GitHub Events
Total
- Watch event: 31
- Pull request review comment event: 1
- Pull request review event: 2
- Fork event: 3
Last Year
- Watch event: 31
- Pull request review comment event: 1
- Pull request review event: 2
- Fork event: 3
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Frank Odom | f****i@g****m | 25 |
| Frank Odom | f****m@p****i | 9 |
| Frank Odom | f****m@r****m | 5 |
| aretor | a****h@g****m | 4 |
| Chin Yun Yu | y****1@g****m | 4 |
| Alex Hagen | a****n@p****v | 4 |
| papkov | m****v@g****m | 3 |
| Frank Odom | f****i@m****m | 1 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 14
- Total pull requests: 11
- Average time to close issues: 8 days
- Average time to close pull requests: 4 months
- Total issue authors: 14
- Total pull request authors: 6
- Average comments per issue: 2.57
- Average comments per pull request: 2.0
- Merged pull requests: 8
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 1
- Pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- libonwpu (1)
- OasisArtisan (1)
- williamlzw (1)
- StephenHogg (1)
- hmaarrfk (1)
- Ti-Oluwanimi (1)
- yoyololicon (1)
- vaesl (1)
- fshamsafar (1)
- aminaab96 (1)
- dwromero (1)
- lim1011 (1)
- RobinhoodKi (1)
- jelly114514 (1)
Pull Request Authors
- fkodom (3)
- yoyololicon (3)
- aretor (2)
- alexhagen (1)
- papkov (1)
- antonfrancois (1)
Dependencies
- numpy *
- actions/checkout v2 composite
- actions/setup-python v2 composite
- pypa/gh-action-pypi-publish release/v1 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite