https://github.com/carlosholivan/audiogenerationdiffusion
State-of-the-art of Audio Generation with Diffusion Models
Science Score: 10.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file
- ○ codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 6.0%)
Keywords
audio
audio-generation
deep-learning
diffusion
diffusion-models
machine-learning
multimodal-deep-learning
Last synced: 5 months ago
Repository
State-of-the-art of Audio Generation with Diffusion Models
Basic Info
- Host: GitHub
- Owner: carlosholivan
- Default Branch: master
- Homepage: https://carlosholivan.github.io/AudioGenerationDiffusion/
- Size: 179 KB
Statistics
- Stars: 3
- Watchers: 2
- Forks: 1
- Open Issues: 0
- Releases: 0
Topics
audio
audio-generation
deep-learning
diffusion
diffusion-models
machine-learning
multimodal-deep-learning
Created about 3 years ago
· Last pushed about 3 years ago
https://github.com/carlosholivan/AudioGenerationDiffusion/blob/master/
# AUDIO GENERATION WITH DIFFUSION MODELS

This repository is maintained by [**Carlos Hernández-Oliván**](https://carlosholivan.github.io/index.html) (carloshero@unizar.es) and presents the state of the art of audio generation with diffusion models. Make a pull request if you want to contribute to this reference list. All images belong to their corresponding authors.

## Table of Contents

1. [Papers](#papers)
   - [2022](#2022)
   - [2021](#2021)
2. [Diffusion theory papers](#theory)
3. [Resources](#resources)

## 1. Papers

### 2022

#### MM-Diffusion

Ruan, Ludan, Ma, Yiyang, Yang, Huan, He, Huiguo, Liu, Bei, Fu, Jianlong, Yuan, Nicholas Jing, Jin, Qin, & Guo, Baining. (2022). MM-Diffusion: Learning Multi-Modal Diffusion Models for Joint Audio and Video Generation. [Paper](https://arxiv.org/pdf/2212.09478v1.pdf) [GitHub](https://github.com/researchmm/MM-Diffusion)

### 2021

#### DiffWave (ICLR 2021)

Kong, Z., Ping, W., Huang, J., Zhao, K., & Catanzaro, B. (2020). DiffWave: A versatile diffusion model for audio synthesis. ICLR 2021. [Paper](https://arxiv.org/pdf/2009.09761.pdf)

## 2. Theory

## 3. Resources

- [Lilian Weng's blog post on diffusion models](https://lilianweng.github.io/posts/2021-07-11-diffusion-models/)

[↑ Table of Contents](#index)
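As shared background for the papers listed above: DiffWave and related models are built on the DDPM forward noising process, which gradually corrupts a clean waveform x_0 into noise via a closed-form marginal q(x_t | x_0). A minimal illustrative sketch follows; the schedule values (`T`, `beta_start`, `beta_end`) and the toy sine-wave "audio" are assumptions for demonstration, not the settings of any specific paper.

```python
import numpy as np

def linear_beta_schedule(T=50, beta_start=1e-4, beta_end=0.05):
    """Linear noise schedule beta_1..beta_T (illustrative values)."""
    return np.linspace(beta_start, beta_end, T)

def forward_diffuse(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) * I)."""
    noise = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return xt, noise

# Toy "audio": one second of a 440 Hz sine at 16 kHz.
sr = 16000
x0 = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)

betas = linear_beta_schedule()
alpha_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention factor

rng = np.random.default_rng(0)
xt, eps = forward_diffuse(x0, t=25, alpha_bar=alpha_bar, rng=rng)
```

A denoising network (the part each paper designs differently) is then trained to predict `eps` from `xt` and `t`; generation runs the process in reverse, starting from pure noise.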
Owner
- Name: Carlos Hernández Oliván
- Login: carlosholivan
- Kind: user
- Location: Zaragoza, Spain
- Company: Universidad de Zaragoza
- Website: carlosholivan.github.io
- Twitter: carlosheroliv
- Repositories: 7
- Profile: https://github.com/carlosholivan
PhD student researching in Machine Learning and Music.