https://github.com/ivorytower152/denoising-diffusion-pytorch

Implementation of Denoising Diffusion Probabilistic Model in Pytorch


Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.4%) to scientific vocabulary
Last synced: 5 months ago

Repository

Implementation of Denoising Diffusion Probabilistic Model in Pytorch

Basic Info
  • Host: GitHub
  • Owner: IvoryTower152
  • License: MIT
  • Default Branch: main
  • Homepage:
  • Size: 2.33 MB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Fork of lucidrains/denoising-diffusion-pytorch
Created about 3 years ago · Last pushed about 3 years ago
Metadata Files
Readme License

README.md

Denoising Diffusion Probabilistic Model, in Pytorch

Implementation of Denoising Diffusion Probabilistic Model in Pytorch. It is a newer approach to generative modeling that may rival GANs. It uses denoising score matching to estimate the gradient of the data distribution, followed by Langevin sampling to draw samples from the true distribution.
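The forward (noising) process admits a closed form: with a variance schedule β_t and cumulative signal fraction ᾱ_t = ∏ₛ(1 − β_s), a noisy sample at step t is x_t = √ᾱ_t · x_0 + √(1 − ᾱ_t) · ε. A minimal pure-Python sketch of this idea (the linear schedule endpoints follow the DDPM paper; this is illustrative, not the library's internals):

```python
import math
import random

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    """Linearly spaced noise variances, as in the DDPM paper."""
    step = (beta_end - beta_start) / (timesteps - 1)
    return [beta_start + i * step for i in range(timesteps)]

def alpha_bar(betas):
    """Cumulative product of (1 - beta): how much signal survives to step t."""
    out, prod = [], 1.0
    for b in betas:
        prod *= 1.0 - b
        out.append(prod)
    return out

def q_sample(x0, t, alpha_bars, rng=random):
    """Draw x_t ~ q(x_t | x_0) = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * noise."""
    ab = alpha_bars[t]
    return [math.sqrt(ab) * x + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0) for x in x0]

betas = linear_beta_schedule(1000)
abars = alpha_bar(betas)
noisy = q_sample([0.5] * 4, t=999, alpha_bars=abars)  # nearly pure noise at the last step
```

By the final timestep ᾱ_t is close to zero, so x_T is approximately standard Gaussian noise, which is what lets sampling start from pure noise.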

This implementation was transcribed from the official TensorFlow version.

Youtube AI Educators - Yannic Kilcher | AI Coffeebreak with Letitia | Outlier

Flax implementation from YiYi Xu

Annotated code by Research Scientists / Engineers from 🤗 Huggingface

Update: Turns out none of the technicalities really matter at all | "Cold Diffusion" paper | Muse


Install

```bash
$ pip install denoising_diffusion_pytorch
```

Usage

```python
import torch
from denoising_diffusion_pytorch import Unet, GaussianDiffusion

model = Unet(
    dim = 64,
    dim_mults = (1, 2, 4, 8)
)

diffusion = GaussianDiffusion(
    model,
    image_size = 128,
    timesteps = 1000,   # number of steps
    loss_type = 'l1'    # L1 or L2
)

training_images = torch.rand(8, 3, 128, 128) # images are normalized from 0 to 1
loss = diffusion(training_images)
loss.backward()

# after a lot of training

sampled_images = diffusion.sample(batch_size = 4)
sampled_images.shape # (4, 3, 128, 128)
```
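The `loss_type` flag selects between mean absolute error (L1) and mean squared error (L2) between the predicted and true noise. In pure-Python terms (illustrative only, not the library's internals):

```python
def l1_loss(pred, target):
    """Mean absolute error."""
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)

def l2_loss(pred, target):
    """Mean squared error: penalizes large errors more heavily than L1."""
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

pred, target = [0.1, 0.4, 0.9], [0.0, 0.5, 1.0]
l1 = l1_loss(pred, target)   # 0.1
l2 = l2_loss(pred, target)   # 0.01
```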

Or, if you simply want to pass in a folder name and the desired image dimensions, you can use the Trainer class to easily train a model.

```python
from denoising_diffusion_pytorch import Unet, GaussianDiffusion, Trainer

model = Unet(
    dim = 64,
    dim_mults = (1, 2, 4, 8)
).cuda()

diffusion = GaussianDiffusion(
    model,
    image_size = 128,
    timesteps = 1000,           # number of steps
    sampling_timesteps = 250,   # number of sampling timesteps (using ddim for faster inference [see citation for ddim paper])
    loss_type = 'l1'            # L1 or L2
).cuda()

trainer = Trainer(
    diffusion,
    'path/to/your/images',
    train_batch_size = 32,
    train_lr = 8e-5,
    train_num_steps = 700000,        # total training steps
    gradient_accumulate_every = 2,   # gradient accumulation steps
    ema_decay = 0.995,               # exponential moving average decay
    amp = True                       # turn on mixed precision
)

trainer.train()
```
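Setting `sampling_timesteps` below `timesteps` makes the sampler visit only an evenly spaced subsequence of the training steps, which is the core of the DDIM speed-up. A sketch of how such a subsequence can be chosen (illustrative; the library's exact spacing may differ):

```python
def ddim_timesteps(total_steps, sampling_steps):
    """Evenly spaced subsequence of [0, total_steps), descending for sampling."""
    stride = total_steps / sampling_steps
    return sorted({int(round(i * stride)) for i in range(sampling_steps)}, reverse=True)

ts = ddim_timesteps(1000, 250)  # visits 250 steps instead of all 1000
```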

Samples and model checkpoints will be logged to ./results periodically.

Multi-GPU Training

The Trainer class is now equipped with 🤗 Accelerate. You can easily do multi-GPU training in two steps using its accelerate CLI.

At the project root directory, where the training script is, run

```bash
$ accelerate config
```

Then, in the same directory

```bash
$ accelerate launch train.py
```

Miscellaneous

1D Sequence

By popular request, a 1D Unet + Gaussian Diffusion implementation. You will have to write the training code yourself.

```python
import torch
from denoising_diffusion_pytorch import Unet1D, GaussianDiffusion1D

model = Unet1D(
    dim = 64,
    dim_mults = (1, 2, 4, 8),
    channels = 32
)

diffusion = GaussianDiffusion1D(
    model,
    seq_length = 128,
    timesteps = 1000,
    objective = 'pred_v'
)

training_seq = torch.rand(8, 32, 128) # features are normalized from 0 to 1
loss = diffusion(training_seq)
loss.backward()

# after a lot of training

sampled_seq = diffusion.sample(batch_size = 4)
sampled_seq.shape # (4, 32, 128)
```
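The `'pred_v'` objective trains the network to predict the "velocity" target introduced in Salimans & Ho's progressive-distillation paper (cited below), rather than the noise itself. With ᾱ_t as the cumulative signal fraction at step t, the target and loss are:

```latex
v_t = \sqrt{\bar\alpha_t}\,\epsilon \;-\; \sqrt{1-\bar\alpha_t}\,x_0,
\qquad
\mathcal{L} = \bigl\lVert v_\theta(x_t, t) - v_t \bigr\rVert^2
```

This parameterization tends to be better conditioned than noise prediction at very high and very low noise levels, since the target never degenerates to pure signal or pure noise.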

Citations

```bibtex
@inproceedings{NEURIPS2020_4c5bcfec,
    author    = {Ho, Jonathan and Jain, Ajay and Abbeel, Pieter},
    booktitle = {Advances in Neural Information Processing Systems},
    editor    = {H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. Lin},
    pages     = {6840--6851},
    publisher = {Curran Associates, Inc.},
    title     = {Denoising Diffusion Probabilistic Models},
    url       = {https://proceedings.neurips.cc/paper/2020/file/4c5bcfec8584af0d967f1ab10179ca4b-Paper.pdf},
    volume    = {33},
    year      = {2020}
}
```

```bibtex
@InProceedings{pmlr-v139-nichol21a,
    title     = {Improved Denoising Diffusion Probabilistic Models},
    author    = {Nichol, Alexander Quinn and Dhariwal, Prafulla},
    booktitle = {Proceedings of the 38th International Conference on Machine Learning},
    pages     = {8162--8171},
    year      = {2021},
    editor    = {Meila, Marina and Zhang, Tong},
    volume    = {139},
    series    = {Proceedings of Machine Learning Research},
    month     = {18--24 Jul},
    publisher = {PMLR},
    pdf       = {http://proceedings.mlr.press/v139/nichol21a/nichol21a.pdf},
    url       = {https://proceedings.mlr.press/v139/nichol21a.html}
}
```

```bibtex
@inproceedings{kingma2021on,
    title     = {On Density Estimation with Diffusion Models},
    author    = {Diederik P Kingma and Tim Salimans and Ben Poole and Jonathan Ho},
    booktitle = {Advances in Neural Information Processing Systems},
    editor    = {A. Beygelzimer and Y. Dauphin and P. Liang and J. Wortman Vaughan},
    year      = {2021},
    url       = {https://openreview.net/forum?id=2LdBqxc1Yv}
}
```

```bibtex
@article{Choi2022PerceptionPT,
    title   = {Perception Prioritized Training of Diffusion Models},
    author  = {Jooyoung Choi and Jungbeom Lee and Chaehun Shin and Sungwon Kim and Hyunwoo J. Kim and Sung-Hoon Yoon},
    journal = {ArXiv},
    year    = {2022},
    volume  = {abs/2204.00227}
}
```

```bibtex
@article{Karras2022ElucidatingTD,
    title   = {Elucidating the Design Space of Diffusion-Based Generative Models},
    author  = {Tero Karras and Miika Aittala and Timo Aila and Samuli Laine},
    journal = {ArXiv},
    year    = {2022},
    volume  = {abs/2206.00364}
}
```

```bibtex
@article{Song2021DenoisingDI,
    title   = {Denoising Diffusion Implicit Models},
    author  = {Jiaming Song and Chenlin Meng and Stefano Ermon},
    journal = {ArXiv},
    year    = {2021},
    volume  = {abs/2010.02502}
}
```

```bibtex
@misc{chen2022analog,
    title         = {Analog Bits: Generating Discrete Data using Diffusion Models with Self-Conditioning},
    author        = {Ting Chen and Ruixiang Zhang and Geoffrey Hinton},
    year          = {2022},
    eprint        = {2208.04202},
    archivePrefix = {arXiv},
    primaryClass  = {cs.CV}
}
```

```bibtex
@article{Qiao2019WeightS,
    title   = {Weight Standardization},
    author  = {Siyuan Qiao and Huiyu Wang and Chenxi Liu and Wei Shen and Alan Loddon Yuille},
    journal = {ArXiv},
    year    = {2019},
    volume  = {abs/1903.10520}
}
```

```bibtex
@article{Salimans2022ProgressiveDF,
    title   = {Progressive Distillation for Fast Sampling of Diffusion Models},
    author  = {Tim Salimans and Jonathan Ho},
    journal = {ArXiv},
    year    = {2022},
    volume  = {abs/2202.00512}
}
```

```bibtex
@article{Ho2022ClassifierFreeDG,
    title   = {Classifier-Free Diffusion Guidance},
    author  = {Jonathan Ho},
    journal = {ArXiv},
    year    = {2022},
    volume  = {abs/2207.12598}
}
```

```bibtex
@article{Sunkara2022NoMS,
    title   = {No More Strided Convolutions or Pooling: A New CNN Building Block for Low-Resolution Images and Small Objects},
    author  = {Raja Sunkara and Tie Luo},
    journal = {ArXiv},
    year    = {2022},
    volume  = {abs/2208.03641}
}
```

```bibtex
@inproceedings{Jabri2022ScalableAC,
    title  = {Scalable Adaptive Computation for Iterative Generation},
    author = {A. Jabri and David J. Fleet and Ting Chen},
    year   = {2022}
}
```

```bibtex
@article{Cheng2022DPMSolverPlusPlus,
    title   = {DPM-Solver++: Fast Solver for Guided Sampling of Diffusion Probabilistic Models},
    author  = {Cheng Lu and Yuhao Zhou and Fan Bao and Jianfei Chen and Chongxuan Li and Jun Zhu},
    journal = {NeurIPS 2022 (Oral)},
    year    = {2022},
    volume  = {abs/2211.01095}
}
```

Owner

  • Name: IvoryTower152
  • Login: IvoryTower152
  • Kind: user

  • Bio: Hold On To Happiness
