adaptivediffusion

[NeurIPS'24] Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy

https://github.com/alpha-innovator/adaptivediffusion

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.5%) to scientific vocabulary

Keywords

adaptive-inference diffusion-models efficient-inference model-acceleration stable-diffusion training-free
Last synced: 6 months ago

Repository

[NeurIPS'24] Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy

Basic Info
  • Host: GitHub
  • Owner: Alpha-Innovator
  • License: apache-2.0
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 8.63 MB
Statistics
  • Stars: 71
  • Watchers: 3
  • Forks: 4
  • Open Issues: 1
  • Releases: 0
Topics
adaptive-inference diffusion-models efficient-inference model-acceleration stable-diffusion training-free
Created over 1 year ago · Last pushed about 1 year ago
Metadata Files
Readme Contributing License Code of conduct Citation

README.md


NeurIPS-2024: Noise Prediction Can Be Adaptively Skipped for Different Prompts Without Training!

[[Paper]](https://arxiv.org/pdf/2410.09873)    [[Project page]](https://jiakangyuan.github.io/AdaptiveDiffusion-project-page/)   [[Huggingface]](https://huggingface.co/datasets/HankYe/Sampled_AIGCBench_text2image_ar_0.625)



Introduction

This is the up-to-date official implementation of AdaptiveDiffusion from the paper, Training-free Adaptive Diffusion with Bounded Difference Approximation Strategy. AdaptiveDiffusion is an adaptive inference paradigm built around a third-order latent differential estimator that decides whether the noise prediction from a previous timestep can be reused to denoise the current timestep. The resulting skipping strategy adaptively approximates the optimal skipping path for each prompt based on the third-order latent differential value.
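
A minimal sketch of the skipping criterion, distilled from the pipeline code shown in the Quickstart below (the helper name `can_skip` is hypothetical, not part of the repo's API): with d_k = mean|x_k - x_{k-1}| denoting the first-order latent difference at step k, the cached noise prediction is reused whenever |(d_t + d_{t-2})/2 - d_{t-1}| <= threshold * d_{t-1}, i.e., whenever the third-order latent difference is small.

```python
import torch

def can_skip(latents, threshold=0.01):
    """Hypothetical helper illustrating the third-order test; the repo
    implements the equivalent logic inside estimate_skipping (Step Two).

    latents: the last four denoising latents [x_{t-3}, ..., x_t].
    Returns True when the cached noise prediction may be reused.
    """
    # First-order differences d_{t-2}, d_{t-1}, d_t between consecutive latents
    d = [(a - b).abs().mean() for a, b in zip(latents[1:], latents[:-1])]
    cur_diff, prev_diff, prev_prev_diff = d[-1], d[-2], d[-3]
    # Third-order test: (d_t - 2*d_{t-1} + d_{t-2}) / 2, relative to d_{t-1}
    return abs((cur_diff + prev_prev_diff) / 2 - prev_diff) <= prev_diff * threshold
```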

AdaptiveDiffusion offers three core components:

  • Training-free adaptive acceleration pipelines that reduce the number of noise-prediction steps, yielding different skipping paths for different prompts.
  • Unified skipping strategy for both image and video generation models.
  • Interchangeable noise schedulers for different diffusion speeds and output quality.

Installation

Please follow the installation guide to complete the installation. If you want to run the evaluation, clean-fid should also be installed for both image and video metrics:

pip install git+https://github.com/zhijian-liu/torchprofile datasets torchmetrics dominate clean-fid

Quickstart

Thanks to the unified inference pipelines in diffusers, it is easy to deploy the third-order estimator on various diffusion pipelines to achieve adaptive diffusion.

Step One

Select the target pipeline that you want to accelerate. To compare against the original diffusion results, copy the pipeline class into sparse_pipeline.

Step Two

Modify the pipeline you just copied into sparse_pipeline. Four places need modification.

  1. Pipeline Initialization

```python
class TargetPipeline(
    # ... existing code ...
):
    def __init__(
        # ... existing code ...
        threshold: float = 0.01,    # default threshold
        max_skip_steps: int = 4,    # default max number of consecutively skipped steps
    ):
        # ... existing code ...
        self.noise_pred = None      # cached noise prediction
        self.prev_latents = []
        self.mask = []
        self.diff_list = []
        self.max_skip_steps = max_skip_steps
        self.threshold = threshold
```
  2. Estimator function design and Reset function definition in the target class.

```python
class TargetPipeline(
    # ... existing code ...
):
    # ... existing code ...

    def estimate_skipping(self, latent):
        # First-order latent differences: current step vs. the two previous steps
        prev_latent = self.prev_latents[-1]
        prev_diff = self.diff_list[-1]
        prev_prev_diff = self.diff_list[-2]
        cur_diff = (latent - prev_latent).abs().mean()
        self.diff_list.append(cur_diff)
        # Force a full noise prediction after max_skip_steps consecutive skips
        if len(self.mask) > 4 and not any(self.mask[-self.max_skip_steps:]):
            return True
        # Third-order test: reuse the cached prediction (skip) when the second
        # difference of the latent differences is small relative to prev_diff
        if abs((cur_diff + prev_prev_diff) / 2 - prev_diff) <= prev_diff * self.threshold:
            return False
        return True

    def reset_cache(self):
        self.noise_pred = None
        self.prev_latents = []
        self.mask = []
        self.diff_list = []

    def __call__(
        # ... existing code ...
    ):
        # ... existing code ...
```

  3. Replace the denoising code.

```python
class TargetPipeline(
    # ... existing code ...
):
    # ... existing code ...

    def __call__(
        # ... existing code ...
    ):
        # ... existing code ...
        with self.progress_bar(total=num_inference_steps) as progress_bar:
            # ... existing code ...
            # original: noise_pred = self.unet(...)
            # replaced with:
            ###### estimate whether to skip steps #######
            if len(self.prev_latents) <= 3:
                # Warm-up: always run the model for the first few steps
                noise_pred = self.unet(...)[0]
                self.noise_pred = noise_pred
                if len(self.prev_latents) > 1:
                    self.diff_list.append((self.prev_latents[-1] - self.prev_latents[-2]).abs().mean())
            else:
                if self.mask[-1]:
                    noise_pred = self.unet(...)[0]
                    self.noise_pred = noise_pred
                else:
                    # Skip the model call and reuse the cached noise prediction
                    noise_pred = self.noise_pred
            # ... existing code ...
            latents = self.scheduler.step(...)[0]

            if len(self.prev_latents) >= 3:
                self.mask.append(self.estimate_skipping(latents))
            self.prev_latents.append(latents)
            # ... existing code ...
```

  4. Modify the inference code.

```python
import sys
sys.path.append('/path/to/examples/AdaptiveDiffusion')

import torch
from acceleration.sparse_pipeline import TargetPipeline as AdaptiveTargetPipeline

threshold = 0.01
max_skip_steps = 4
pipeline = AdaptiveTargetPipeline.from_pretrained(..., threshold=threshold, max_skip_steps=max_skip_steps)
pipeline.scheduler = ...  # in case you want to try other schedulers
pipeline.to("cuda")
pipeline("An image of a squirrel in Picasso style").images[0]
```
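
Because the skipping decision operates on latents, it is agnostic to the scheduler, so the `pipeline.scheduler = ...` line above can be filled in with any compatible scheduler. A minimal sketch using the standard diffusers scheduler-swap pattern (DPMSolverMultistepScheduler is just one possible choice, not one prescribed by the repo):

```python
from diffusers import DPMSolverMultistepScheduler

# Replace the pipeline's scheduler while keeping its existing configuration.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
```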

Evaluation

To evaluate the generation quality of AdaptiveDiffusion, we follow Distrifuser in measuring the similarity between the outputs of the original model and our adaptive diffusion model. After you generate all the images, you can use our scripts compute_metrics_image.py and compute_metrics_video.py to calculate PSNR, LPIPS, and FID. The usage is `python scripts/compute_metrics_image.py --input_root0 $IMAGE_ROOT0 --input_root1 $IMAGE_ROOT1`, where $IMAGE_ROOT0 and $IMAGE_ROOT1 are the paths to the two image folders you want to compare.

Evaluation on AIGCBench

For evaluation on the image-to-video generation task, we randomly select 100 samples from the validation set of AIGCBench. The sample list is provided on Huggingface. After generating all the videos with generate_video.py, you can use our script compute_metrics_video.py to calculate PSNR, LPIPS, and FVD. The usage is `python scripts/compute_metrics_video.py --input_root0 $VIDEO_ROOT0 --input_root1 $VIDEO_ROOT1`, where $VIDEO_ROOT0 and $VIDEO_ROOT1 are the paths to the two video folders you want to compare.

Demo

You can also try our demo with `cd examples/AdaptiveDiffusion && python demo.py`. Then open the URL displayed in the terminal (for example, http://127.0.0.1:7860); in the WebUI you can change the model, seed, threshold, and so on. The demo additionally requires gradio, which you can install with `pip install gradio`.



Citation

```bibtex
@inproceedings{adaptivediffusion24ye,
  author    = {Hancheng Ye and Jiakang Yuan and Renqiu Xia and Xiangchao Yan and Tao Chen and Junchi Yan and Botian Shi and Bo Zhang},
  title     = {Training-Free Adaptive Diffusion with Bounded Difference Approximation Strategy},
  booktitle = {The Thirty-Eighth Annual Conference on Neural Information Processing Systems},
  year      = {2024}
}
```

Acknowledgements

We gratefully acknowledge the authors of Distrifuser, Torchsparse, and Diffusers for their open-source code. Visit their repositories to see more of their contributions.

Owner

  • Name: Alpha-Innovator Lab
  • Login: Alpha-Innovator
  • Kind: organization

Our mission is to explore the approaches and methodologies for enabling AI-Agents to achieve Level-4 (Innovator) capabilities.

GitHub Events

Total
  • Watch event: 7
  • Push event: 1
  • Fork event: 1
Last Year
  • Watch event: 7
  • Push event: 1
  • Fork event: 1

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 17
  • Total Committers: 2
  • Avg Commits per committer: 8.5
  • Development Distribution Score (DDS): 0.235
Past Year
  • Commits: 17
  • Committers: 2
  • Avg Commits per committer: 8.5
  • Development Distribution Score (DDS): 0.235
Top Committers
Name Email Commits
Hank Ye 3****e 13
JiakangYuan j****2@g****m 4

Issues and Pull Requests

Last synced: 9 months ago

All Time
  • Total issues: 2
  • Total pull requests: 0
  • Average time to close issues: about 23 hours
  • Average time to close pull requests: N/A
  • Total issue authors: 2
  • Total pull request authors: 0
  • Average comments per issue: 2.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 2
  • Pull requests: 0
  • Average time to close issues: about 23 hours
  • Average time to close pull requests: N/A
  • Issue authors: 2
  • Pull request authors: 0
  • Average comments per issue: 2.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

docker/diffusers-doc-builder/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-flax-cpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-flax-tpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-onnxruntime-cpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-onnxruntime-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
docker/diffusers-pytorch-compile-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
docker/diffusers-pytorch-cpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-pytorch-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
docker/diffusers-pytorch-xformers-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
examples/advanced_diffusion_training/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/cogvideo/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.31.0
  • decord >=0.6.0
  • ftfy *
  • imageio-ffmpeg *
  • peft >=0.11.1
  • sentencepiece *
  • tensorboard *
  • torchvision *
  • transformers >=4.41.2
examples/consistency_distillation/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • webdataset *
examples/controlnet/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/controlnet/requirements_flax.txt pypi
  • Jinja2 *
  • datasets *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/controlnet/requirements_sd3.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/controlnet/requirements_sdxl.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/custom_diffusion/requirements.txt pypi
  • Jinja2 *
  • accelerate *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/dreambooth/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/dreambooth/requirements_flax.txt pypi
  • Jinja2 *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/dreambooth/requirements_flux.txt pypi
  • Jinja2 *
  • accelerate >=0.31.0
  • ftfy *
  • peft >=0.11.1
  • sentencepiece *
  • tensorboard *
  • torchvision *
  • transformers >=4.41.2
examples/dreambooth/requirements_sd3.txt pypi
  • Jinja2 *
  • accelerate >=0.31.0
  • ftfy *
  • peft ==0.11.1
  • sentencepiece *
  • tensorboard *
  • torchvision *
  • transformers >=4.41.2
examples/dreambooth/requirements_sdxl.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/instruct_pix2pix/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/kandinsky2_2/text_to_image/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/colossalai/requirement.txt pypi
  • Jinja2 *
  • diffusers *
  • ftfy *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers *
examples/research_projects/consistency_training/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/diffusion_dpo/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/research_projects/diffusion_orpo/requirements.txt pypi
  • accelerate *
  • datasets *
  • peft *
  • torchvision *
  • transformers *
  • wandb *
  • webdataset *
examples/research_projects/dreambooth_inpaint/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • diffusers ==0.9.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.21.0
examples/research_projects/gligen/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • diffusers *
  • fairscale *
  • ftfy *
  • scipy *
  • tensorboard *
  • timm *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/research_projects/intel_opts/textual_inversion/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • intel_extension_for_pytorch >=1.13
  • tensorboard *
  • torchvision *
  • transformers >=4.21.0
examples/research_projects/intel_opts/textual_inversion_dfq/requirements.txt pypi
  • accelerate *
  • ftfy *
  • modelcards *
  • neural-compressor *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.0
examples/research_projects/lora/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/multi_subject_dreambooth/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/multi_subject_dreambooth_inpainting/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets >=2.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb >=0.16.1
examples/research_projects/multi_token_textual_inversion/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/multi_token_textual_inversion/requirements_flax.txt pypi
  • Jinja2 *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/onnxruntime/text_to_image/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • modelcards *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/onnxruntime/textual_inversion/requirements.txt pypi
  • accelerate >=0.16.0
  • ftfy *
  • modelcards *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/onnxruntime/unconditional_image_generation/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • tensorboard *
  • torchvision *
examples/research_projects/pytorch_xla/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets >=2.19.1
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/realfill/requirements.txt pypi
  • Jinja2 ==3.1.4
  • accelerate ==0.23.0
  • diffusers ==0.20.1
  • ftfy ==6.1.1
  • peft ==0.5.0
  • tensorboard ==2.14.0
  • torch ==2.2.0
  • torchvision >=0.16
  • transformers ==4.38.0
examples/t2i_adapter/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • safetensors *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/text_to_image/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets >=2.19.1
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/text_to_image/requirements_flax.txt pypi
  • Jinja2 *
  • datasets *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/text_to_image/requirements_sdxl.txt pypi
  • Jinja2 *
  • accelerate >=0.22.0
  • datasets *
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/textual_inversion/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/textual_inversion/requirements_flax.txt pypi
  • Jinja2 *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/unconditional_image_generation/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • torchvision *
examples/vqgan/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • numpy *
  • tensorboard *
  • timm *
  • torchvision *
  • tqdm *
  • transformers >=4.25.1
examples/wuerstchen/text_to_image/requirements.txt pypi
  • accelerate >=0.16.0
  • bitsandbytes *
  • deepspeed *
  • peft >=0.6.0
  • torchvision *
  • transformers >=4.25.1
  • wandb *
pyproject.toml pypi
setup.py pypi
  • deps *