onediff
OneDiff: An out-of-the-box acceleration library for diffusion models.
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (9.4%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
OneDiff: An out-of-the-box acceleration library for diffusion models.
Basic Info
- Host: GitHub
- Owner: siliconflow
- License: apache-2.0
- Language: Jupyter Notebook
- Default Branch: main
- Homepage: https://github.com/siliconflow/onediff/wiki
- Size: 114 MB
Statistics
- Stars: 1,927
- Watchers: 41
- Forks: 124
- Open Issues: 129
- Releases: 10
Topics
Metadata Files
README.md
| Documentation | Community | Contribution | Discord |
onediff is an out-of-the-box acceleration library for diffusion models. It provides:
- Out-of-the-box acceleration for popular UIs/libs (such as HF diffusers and ComfyUI)
- PyTorch code compilation tools and strongly optimized GPU kernels for diffusion models
News
- [2024/07/23] :rocket: Up to 1.7x Speedup for Kolors: Kolors Acceleration Report
- [2024/06/18] :rocket: Acceleration for DiT models: SD3 Acceleration Report, PixArt Acceleration Report, and Latte Acceleration Report
- [2024/04/13] :rocket: OneDiff 1.0 is released (Acceleration of SD & SVD with one line of code)
- [2024/01/12] :rocket: Accelerating Stable Video Diffusion 3x faster with OneDiff DeepCache + Int8
- [2023/12/19] :rocket: Accelerating SDXL 3x faster with DeepCache and OneDiff
Hiring
We're hiring! If you are interested in working on onediff at SiliconFlow, we have roles open for Interns and Engineers in Beijing (near Tsinghua University).
If you have contributed significantly to open-source software and are interested in remote work, you can contact us at talent@siliconflow.cn with onediff in the email title.
Documentation
onediff is the abbreviation of "one line of code to accelerate diffusion models".
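The "one line of code" idea can be sketched as follows. This is a hedged illustration: `oneflow_compile` is named later in this README, but the exact import path and the pipeline wiring around it are assumptions to verify against your installed onediff version.

```python
# A minimal sketch of the "one line of code" pattern. The import path of
# oneflow_compile and the pipeline setup below are assumptions; check them
# against your onediff version.

def accelerate(pipe, compile_fn):
    """The 'one line': replace the pipeline's UNet with a compiled version."""
    pipe.unet = compile_fn(pipe.unet)
    return pipe

RUN_DEMO = False  # set True on a machine with a CUDA GPU, diffusers, onediff
if RUN_DEMO:
    import torch
    from diffusers import StableDiffusionXLPipeline
    from onediff.infer_compiler import oneflow_compile

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe = accelerate(pipe, oneflow_compile)  # swap in the compiled UNet
    image = pipe(prompt="a photo of a cat", num_inference_steps=30).images[0]
```

The first call after compilation triggers graph building, so expect a one-time warmup cost before the speedup shows.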
Use with HF diffusers and ComfyUI
Performance comparison

SDXL E2E time
- Model stabilityai/stable-diffusion-xl-base-1.0;
- Image size 1024*1024, batch size 1, steps 30;
- NVIDIA A100 80G SXM4;

SVD E2E time
- Model stabilityai/stable-video-diffusion-img2vid-xt;
- Image size 576*1024, batch size 1, steps 25, decoder chunk size 5;
- NVIDIA A100 80G SXM4;

Note: as of Feb 29, 2024, we had not found a way to run SVD with TensorRT.
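For readers who want to reproduce end-to-end timings like the ones above, a generic timing harness might look like this. It is an illustrative sketch, not part of onediff; the warmup and iteration counts are arbitrary choices.

```python
import time

def e2e_seconds(run, warmup=1, iters=3):
    """Average wall-clock seconds per call, after warmup calls that absorb
    one-time costs such as graph compilation."""
    for _ in range(warmup):
        run()
    start = time.perf_counter()
    for _ in range(iters):
        run()
    return (time.perf_counter() - start) / iters

# Hypothetical usage with a diffusers pipeline:
#   e2e_seconds(lambda: pipe(prompt, num_inference_steps=30))
```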
Quality Evaluation
We also maintain a repository for benchmarking the quality of generation after acceleration: odeval
Community and Support
- Create an issue
- Chat in Discord:
- Community and Feedback
Installation
0. OS and GPU Compatibility
- Linux
- If you want to use onediff on Windows, please use it under WSL.
- See the guide to install onediff in WSL2.
- NVIDIA GPUs
1. Install torch and diffusers
Note: you can choose later versions of diffusers or transformers if you wish.
bash
python3 -m pip install "torch" "transformers==4.27.1" "diffusers[torch]==0.19.3"
2. Install a compiler backend
Choose one compiler backend: OneFlow or Nexfort. Both are optional, and only one is needed.
For DiT-structured models or H100 devices, Nexfort is recommended.
For all other cases, OneFlow is recommended. Note that optimizations within OneFlow will gradually transition to Nexfort in the future.
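The recommendation above can be codified in a small helper. This is illustrative only, not part of onediff; the strings "nexfort" and "oneflow" are labels, not onediff API values.

```python
# Illustrative helper codifying the backend guidance above (an assumption,
# not onediff API): Nexfort for DiT-structured models or H100 devices,
# OneFlow for everything else.
def pick_backend(model_family: str, gpu: str) -> str:
    if model_family.lower() == "dit" or "h100" in gpu.lower():
        return "nexfort"
    return "oneflow"
```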
Nexfort
Installing Nexfort is optional. A detailed introduction to Nexfort is available here.
bash
python3 -m pip install -U torch==2.3.0 torchvision==0.18.0 torchaudio==2.3.0 torchao==0.1
python3 -m pip install -U nexfort
OneFlow
Installing OneFlow is optional.
NOTE: We update OneFlow frequently for onediff, so please install OneFlow from the links below.
- CUDA 11.8
For NA/EU users
bash
python3 -m pip install -U --pre oneflow -f https://github.com/siliconflow/oneflow_releases/releases/expanded_assets/community_cu118
For CN users
bash
python3 -m pip install -U --pre oneflow -f https://oneflow-pro.oss-cn-beijing.aliyuncs.com/branch/community/cu118
Click to get OneFlow packages for other CUDA versions.
- CUDA 12.1
For NA/EU users
bash
python3 -m pip install -U --pre oneflow -f https://github.com/siliconflow/oneflow_releases/releases/expanded_assets/community_cu122
For CN users
bash
python3 -m pip install -U --pre oneflow -f https://oneflow-pro.oss-cn-beijing.aliyuncs.com/branch/community/cu122
- CUDA 12.2
For NA/EU users
bash
python3 -m pip install -U --pre oneflow -f https://github.com/siliconflow/oneflow_releases/releases/expanded_assets/community_cu122
For CN users
bash
python3 -m pip install -U --pre oneflow -f https://oneflow-pro.oss-cn-beijing.aliyuncs.com/branch/community/cu122
3. Install onediff
- From PyPI
bash
python3 -m pip install --pre onediff
- From source
bash
git clone https://github.com/siliconflow/onediff.git
cd onediff && python3 -m pip install -e .
Or install for development:
bash
# install for dev
cd onediff && python3 -m pip install -e '.[dev]'
# code formatting and linting
pip3 install pre-commit
pre-commit install
pre-commit run --all-files
NOTE: If you intend to use plugins for ComfyUI/StableDiffusion-WebUI, we highly recommend installing OneDiff from source rather than PyPI. This is necessary because you'll need to manually copy (or create a soft link for) the relevant code into the extension folder of these UIs/libs.
More about onediff
Architecture

Features
| Functionality | Details |
|----------------|----------------------------|
| Compiling Time | About 1 minute (SDXL) |
| Deployment Methods | Plug and Play |
| Dynamic Image Size Support | Supported with no overhead |
| Model Support | SD1.5~2.1, SDXL, SDXL Turbo, etc. |
| Algorithm Support | SD standard workflow, LoRA, ControlNet, SVD, InstantID, SDXL Lightning, etc. |
| SD Framework Support | ComfyUI, Diffusers, SD-webui |
| Save & Load Accelerated Models | Yes |
| Time of LoRA Switching | Hundreds of milliseconds |
| LoRA Occupancy | Tens of MB to hundreds of MB |
| Device Support | NVIDIA GPU RTX 3090/RTX 4090/A100/A800/A10, etc. (compatibility with Ascend in progress) |
Acceleration for State-of-the-art models
onediff supports acceleration for SOTA models.
- stable: released for public usage, with long-term support
- beta: released for professional usage, with long-term support
- alpha: early release for expert usage; use with caution
| AIGC Type | Models | HF diffusers | | ComfyUI | | SD web UI | |
| --------- | --------------------------- | ------------ | ---------- | --------- | ---------- | --------- | ---------- |
| | | Community | Enterprise | Community | Enterprise | Community | Enterprise |
| Image | SD 1.5 | stable | stable | stable | stable | stable | stable |
| | SD 2.1 | stable | stable | stable | stable | stable | stable |
| | SDXL | stable | stable | stable | stable | stable | stable |
| | LoRA | stable | | stable | | stable | |
| | ControlNet | stable | | stable | | | |
| | SDXL Turbo | stable | | stable | | | |
| | LCM | stable | | stable | | | |
| | SDXL DeepCache | alpha | alpha | alpha | alpha | | |
| | InstantID | beta | | beta | | | |
| Video | SVD (Stable Video Diffusion) | stable | stable | stable | stable | | |
| | SVD DeepCache | alpha | alpha | alpha | alpha | | |
Acceleration for production environment
PyTorch Module compilation
- Compilation with oneflow_compile

Avoid compilation time for new input shape
- Support for multi-resolution input

Avoid compilation time for online serving
Compile and save the compiled result offline, then load it online for serving.
- Save and load the compiled graph
- Compile on one device (such as device 0), then use the compiled result on other devices (such as devices 1~7). Change the device of the compiled graph to do multi-process serving.

Distributed Run
If you want to do distributed inference, you can use onediff's compiler to do single-device acceleration in a distributed inference engine such as xDiT.
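The offline-compile / online-load workflow might be sketched as follows. This is a hedged sketch: `compile_pipe`, `save_pipe`, and `load_pipe` are assumed onediffx names, and the model/paths are illustrative; verify everything against your installed version.

```python
# Hedged sketch of offline compilation followed by online loading. The
# onediffx function names (compile_pipe, save_pipe, load_pipe) and the
# model ID below are assumptions to check against your installation.

def warmup_then_save(pipe, run_once, save_fn, save_dir):
    """Offline: one warmup call triggers compilation, then persist the graph."""
    run_once(pipe)           # first call compiles the graph
    save_fn(pipe, save_dir)  # save compiled result for online serving
    return save_dir

RUN_DEMO = False  # set True on a machine with a CUDA GPU, diffusers, onediffx
if RUN_DEMO:
    import torch
    from diffusers import StableDiffusionXLPipeline
    from onediffx import compile_pipe, save_pipe, load_pipe

    pipe = compile_pipe(
        StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")
    )
    warmup_then_save(
        pipe,
        lambda p: p(prompt="warmup", num_inference_steps=1),
        save_pipe,
        "cached_pipe",
    )
    # Online: load the saved graph instead of recompiling from scratch.
    pipe = compile_pipe(
        StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")
    )
    load_pipe(pipe, "cached_pipe")
```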
OneDiff Enterprise Solution
If you need enterprise-level support for your system or business, you can email us at contact@siliconflow.com, or contact us through the website: https://siliconflow.cn/pricing

| | OneDiff Enterprise Solution |
| -------------------------------------------------------- | ------------------------------------------------ |
| More extreme compiler optimization for the diffusion process | Usually another 20%~30% or more performance gain |
| End-to-end workflow speedup solutions | Sometimes 200%~300% performance gain |
| End-to-end workflow deployment solutions | Workflow to online model API |
| Technical support for deployment | High-priority support |
Citation
bibtex
@misc{2022onediff,
author={OneDiff Contributors},
title = {OneDiff: An out-of-the-box acceleration library for diffusion models},
year = {2022},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/siliconflow/onediff}}
}
Owner
- Name: siliconflow
- Login: siliconflow
- Kind: organization
- Repositories: 1
- Profile: https://github.com/siliconflow
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Cai"
    given-names: "Shenghang"
    affiliation: SiliconFlow
    orcid: "https://orcid.org/0009-0003-7397-1203"
title: "OneDiff: An out-of-the-box acceleration library for diffusion models"
version: 1.1.0
date-released: 2024-06-07
url: "https://github.com/siliconflow/onediff"
GitHub Events
Total
- Create event: 25
- Issues event: 44
- Watch event: 270
- Delete event: 37
- Issue comment event: 96
- Push event: 120
- Gollum event: 9
- Pull request review comment event: 97
- Pull request review event: 88
- Pull request event: 48
- Fork event: 29
Last Year
- Create event: 25
- Issues event: 44
- Watch event: 270
- Delete event: 37
- Issue comment event: 96
- Push event: 120
- Gollum event: 9
- Pull request review comment event: 97
- Pull request review event: 88
- Pull request event: 48
- Fork event: 29
Committers
Last synced: 8 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Xiaoyu Xu | x****k@g****m | 178 |
| Shenghang Tsai | j****r@g****m | 85 |
| FengWen | 1****u | 77 |
| Wang Yi | 5****d | 57 |
| Li Xiang | 5****6 | 42 |
| binbinHan | h****n@1****m | 32 |
| Houjiang Chen | c****g@g****m | 26 |
| Yao Chi | l****r@u****t | 26 |
| C | i****i@g****m | 14 |
| Li Junliang | 1****G | 11 |
| Peiyuan Liu | 7****6 | 11 |
| Jianhua Zheng | z****a@o****g | 7 |
| WangXingyu | 1****7@q****m | 4 |
| Haoyang Ma | h****t@g****m | 3 |
| Miklos Nagy | m****y@g****m | 3 |
| rewbs | r****n@s****g | 3 |
| Batuhan Taskaya | b****n@p****g | 3 |
| Luyang | f****7@1****m | 2 |
| Ikko Eltociear Ashimine | e****r@g****m | 1 |
| Oleh | o****7@g****m | 1 |
| Xu Zhang | 4****9 | 1 |
| caish | 6****i | 1 |
| daquexian | d****6@g****m | 1 |
| fmk345 | 7****5 | 1 |
| jingwen bai | 3****n | 1 |
| kouyan | 1****1 | 1 |
| levi131 | 8****1 | 1 |
| rohitanshu | 8****u | 1 |
| sirouk | 8****k | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 340
- Total pull requests: 481
- Average time to close issues: 29 days
- Average time to close pull requests: 14 days
- Total issue authors: 197
- Total pull request authors: 32
- Average comments per issue: 3.79
- Average comments per pull request: 0.53
- Merged pull requests: 379
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 54
- Pull requests: 34
- Average time to close issues: 15 days
- Average time to close pull requests: 11 days
- Issue authors: 48
- Pull request authors: 9
- Average comments per issue: 0.59
- Average comments per pull request: 1.41
- Merged pull requests: 26
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- strint (18)
- zhangvia (11)
- isidentical (9)
- onefish51 (9)
- jackalcooper (8)
- cjt222 (7)
- Amber0130 (6)
- Shiroha-Key (6)
- CuddleSabe (5)
- HydrogenQAQ (5)
- lovejing0306 (4)
- lss15151161 (4)
- songh11 (4)
- lijunliangTG (4)
- xiangcp (3)
Pull Request Authors
- ccssu (148)
- strint (124)
- marigoold (117)
- jackalcooper (100)
- lixiang007666 (88)
- clackhan (52)
- hjchen2 (40)
- doombeaker (35)
- lijunliangTG (35)
- chengzeyi (29)
- nono-Sang (13)
- fpzh2011 (8)
- isidentical (7)
- XuZhang99 (6)
- fmk345 (4)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: pypi 2,669 last-month
- Total docker downloads: 924
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 1 (may contain duplicates)
- Total versions: 478
- Total maintainers: 3
pypi.org: onediff
an out-of-the-box acceleration library for diffusion models
- Homepage: https://github.com/siliconflow/onediff
- Documentation: https://onediff.readthedocs.io/
- License: Apache-2.0
- Latest release: 1.2.0 (published over 1 year ago)
Rankings
Maintainers (3)
pypi.org: onediffx
onediff extensions for diffusers
- Homepage: https://github.com/siliconflow/onediff
- Documentation: https://onediffx.readthedocs.io/
- License: Apache-2.0
- Latest release: 1.2.0 (published over 1 year ago)
Rankings
Maintainers (2)
Dependencies
- actions/checkout v2 composite
- aliyun/acr-login v1 composite
- docker/build-push-action v2 composite
- docker/login-action v1 composite
- ${BASE_IMAGE} latest build
- chardet *
- diffusers ==0.19.3
- onediff *
- opencv-python ==4.8.0.76
- transformers ==4.27.1
- diffusers >=0.19.3
- onefx *
- torch *
- transformers >=4.27.1
- actions/checkout v4 composite
- aliyun/acr-login v1 composite
- ${ACR_ORG}/${MATRIX_IMAGE} latest
- ${ACR_ORG}/${SELENIUM_IMAGE} latest
- ${ACR_ORG}/${MATRIX_IMAGE} latest
- scikit-image ==0.22.0 test
- selenium ==4.15.2 test
- gitpython *