https://github.com/bytedance/infiniteyou

🔥 [ICCV 2025 Highlight] InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity


Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, scholar.google
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.8%) to scientific vocabulary

Keywords

diffusers diffusion diffusion-transformer dit face flux iccv2025 identity-preserving image-editing image-generation personalization pytorch research text-to-image
Last synced: 5 months ago

Repository

🔥 [ICCV 2025 Highlight] InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity

Basic Info
Statistics
  • Stars: 2,581
  • Watchers: 27
  • Forks: 285
  • Open Issues: 28
  • Releases: 0
Topics
diffusers diffusion diffusion-transformer dit face flux iccv2025 identity-preserving image-editing image-generation personalization pytorch research text-to-image
Created about 1 year ago · Last pushed 6 months ago
Metadata Files
Readme License

README.md

## InfiniteYou: Flexible Photo Recrafting While Preserving Your Identity

[**Liming Jiang**](https://liming-jiang.com/) · [**Qing Yan**](https://scholar.google.com/citations?user=0TIYjPAAAAAJ) · [**Yumin Jia**](https://www.linkedin.com/in/yuminjia/) · [**Zichuan Liu**](https://scholar.google.com/citations?user=-H18WY8AAAAJ) · [**Hao Kang**](https://scholar.google.com/citations?user=VeTCSyEAAAAJ) · [**Xin Lu**](https://scholar.google.com/citations?user=mFC0wp8AAAAJ)
ByteDance Intelligent Creation
**ICCV 2025 (Highlight)**

*(Teaser figure)*

Abstract: Achieving flexible and high-fidelity identity-preserved image generation remains formidable, particularly with advanced Diffusion Transformers (DiTs) like FLUX. We introduce *InfiniteYou (InfU)*, one of the earliest robust frameworks leveraging DiTs for this task. InfU addresses significant issues of existing methods, such as insufficient identity similarity, poor text-image alignment, and low generation quality and aesthetics. Central to InfU is InfuseNet, a component that injects identity features into the DiT base model via residual connections, enhancing identity similarity while maintaining generation capabilities. A multi-stage training strategy, including pretraining and supervised fine-tuning (SFT) with synthetic single-person-multiple-sample (SPMS) data, further improves text-image alignment, ameliorates image quality, and alleviates face copy-pasting. Extensive experiments demonstrate that InfU achieves state-of-the-art performance, surpassing existing baselines. In addition, the plug-and-play design of InfU ensures compatibility with various existing methods, offering a valuable contribution to the broader community.
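The core idea described above, injecting identity features into the base model through residual connections, can be sketched in miniature. The following toy example is illustrative only: `dit_block` and `project_identity` are made-up stand-ins for the frozen DiT block and the InfuseNet projection, and the real system operates on high-dimensional hidden states rather than scalars.

```python
# Toy sketch of residual identity injection (illustrative only).

def dit_block(x):
    """Stand-in for a frozen DiT transformer block."""
    return [v * 0.5 + 1.0 for v in x]

def project_identity(id_feat):
    """Stand-in for the InfuseNet projection of identity features."""
    return [v * 0.1 for v in id_feat]

def infuse_block(x, id_feat, conditioning_scale=1.0):
    """Add projected identity features to the block output as a residual,
    so the base model's generation capability is preserved while the
    output is nudged toward the target identity."""
    base = dit_block(x)
    residual = project_identity(id_feat)
    return [b + conditioning_scale * r for b, r in zip(base, residual)]

hidden = [1.0, 2.0]
identity = [5.0, 5.0]
print(infuse_block(hidden, identity))  # [2.0, 2.5]
```

Setting `conditioning_scale` to `0.0` recovers the base model's output unchanged, which is why the residual design keeps generation quality intact.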

🔥 News

  • [07/2025] 🔥 The InfiniteYou paper is selected as an ICCV 2025 Highlight.

  • [06/2025] 🔥 The InfiniteYou paper is accepted to ICCV 2025.

  • [04/2025] 🔥 The official ComfyUI node is released. Unofficial ComfyUI contributions are also appreciated.

  • [04/2025] 🔥 Quantization and offloading options are provided to reduce the memory requirements of InfiniteYou-FLUX v1.0.

  • [03/2025] 🔥 The code, model, and demo of InfiniteYou-FLUX v1.0 are released.

  • [03/2025] 🔥 The InfiniteYou project page is created.

  • [03/2025] 🔥 The InfiniteYou paper is released on arXiv.

💡 Important Usage Tips

  • We released two model variants of InfiniteYou-FLUX v1.0: aes_stage2 and sim_stage1. aes_stage2 is the model after SFT and is used by default for better text-image alignment and aesthetics. For higher ID similarity, try sim_stage1 (switch with --model_version). More details can be found in our paper.

  • To better fit specific personal needs, we find that two arguments are highly useful to adjust:
    --infusenet_conditioning_scale (default: 1.0) and --infusenet_guidance_start (default: 0.0). Usually, you may NOT need to adjust them. If necessary, start by trying a slightly larger --infusenet_guidance_start (e.g., 0.1) only (especially helpful for sim_stage1). If still not satisfactory, then try a slightly smaller --infusenet_conditioning_scale (e.g., 0.9).

  • We also provide two LoRAs (Realism and Anti-blur) for additional flexibility. If needed, try Realism first. Both are entirely optional: they are examples to experiment with and are NOT used in our paper.

  • If the generated gender does not align with your preferences, try adding specific words in the text prompt, such as 'a man', 'a woman', etc. We encourage users to use inclusive and respectful language.
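The tuning advice above can be pictured as gating when and how strongly identity conditioning is applied across the denoising schedule. The function below is a hypothetical sketch of how `--infusenet_conditioning_scale`, `--infusenet_guidance_start`, and `--infusenet_guidance_end` interact, not the repository's actual implementation:

```python
def injection_scale(progress, conditioning_scale=1.0,
                    guidance_start=0.0, guidance_end=1.0):
    """Return the identity-conditioning strength at a point in the
    denoising schedule (progress in [0, 1]). Hypothetical sketch of the
    --infusenet_* arguments; not the repository's actual code."""
    if guidance_start <= progress < guidance_end:
        return conditioning_scale
    return 0.0

# Defaults inject at full strength across the whole schedule.
print(injection_scale(0.05))                      # 1.0
# A later start (e.g. 0.1) skips the earliest, most structure-defining steps.
print(injection_scale(0.05, guidance_start=0.1))  # 0.0
```

This is why raising `--infusenet_guidance_start` slightly can loosen the identity constraint early on, while lowering `--infusenet_conditioning_scale` weakens it uniformly.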

:european_castle: Model Zoo

| InfiniteYou Version | Model Version | Base Model Trained with | Description |
| :---: | :---: | :---: | :---: |
| InfiniteYou-FLUX v1.0 | aes_stage2 | FLUX.1-dev | Stage-2 model after SFT. Better text-image alignment and aesthetics. |
| InfiniteYou-FLUX v1.0 | sim_stage1 | FLUX.1-dev | Stage-1 model before SFT. Higher identity similarity. |

🔧 Requirements and Installation

Dependencies

Simply run this one-line command to install (feel free to create a Python 3 virtual environment first):

```bash
pip install -r requirements.txt
```

Memory Requirements

  • Full-performance: The original bf16 model inference requires a peak VRAM of around 43GB.

  • Fast CPU offloading: By specifying only --cpu_offload in test.py, the peak VRAM is reduced to around 30GB with NO performance degradation.

  • 8-bit quantization: By specifying only --quantize_8bit in test.py, the peak VRAM is reduced to around 24GB with performance remaining very similar.

  • Combining fast CPU offloading and 8-bit quantization: By specifying both --cpu_offload and
    --quantize_8bit, the peak VRAM is further reduced to around 16GB with performance remaining very similar.

If you want to use our models but only have a GPU with even less VRAM, please further refer to Diffusers memory reduction tips, where some more aggressive strategies may be helpful. Community contributions are also welcome.
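The options above can be summarized as a small helper. The VRAM figures come directly from the list above; the flag-selection logic itself is a hypothetical convenience, not part of the repository:

```python
# Approximate peak VRAM (GB) per flag combination, from the list above.
PEAK_VRAM_GB = {
    (): 43,                                     # full-performance bf16
    ("--cpu_offload",): 30,
    ("--quantize_8bit",): 24,
    ("--cpu_offload", "--quantize_8bit"): 16,
}

def flags_for(vram_gb):
    """Pick the least aggressive flag set that fits the given VRAM budget
    (hypothetical helper for illustration)."""
    for flags in sorted(PEAK_VRAM_GB, key=PEAK_VRAM_GB.get, reverse=True):
        if PEAK_VRAM_GB[flags] <= vram_gb:
            return list(flags)
    return None  # below ~16 GB: see the Diffusers memory reduction tips

print(flags_for(32))  # ['--cpu_offload']
print(flags_for(20))  # ['--cpu_offload', '--quantize_8bit']
```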

⚡️ Quick Inference

Local Inference Script

```bash
python test.py --id_image ./assets/examples/man.jpg --prompt "A man, portrait, cinematic" --out_results_dir ./results
```

Explanation of all the arguments (click to expand!)

- Input and output:
  - `--id_image (str)`: The path to the input identity (ID) image. Default: `./assets/examples/man.jpg`.
  - `--prompt (str)`: The text prompt for image generation. Default: `A man, portrait, cinematic`.
  - `--out_results_dir (str)`: The path to the output directory for the generated results. Default: `./results`.
  - `--control_image (str or None)`: The path to an *optional* control image used to extract five facial keypoints to control the generation. Default: `None`.
  - `--base_model_path (str)`: The Hugging Face or local path to the base model. Default: `black-forest-labs/FLUX.1-dev`.
  - `--model_dir (str)`: The path to the InfiniteYou model directory. Default: `ByteDance/InfiniteYou`.
- Version control:
  - `--infu_flux_version (str)`: InfiniteYou-FLUX version; currently only `v1.0` is supported. Default: `v1.0`.
  - `--model_version (str)`: The model variant to use: `aes_stage2` | `sim_stage1`. Default: `aes_stage2`.
- General inference arguments:
  - `--cuda_device (int)`: The CUDA device ID to use. Default: `0`.
  - `--seed (int)`: The seed for reproducibility (0 for random). Default: `0`.
  - `--guidance_scale (float)`: The guidance scale for the diffusion process. Default: `3.5`.
  - `--num_steps (int)`: The number of inference steps. Default: `30`.
- InfiniteYou-specific arguments:
  - `--infusenet_conditioning_scale (float)`: The scale for the InfuseNet conditioning. Default: `1.0`.
  - `--infusenet_guidance_start (float)`: The start point for the InfuseNet guidance injection. Default: `0.0`.
  - `--infusenet_guidance_end (float)`: The end point for the InfuseNet guidance injection. Default: `1.0`.
- Optional LoRAs:
  - `--enable_realism_lora (store_true)`: Whether to enable the Realism LoRA. Default: `False`.
  - `--enable_anti_blur_lora (store_true)`: Whether to enable the Anti-blur LoRA. Default: `False`.
- Memory reduction options:
  - `--quantize_8bit (store_true)`: Whether to quantize the model to 8-bit format. Default: `False`.
  - `--cpu_offload (store_true)`: Whether to use fast CPU offloading. Default: `False`.
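A minimal `argparse` sketch covering a subset of the documented arguments, with defaults taken from the list above (an illustration only, not the repository's actual `test.py`):

```python
import argparse

# Illustrative subset of the documented test.py arguments.
parser = argparse.ArgumentParser(description="InfiniteYou-FLUX inference (sketch)")
parser.add_argument("--id_image", type=str, default="./assets/examples/man.jpg")
parser.add_argument("--prompt", type=str, default="A man, portrait, cinematic")
parser.add_argument("--model_version", type=str, default="aes_stage2",
                    choices=["aes_stage2", "sim_stage1"])
parser.add_argument("--guidance_scale", type=float, default=3.5)
parser.add_argument("--num_steps", type=int, default=30)
parser.add_argument("--infusenet_conditioning_scale", type=float, default=1.0)
parser.add_argument("--infusenet_guidance_start", type=float, default=0.0)
parser.add_argument("--quantize_8bit", action="store_true")
parser.add_argument("--cpu_offload", action="store_true")

# Example: switch to the higher-similarity variant and delay injection.
args = parser.parse_args(["--model_version", "sim_stage1",
                          "--infusenet_guidance_start", "0.1"])
print(args.model_version, args.infusenet_guidance_start)  # sim_stage1 0.1
```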

Local Gradio Demo

```bash
python app.py
```

Online Hugging Face Demo

We appreciate the GPU grant from the Hugging Face team. You can also try our InfiniteYou-FLUX Hugging Face demo online.

ComfyUI Nodes

🆚 Comparison with State-of-the-Art Relevant Methods

*(Comparative results figure)*

Qualitative comparison results of InfU with the state-of-the-art baselines, FLUX.1-dev IP-Adapter and PuLID-FLUX. The identity similarity and text-image alignment of the results generated by FLUX.1-dev IP-Adapter (IPA) are inadequate. PuLID-FLUX generates images with decent identity similarity. However, it suffers from poor text-image alignment (Columns 1, 2, 4), and the image quality (e.g., bad hands in Column 5) and aesthetic appeal are degraded. In addition, the face copy-paste issue of PuLID-FLUX is evident (Column 5). In comparison, the proposed InfU outperforms the baselines across all dimensions.

⚙️ Plug-and-Play Property with Off-the-Shelf Popular Approaches

*(Plug-and-play examples figure)*

InfU features a desirable plug-and-play design, compatible with many existing methods. It naturally supports base model replacement with any variants of FLUX.1-dev, such as FLUX.1-schnell for more efficient generation (e.g., in 4 steps). The compatibility with ControlNets and LoRAs provides more controllability and flexibility for customized tasks. Notably, the compatibility with OminiControl extends our potential for multi-concept personalization, such as interacted identity (ID) and object personalized generation. InfU is also compatible with IP-Adapter (IPA) for stylization of personalized images, producing decent results when injecting style references via IPA. Our plug-and-play feature may extend to even more approaches, providing valuable contributions to the broader community.

📜 Disclaimer and Licenses

The images used in this repository and related demos are sourced from consented subjects or generated by the models. These pictures are intended solely to showcase the capabilities of our research. If you have any concerns, please feel free to contact us, and we will promptly remove any inappropriate content.

The use of the released code, model, and demo must strictly adhere to the respective licenses. Our code is released under the Apache License 2.0, and our model is released under the Creative Commons Attribution-NonCommercial 4.0 International Public License for academic research purposes only. Any manual or automatic downloading of the face models from InsightFace, the FLUX.1-dev base model, LoRAs (Realism and Anti-blur), etc., must follow their original licenses and be used only for academic research purposes.

This research aims to positively impact the field of Generative AI. Any usage of this method must be responsible and comply with local laws. The developers do not assume any responsibility for any potential misuse.

🤗 Acknowledgments

We sincerely acknowledge the insightful discussions from Stathi Fotiadis, Min Jin Chong, Xiao Yang, Tiancheng Zhi, Jing Liu, and Xiaohui Shen. We genuinely appreciate the help from Jincheng Liang and Lu Guo with our user study and qualitative evaluation.

📖 Citation

If you find InfiniteYou useful for your research or applications, please cite our paper:

```bibtex
@inproceedings{jiang2025infiniteyou,
  title={{InfiniteYou}: Flexible Photo Recrafting While Preserving Your Identity},
  author={Jiang, Liming and Yan, Qing and Jia, Yumin and Liu, Zichuan and Kang, Hao and Lu, Xin},
  booktitle={ICCV},
  year={2025}
}
```

We also appreciate it if you could give a star :star: to this repository. Thanks a lot!

Owner

  • Name: Bytedance Inc.
  • Login: bytedance
  • Kind: organization
  • Location: Singapore

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 30
  • Total pull requests: 6
  • Average time to close issues: 3 days
  • Average time to close pull requests: 1 day
  • Total issue authors: 30
  • Total pull request authors: 6
  • Average comments per issue: 1.37
  • Average comments per pull request: 0.17
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 30
  • Pull requests: 6
  • Average time to close issues: 3 days
  • Average time to close pull requests: 1 day
  • Issue authors: 30
  • Pull request authors: 6
  • Average comments per issue: 1.37
  • Average comments per pull request: 0.17
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • AgustinJimenez (1)
  • mokby (1)
  • theLastWinner (1)
  • milsun (1)
  • jin1041 (1)
  • emerdem (1)
  • jmportilla-tio-magic (1)
  • Vigilence (1)
  • stormcenter (1)
  • jeerychao (1)
  • vuongminh1907 (1)
  • Tylersuard (1)
  • GraftingRayman (1)
  • justinmayer (1)
  • ronyyuan (1)
Pull Request Authors
  • petermg (1)
  • junmingF (1)
  • craigstjean (1)
  • azmenak (1)
  • dandinu (1)
  • GoGoPen (1)