https://github.com/ai-forever/kandinsky-2

Kandinsky 2 — multilingual text2image latent diffusion model

Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (7.1%) to scientific vocabulary

Keywords

diffusion image-generation image2image inpainting ipython-notebook kandinsky outpainting text-to-image text2image

Keywords from Contributors

dalle russian russian-language transformer clip
Last synced: 5 months ago

Repository

Kandinsky 2 — multilingual text2image latent diffusion model

Basic Info
  • Host: GitHub
  • Owner: ai-forever
  • License: apache-2.0
  • Language: Jupyter Notebook
  • Default Branch: main
  • Homepage:
  • Size: 37.3 MB
Statistics
  • Stars: 2,798
  • Watchers: 47
  • Forks: 310
  • Open Issues: 83
  • Releases: 0
Topics
diffusion image-generation image2image inpainting ipython-notebook kandinsky outpainting text-to-image text2image
Created over 3 years ago · Last pushed almost 2 years ago
Metadata Files
Readme License

README.md

Kandinsky 2.2

Open In Colab — Inference example

Open In Colab — Fine-tuning with LoRA

Description:

Kandinsky 2.2 brings substantial improvements over its predecessor, Kandinsky 2.1, by introducing a new, more powerful image encoder, CLIP-ViT-G, and support for ControlNet.

The switch to CLIP-ViT-G as the image encoder significantly improves the model's ability to generate aesthetically pleasing images and to understand text, enhancing its overall performance.

The addition of the ControlNet mechanism allows the model to effectively control the process of generating images. This leads to more accurate and visually appealing outputs and opens new possibilities for text-guided image manipulation.

Architecture details:

  • Text encoder (XLM-Roberta-Large-Vit-L-14) - 560M
  • Diffusion Image Prior — 1B
  • CLIP image encoder (ViT-bigG-14-laion2B-39B-b160k) - 1.8B
  • Latent Diffusion U-Net - 1.22B
  • MoVQ encoder/decoder - 67M

Checkpoints:

  • Prior: A prior diffusion model mapping text embeddings to image embeddings
  • Text-to-Image / Image-to-Image: A decoding diffusion model mapping image embeddings to images
  • Inpainting: A decoding diffusion model mapping image embeddings and masked images to images
  • ControlNet-depth: A decoding diffusion model mapping image embeddings and an additional depth condition to images

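The prior/decoder split above can be illustrated with stub functions. This is a shape-level sketch only: the stub bodies, the 1280-dimensional image embedding (ViT-bigG's embedding size), and the 1024-dimensional text embedding are illustrative assumptions, not the real diffusion models.

```python
import numpy as np

# Shape-level sketch of the two-stage pipeline described above: the prior
# maps a text embedding to an image embedding, and a decoder maps the image
# embedding to pixels. Both bodies are stand-ins, not real diffusion models.
EMB_DIM = 1280  # ViT-bigG image-embedding size (assumed)

rng = np.random.default_rng(0)

def prior(text_emb):
    """Stub prior: text embedding -> image embedding."""
    proj = rng.normal(size=(text_emb.shape[-1], EMB_DIM)) / text_emb.shape[-1] ** 0.5
    return text_emb @ proj

def decoder(image_emb, h, w):
    """Stub decoder: image embedding -> RGB image in [0, 1]."""
    return np.full((h, w, 3), 0.5, dtype=np.float32)

text_emb = rng.normal(size=(1, 1024))   # text-encoder output (illustrative size)
image_emb = prior(text_emb)             # (1, 1280)
image = decoder(image_emb, h=64, w=64)  # (64, 64, 3)
print(image_emb.shape, image.shape)
```

The inpainting and ControlNet-depth checkpoints follow the same pattern, with the masked image or depth map passed to the decoder as an extra condition.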
Inference regimes

How to use:

Check our Jupyter notebooks with examples in the ./notebooks folder

1. text2image

```python
from kandinsky2 import get_kandinsky2

model = get_kandinsky2('cuda', task_type='text2img', model_version='2.2')
images = model.generate_text2img(
    "red cat, 4k photo",
    decoder_steps=50,
    batch_size=1,
    h=1024,
    w=768,
)
```

Kandinsky 2.1

Framework: PyTorch Huggingface space Open In Colab

Habr post

Demo

pip install "git+https://github.com/ai-forever/Kandinsky-2.git"

Model architecture:

Kandinsky 2.1 inherits best practices from DALL-E 2 and latent diffusion, while introducing some new ideas.

As its text and image encoder it uses the CLIP model, with a diffusion image prior mapping between the latent spaces of the CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.

For the diffusion mapping of latent spaces we use a transformer with num_layers=20, num_heads=32 and hidden_size=2048.
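With those hyperparameters, the ~1B figure quoted below for the Diffusion Image Prior checks out. A back-of-the-envelope count, assuming a standard transformer layer with a 4x feed-forward expansion (our assumption, not a stated detail; biases and layer norms are ignored):

```python
# Rough parameter count for a transformer with the prior's hyperparameters.
d, n_layers = 2048, 20
attn = 4 * d * d             # Q, K, V and output projections
ffn = 2 * d * (4 * d)        # two linear layers, 4x expansion (assumed)
per_layer = attn + ffn       # ~50.3M per layer
total = n_layers * per_layer
print(f"{total / 1e9:.2f}B")  # ≈ 1.01B, consistent with the 1B figure
```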

Other architecture parts:

  • Text encoder (XLM-Roberta-Large-Vit-L-14) - 560M
  • Diffusion Image Prior — 1B
  • CLIP image encoder (ViT-L/14) - 427M
  • Latent Diffusion U-Net - 1.22B
  • MoVQ encoder/decoder - 67M

Kandinsky 2.1 was trained on a large-scale image-text dataset LAION HighRes and fine-tuned on our internal datasets.

How to use:

Check our Jupyter notebooks with examples in the ./notebooks folder

1. text2image

```python
from kandinsky2 import get_kandinsky2

model = get_kandinsky2('cuda', task_type='text2img', model_version='2.1', use_flash_attention=False)
images = model.generate_text2img(
    "red cat, 4k photo",
    num_steps=100,
    batch_size=1,
    guidance_scale=4,
    h=768,
    w=768,
    sampler='p_sampler',
    prior_cf_scale=4,
    prior_steps="5",
)
```

prompt: "Einstein in space around the logarithm scheme"

2. image fuse

```python
from kandinsky2 import get_kandinsky2
from PIL import Image

model = get_kandinsky2('cuda', task_type='text2img', model_version='2.1', use_flash_attention=False)
images_texts = ['red cat', Image.open('img1.jpg'), Image.open('img2.jpg'), 'a man']
weights = [0.25, 0.25, 0.25, 0.25]
images = model.mix_images(
    images_texts,
    weights,
    num_steps=150,
    batch_size=1,
    guidance_scale=5,
    h=768,
    w=768,
    sampler='p_sampler',
    prior_cf_scale=4,
    prior_steps="5",
)
```
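Conceptually, mix_images blends the CLIP embeddings of the text and image inputs using the given weights before decoding. A minimal sketch of such a weighted blend; the unit-normalization step and the 768-dimensional embedding size are illustrative assumptions, not documented details of kandinsky2:

```python
import numpy as np

def mix_embeddings(embs, weights):
    """Weighted blend of CLIP-style embeddings (illustrative sketch)."""
    embs = np.stack([e / np.linalg.norm(e) for e in embs])  # unit-normalize (assumed)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                                         # make weights sum to 1
    return (w[:, None] * embs).sum(axis=0)

rng = np.random.default_rng(0)
embs = [rng.normal(size=768) for _ in range(4)]  # four inputs, as in the example above
mixed = mix_embeddings(embs, [0.25, 0.25, 0.25, 0.25])
print(mixed.shape)  # (768,)
```

With equal weights the result is simply the mean of the normalized embeddings; unequal weights pull the generation toward the more heavily weighted inputs.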

3. inpainting

```python
from kandinsky2 import get_kandinsky2
from PIL import Image
import numpy as np

model = get_kandinsky2('cuda', task_type='inpainting', model_version='2.1', use_flash_attention=False)
init_image = Image.open('img.jpg')
mask = np.ones((768, 768), dtype=np.float32)
mask[:, :550] = 0
images = model.generate_inpainting(
    'man 4k photo',
    init_image,
    mask,
    num_steps=150,
    batch_size=1,
    guidance_scale=5,
    h=768,
    w=768,
    sampler='p_sampler',
    prior_cf_scale=4,
    prior_steps="5",
)
```

Kandinsky 2.0

Framework: PyTorch Huggingface space Open In Colab

Habr post

Demo

pip install "git+https://github.com/ai-forever/Kandinsky-2.git"

Model architecture:

It is a latent diffusion model with two multilingual text encoders:

  • mCLIP-XLMR (560M parameters)
  • mT5-encoder-small (146M parameters)

These encoders and multilingual training datasets enable a truly multilingual text-to-image generation experience!

Kandinsky 2.0 was trained on a large multilingual set of 1B samples, including those we used to train Kandinsky.

For its diffusion architecture, Kandinsky 2.0 uses a U-Net with 1.2B parameters.

Kandinsky 2.0 architecture overview:

How to use:

Check our Jupyter notebooks with examples in the ./notebooks folder

1. text2img

```python
from kandinsky2 import get_kandinsky2

model = get_kandinsky2('cuda', task_type='text2img')
images = model.generate_text2img(
    'A teddy bear на красной площади',
    batch_size=4,
    h=512,
    w=512,
    num_steps=75,
    denoised_type='dynamic_threshold',
    dynamic_threshold_v=99.5,
    sampler='ddim_sampler',
    ddim_eta=0.05,
    guidance_scale=10,
)
```

prompt: "A teddy bear на красной площади" ("A teddy bear on Red Square")
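The denoised_type='dynamic_threshold' and dynamic_threshold_v=99.5 options appear to refer to dynamic thresholding of the predicted denoised sample: clip at the given percentile of absolute values, then rescale into [-1, 1]. A minimal numpy sketch of the technique; the exact implementation inside kandinsky2 may differ:

```python
import numpy as np

def dynamic_threshold(x0, percentile=99.5):
    """Clip the predicted denoised sample at the given percentile of |x0|,
    then rescale into [-1, 1] (dynamic thresholding)."""
    s = np.percentile(np.abs(x0), percentile)
    s = max(s, 1.0)  # never amplify values already within [-1, 1]
    return np.clip(x0, -s, s) / s

x0 = np.random.default_rng(0).normal(scale=2.0, size=(4, 64, 64))
out = dynamic_threshold(x0, percentile=99.5)
print(out.min() >= -1.0 and out.max() <= 1.0)  # True
```

Compared with a fixed clip to [-1, 1], this keeps high-guidance-scale samples from saturating, at the cost of globally rescaling the prediction.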

2. inpainting

```python
from kandinsky2 import get_kandinsky2
from PIL import Image
import numpy as np

model = get_kandinsky2('cuda', task_type='inpainting')
init_image = Image.open('image.jpg')
mask = np.ones((512, 512), dtype=np.float32)
mask[100:] = 0
images = model.generate_inpainting(
    'Девушка в красном платье',
    init_image,
    mask,
    num_steps=50,
    denoised_type='dynamic_threshold',
    dynamic_threshold_v=99.5,
    sampler='ddim_sampler',
    ddim_eta=0.05,
    guidance_scale=10,
)
```

prompt: "Девушка в красном платье" ("Girl in a red dress")

3. img2img

```python
from kandinsky2 import get_kandinsky2
from PIL import Image

model = get_kandinsky2('cuda', task_type='img2img')
init_image = Image.open('image.jpg')
images = model.generate_img2img(
    'кошка',  # "cat"
    init_image,
    strength=0.8,
    num_steps=50,
    denoised_type='dynamic_threshold',
    dynamic_threshold_v=99.5,
    sampler='ddim_sampler',
    ddim_eta=0.05,
    guidance_scale=10,
)
```

Authors

Owner

  • Name: AI Forever
  • Login: ai-forever
  • Kind: organization
  • Location: Armenia

Creating ML for the future. AI projects you already know. We are a non-profit organization with members from all over the world.

GitHub Events

Total
  • Issues event: 4
  • Watch event: 81
  • Fork event: 15
Last Year
  • Issues event: 4
  • Watch event: 81
  • Fork event: 15

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 154
  • Total Committers: 7
  • Avg Commits per committer: 22.0
  • Development Distribution Score (DDS): 0.558
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Anton Razzhigaev 4****t 68
Shahmatov Arseniy 6****5 67
boomb0om i****1@g****m 6
Andrey Kuznetsov k****y@g****m 6
Denis d****v@g****m 5
Alex Wortega a****h@g****m 1
Anton a****n@M****l 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 90
  • Total pull requests: 15
  • Average time to close issues: 2 months
  • Average time to close pull requests: about 2 months
  • Total issue authors: 76
  • Total pull request authors: 14
  • Average comments per issue: 1.31
  • Average comments per pull request: 1.4
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 3
  • Pull requests: 1
  • Average time to close issues: 6 minutes
  • Average time to close pull requests: N/A
  • Issue authors: 3
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • erelsgl (7)
  • chuck-ma (3)
  • MrlolDev (2)
  • dokluch (2)
  • ValkerN (2)
  • FurkanGozukara (2)
  • katze42 (2)
  • MichaelMonashev (2)
  • samitkumarsinha (1)
  • WASasquatch (1)
  • liangwq (1)
  • itsadarshms (1)
  • hanxiao (1)
  • crapthings (1)
  • Anton19780301 (1)
Pull Request Authors
  • gkamtzir (2)
  • MichaelMonashev (2)
  • kuznetsoffandrey (1)
  • ahmad88me (1)
  • eltociear (1)
  • XmYx (1)
  • chenxwh (1)
  • FurkanGozukara (1)
  • maximxlss (1)
  • AlexWortega (1)
  • mrsobakin (1)
  • boomb0om (1)
  • WojtekKowaluk (1)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

setup.py pypi
  • Pillow *
  • attrs *
  • blobfile *
  • einops *
  • filelock *
  • ftfy *
  • numpy *
  • omegaconf *
  • pytorch_lightning *
  • regex *
  • requests *
  • sentencepiece *
  • torch *
  • torchvision *
  • tqdm *
  • transformers ==4.23.1