Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.3%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: anonymousopenscience
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 3.66 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 2 years ago · Last pushed almost 2 years ago
Metadata Files
Readme Contributing License Code of conduct Citation

README.md

FairDiffusion

Despite the strong performance of modern generative models, it remains an open question whether image-generation quality is consistent across demographic subgroups. To address such biases, we introduce FairDiffusion, an equity-aware latent diffusion model that enhances fairness in medical image generation via Fair Bayesian Perturbation.

Requirements

To install the prerequisites, run one of:

```
conda env create -f environment.yml
pip install -r requirements.txt
```

FairGenMed Dataset

We present FairGenMed, the first dataset for studying fairness of medical generative models, providing detailed quantitative measurements of multiple clinical conditions to investigate the semantic correlation between text prompts and anatomical regions across various demographic subgroups.

The dataset can be accessed via this link. It may be used only for non-commercial research purposes; at no time shall the dataset be used for clinical decisions or patient care. The data use license is CC BY-NC-ND 4.0.

Our dataset includes 10,000 subjects for glaucoma detection, with demographic attributes covering age, gender, race, ethnicity, preferred language, and marital status. Each subject has one Scanning Laser Ophthalmoscopy (SLO) fundus photo of size 512 x 664 and one NPZ file.

The NPZ files have the following attributes:
  • glaucoma: glaucoma disease label; 0 - non-glaucoma, 1 - glaucoma
  • oct_bscans: images of OCT B-scans
  • race: 0 - Asian, 1 - Black, 2 - White
  • male: 0 - Female, 1 - Male
  • hispanic: 0 - Non-Hispanic, 1 - Hispanic
  • maritalstatus: 0 - Married or Partnered, 1 - Single, 2 - Divorced, 3 - Widowed, 4 - Legally Separated, -1 - Unknown
  • language: 0 - English, 1 - Spanish, 2 - Others
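As a quick sanity check, the per-subject NPZ files can be read with NumPy and the coded attributes decoded with the mappings above. The snippet below builds a tiny synthetic file in memory so it is self-contained; the array shapes are illustrative, not the dataset's actual ones:

```python
import io
import numpy as np

# Decoding tables taken from the attribute description above.
RACE = {0: "Asian", 1: "Black", 2: "White"}
GENDER = {0: "Female", 1: "Male"}

# Synthetic stand-in for one subject's NPZ file (shapes illustrative only).
buf = io.BytesIO()
np.savez(buf,
         glaucoma=np.array(1),
         oct_bscans=np.zeros((4, 64, 64), dtype=np.uint8),
         race=np.array(2),
         male=np.array(0))
buf.seek(0)

data = np.load(buf)
print(int(data["glaucoma"]))      # 1 -> glaucoma
print(RACE[int(data["race"])])    # White
print(GENDER[int(data["male"])])  # Female
```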

We have compiled all clinical measurements related to the 10,000 samples into a meta CSV file named data_summary.csv. Specifically, the cup-disc ratio, severity of vision loss, and status of spherical equivalent are given by the columns 'cdr_status', 'md_severity', and 'se_status', respectively, in the data_summary.csv file.
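The clinical labels in the summary CSV can then be sliced per demographic group, for example with pandas. The rows below are made up for illustration, and the column spellings (cdr_status, md_severity, se_status) are assumed:

```python
import pandas as pd

# Made-up rows mimicking data_summary.csv; all values are placeholders.
df = pd.DataFrame({
    "race":        [0, 0, 1, 2, 2],
    "cdr_status":  [0, 1, 1, 0, 2],
    "md_severity": [0, 2, 1, 0, 2],
    "se_status":   [1, 0, 1, 1, 0],
})

# Distribution of vision-loss severity within each racial group --
# the kind of per-subgroup breakdown fairness analyses start from.
severity_by_race = df.groupby("race")["md_severity"].value_counts()
print(severity_by_race)
```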

Experiments

Train Stable Diffusion

```
cd examples/text_to_image
export MODEL_NAME="stabilityai/stable-diffusion-2-1"
accelerate launch --main_process_port 29601 --multi_gpu --num_processes 2 --mixed_precision="fp16" \
  train_text_to_image.py --text_encoder_type clip --dataset_dir <DATASET_DIR> --checkpointing_steps 5000 \
  --pretrained_model_name_or_path=$MODEL_NAME --train_data_dir tmpp --datasets glaucoma \
  --race_prompt --gender_prompt --ethnicity_prompt --use_ema --resolution=512 --center_crop --random_flip \
  --train_batch_size=16 --gradient_accumulation_steps=1 --gradient_checkpointing --max_train_steps=100000 \
  --learning_rate=1e-05 --max_grad_norm=1 --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir=<OUTPUT_DIR> --cache_dir <CACHE_DIR>
```

Train FairDiffusion

```
cd examples/text_to_image
export MODEL_NAME="stabilityai/fair-diffusion-2-1"
export TIME_WINDOW=30
export EXPLOITATION=0.95
accelerate launch --main_process_port 29601 --multi_gpu --num_processes 2 --mixed_precision="fp16" \
  train_text_to_image.py --text_encoder_type clip --dataset_dir <DATASET_DIR> --checkpointing_steps 5000 \
  --pretrained_model_name_or_path=$MODEL_NAME --train_data_dir tmpp --datasets glaucoma \
  --race_prompt --gender_prompt --ethnicity_prompt --use_ema --resolution=512 --center_crop --random_flip \
  --train_batch_size=16 --gradient_accumulation_steps=1 --gradient_checkpointing --max_train_steps=100000 \
  --learning_rate=1e-05 --max_grad_norm=1 --lr_scheduler="constant" --lr_warmup_steps=0 \
  --output_dir=<OUTPUT_DIR> --cache_dir <CACHE_DIR> \
  --fair_time_window ${TIME_WINDOW} --fair_exploitation_rate ${EXPLOITATION}
```
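The README does not document what --fair_time_window and --fair_exploitation_rate control internally. Purely as an illustration of the exploitation/exploration trade-off those flag names suggest, here is a hypothetical sampler that, with probability equal to the exploitation rate, focuses on the subgroup with the worst mean loss over a sliding window, and otherwise explores uniformly. This is an assumption for intuition only, not the repository's actual Fair Bayesian Perturbation logic:

```python
import random
from collections import defaultdict, deque

class GroupSampler:
    """Hypothetical sketch: exploit the worst-served subgroup with
    probability `exploitation`, otherwise explore uniformly."""

    def __init__(self, groups, time_window=30, exploitation=0.95, seed=0):
        self.groups = list(groups)
        self.exploitation = exploitation
        # Sliding window of recent per-group training losses.
        self.losses = defaultdict(lambda: deque(maxlen=time_window))
        self.rng = random.Random(seed)

    def update(self, group, loss):
        self.losses[group].append(loss)

    def next_group(self):
        means = {g: sum(w) / len(w) for g, w in self.losses.items() if w}
        if means and self.rng.random() < self.exploitation:
            return max(means, key=means.get)  # group with highest mean loss
        return self.rng.choice(self.groups)   # uniform exploration

sampler = GroupSampler(["Asian", "Black", "White"])
sampler.update("Black", 0.9)
sampler.update("White", 0.2)
print(sampler.next_group())
```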

Visualize Diffusion Model (Generation)

```
python visualize_fairdiffusion.py --dataset_dir <DATASET_DIR> --initial_model stabilityai/stable-diffusion-2-1 \
  --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet --datasets glaucoma \
  --race_prompt --gender_prompt --ethnicity_prompt \
  --vis_dir <VIS_DIR> --prompts prompts.txt --repeat_prompt 5
```

Evaluate Fairness of Diffusion Model (Generation)

```
python evaluate_fairdiffusion.py --metrics_calculation_idx 1 --dataset_dir <DATASET_DIR> \
  --initial_model stabilityai/stable-diffusion-2-1 --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet \
  --datasets glaucoma --race_prompt --gender_prompt --ethnicity_prompt > OUTPUT.txt
```
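The exact metrics that evaluate_fairdiffusion.py reports are not spelled out here. A common way to summarize subgroup fairness, shown below as a generic sketch with placeholder numbers, is to compute a quality metric per demographic group and report the spread between the best- and worst-served groups:

```python
def group_gap(per_group_metric):
    """Worst-case spread of a per-group quality metric."""
    vals = list(per_group_metric.values())
    return max(vals) - min(vals)

# Placeholder per-group scores (e.g. a downstream metric per race group).
per_group_auc = {"Asian": 0.81, "Black": 0.74, "White": 0.83}
print(round(group_gap(per_group_auc), 4))  # spread between best and worst
```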

Evaluate Fairness of Diffusion Model (Classification) - ViT-B

```
cd classification_codebase
DATASET_DIR=
RESULT_DIR=
MODEL_TYPE=ViT-B
MODALITY_TYPE=slo_fundus

VIT_WEIGHTS=imagenet
BATCH_SIZE=64
BLR=5e-4
WD=0.01
LD=0.55
DP=0.1

# Baselines

EXP_NAME=BASELINE_GLAUCOMA
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task cls --epochs 50 --batch_size ${BATCH_SIZE} --blr ${BLR} --min_lr 1e-6 --warmup_epochs 5 --weight_decay ${WD} --layer_decay ${LD} --drop_path ${DP} --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE} --vit_weights ${VIT_WEIGHTS}

EXP_NAME=BASELINE_MD_SEVERITY
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task md_severity --epochs 50 --batch_size ${BATCH_SIZE} --blr ${BLR} --min_lr 1e-6 --warmup_epochs 5 --weight_decay ${WD} --layer_decay ${LD} --drop_path ${DP} --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE} --vit_weights ${VIT_WEIGHTS}

EXP_NAME=BASELINE_CDR_STATUS
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task cdr_status --epochs 50 --batch_size ${BATCH_SIZE} --blr ${BLR} --min_lr 1e-6 --warmup_epochs 5 --weight_decay ${WD} --layer_decay ${LD} --drop_path ${DP} --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE} --vit_weights ${VIT_WEIGHTS}

EXP_NAME=BASELINE_SE_STATUS
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task se_status --epochs 50 --batch_size ${BATCH_SIZE} --blr ${BLR} --min_lr 1e-6 --warmup_epochs 5 --weight_decay ${WD} --layer_decay ${LD} --drop_path ${DP} --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE} --vit_weights ${VIT_WEIGHTS}

# Generated Dataset

EXP_NAME=FAIRDIFFUSION_GLAUCOMA
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task cls --fairdiffusion --initial_model stabilityai/stable-diffusion-2-1 --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet --epochs 50 --batch_size ${BATCH_SIZE} --blr ${BLR} --min_lr 1e-6 --warmup_epochs 5 --weight_decay ${WD} --layer_decay ${LD} --drop_path ${DP} --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE} --vit_weights ${VIT_WEIGHTS}

EXP_NAME=FAIRDIFFUSION_MD_SEVERITY
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task md_severity --fairdiffusion --initial_model stabilityai/stable-diffusion-2-1 --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet --epochs 50 --batch_size ${BATCH_SIZE} --blr ${BLR} --min_lr 1e-6 --warmup_epochs 5 --weight_decay ${WD} --layer_decay ${LD} --drop_path ${DP} --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE} --vit_weights ${VIT_WEIGHTS}

EXP_NAME=FAIRDIFFUSION_CDR_STATUS
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task cdr_status --fairdiffusion --initial_model stabilityai/stable-diffusion-2-1 --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet --epochs 50 --batch_size ${BATCH_SIZE} --blr ${BLR} --min_lr 1e-6 --warmup_epochs 5 --weight_decay ${WD} --layer_decay ${LD} --drop_path ${DP} --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE} --vit_weights ${VIT_WEIGHTS}

EXP_NAME=FAIRDIFFUSION_SE_STATUS
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task se_status --fairdiffusion --initial_model stabilityai/stable-diffusion-2-1 --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet --epochs 50 --batch_size ${BATCH_SIZE} --blr ${BLR} --min_lr 1e-6 --warmup_epochs 5 --weight_decay ${WD} --layer_decay ${LD} --drop_path ${DP} --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE} --vit_weights ${VIT_WEIGHTS}
```

Evaluate Fairness of Diffusion Model (Classification) - EfficientNet

```
cd classification_codebase

DATASET_DIR=
RESULT_DIR=
MODEL_TYPE=( efficientnet )
NUM_EPOCH=10
MODALITY_TYPE='slo_fundus'
ATTRIBUTE_TYPE=race

OPTIMIZER='adamw'
OPTIMIZER_ARGUMENTS='{"lr": 0.001, "weight_decay": 0.01}'

SCHEDULER='steplr'
SCHEDULER_ARGUMENTS='{"step_size": 30, "gamma": 0.1}'

LR=1e-3

# Baselines

BATCH_SIZE=6
EXP_NAME=BASELINE_GLAUCOMA
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task cls --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --image_size 200 --lr ${LR} --weight-decay 0. --momentum 0.1 --batch_size ${BATCH_SIZE} --epochs ${NUM_EPOCH} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE}

BATCH_SIZE=6
EXP_NAME=BASELINE_MD_SEVERITY
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task md_severity --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --image_size 200 --lr ${LR} --weight-decay 0. --momentum 0.1 --batch_size ${BATCH_SIZE} --epochs ${NUM_EPOCH} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE}

BATCH_SIZE=16
EXP_NAME=BASELINE_CDR_STATUS
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task cdr_status --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --image_size 200 --lr ${LR} --weight-decay 0. --momentum 0.1 --batch_size ${BATCH_SIZE} --epochs ${NUM_EPOCH} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE}

BATCH_SIZE=16
EXP_NAME=BASELINE_SE_STATUS
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task se_status --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --image_size 200 --lr ${LR} --weight-decay 0. --momentum 0.1 --batch_size ${BATCH_SIZE} --epochs ${NUM_EPOCH} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE}

# Generated Dataset

BATCH_SIZE=6
EXP_NAME=FAIRDIFFUSION_GLAUCOMA
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task cls --fairdiffusion --initial_model stabilityai/stable-diffusion-2-1 --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --image_size 200 --lr ${LR} --weight-decay 0. --momentum 0.1 --batch_size ${BATCH_SIZE} --epochs ${NUM_EPOCH} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE}

BATCH_SIZE=6
EXP_NAME=FAIRDIFFUSION_MD_SEVERITY
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task md_severity --fairdiffusion --initial_model stabilityai/stable-diffusion-2-1 --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --image_size 200 --lr ${LR} --weight-decay 0. --momentum 0.1 --batch_size ${BATCH_SIZE} --epochs ${NUM_EPOCH} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE}

BATCH_SIZE=16
EXP_NAME=FAIRDIFFUSION_CDR_STATUS
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task cdr_status --fairdiffusion --initial_model stabilityai/stable-diffusion-2-1 --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --image_size 200 --lr ${LR} --weight-decay 0. --momentum 0.1 --batch_size ${BATCH_SIZE} --epochs ${NUM_EPOCH} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE}

BATCH_SIZE=16
EXP_NAME=FAIRDIFFUSION_SE_STATUS
PERF_FILE=${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME}.csv
python scripts/train_glaucoma_fair.py --task se_status --fairdiffusion --initial_model stabilityai/stable-diffusion-2-1 --model_path <OUTPUT_DIR>/checkpoint-<TBD>/unet --data_dir ${DATASET_DIR}/ --result_dir ${RESULT_DIR}/${MODEL_TYPE}_${MODALITY_TYPE}_${EXP_NAME} --model_type ${MODEL_TYPE} --image_size 200 --lr ${LR} --weight-decay 0. --momentum 0.1 --batch_size ${BATCH_SIZE} --epochs ${NUM_EPOCH} --modality_types ${MODALITY_TYPE} --perf_file ${PERF_FILE}
```

Citation (CITATION.cff)

cff-version: 1.2.0
title: 'Diffusers: State-of-the-art diffusion models'
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Patrick
    family-names: von Platen
  - given-names: Suraj
    family-names: Patil
  - given-names: Anton
    family-names: Lozhkov
  - given-names: Pedro
    family-names: Cuenca
  - given-names: Nathan
    family-names: Lambert
  - given-names: Kashif
    family-names: Rasul
  - given-names: Mishig
    family-names: Davaadorj
  - given-names: Thomas
    family-names: Wolf
repository-code: 'https://github.com/huggingface/diffusers'
abstract: >-
  Diffusers provides pretrained diffusion models across
  multiple modalities, such as vision and audio, and serves
  as a modular toolbox for inference and training of
  diffusion models.
keywords:
  - deep-learning
  - pytorch
  - image-generation
  - hacktoberfest
  - diffusion
  - text2image
  - image2image
  - score-based-generative-modeling
  - stable-diffusion
  - stable-diffusion-diffusers
license: Apache-2.0
version: 0.12.1


Dependencies

docker/diffusers-flax-cpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-flax-tpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-onnxruntime-cpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-onnxruntime-cuda/Dockerfile docker
  • nvidia/cuda 11.6.2-cudnn8-devel-ubuntu20.04 build
docker/diffusers-pytorch-compile-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
docker/diffusers-pytorch-cpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-pytorch-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
docker/diffusers-pytorch-xformers-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
classification_codebase/requirements.txt pypi
  • blobfile >=1.3.3
  • fairlearn >=0.9.0
  • opencv-python *
  • pandas >=2.0.3
  • scikit-image >=0.19.3
  • scikit-learn >=1.1.2
  • torch >=2.1.0
  • torchvision >=0.15.2
environment.yml pypi
  • absl-py ==1.0.0
  • accelerate ==0.26.1
  • addict ==2.4.0
  • aiohttp ==3.8.4
  • aiosignal ==1.3.1
  • albumentations ==1.3.0
  • aliyun-python-sdk-core ==2.13.36
  • aliyun-python-sdk-kms ==2.16.1
  • antlr4-python3-runtime ==4.9.3
  • astunparse ==1.6.3
  • backoff ==2.2.1
  • black ==23.7.0
  • blessed ==1.20.0
  • cachetools ==5.0.0
  • cmake ==3.27.0
  • contextlib2 ==21.6.0
  • crcmod ==1.7
  • datasets ==2.16.1
  • diffdist ==0.1
  • dill ==0.3.7
  • einops ==0.6.1
  • et-xmlfile ==1.1.0
  • exceptiongroup ==1.1.2
  • fairlearn ==0.10.0
  • filelock ==3.12.2
  • flatbuffers ==23.5.26
  • frozenlist ==1.4.0
  • fsspec ==2023.9.2
  • ftfy ==6.1.1
  • future ==0.18.3
  • fvcore ==0.1.5.post20221221
  • gast ==0.4.0
  • git-lfs ==1.6
  • google-auth ==2.22.0
  • google-auth-oauthlib ==1.0.0
  • google-pasta ==0.2.0
  • gputil ==1.4.0
  • grpcio ==1.56.0
  • h11 ==0.14.0
  • h5py ==3.9.0
  • huggingface-hub ==0.20.2
  • hydra-core ==1.3.2
  • importlib-metadata ==6.8.0
  • iopath ==0.1.9
  • ipdb ==0.13.9
  • jmespath ==0.10.0
  • joblib ==1.3.2
  • jupyter-contrib-core ==0.4.2
  • jupyter-contrib-nbextensions ==0.7.0
  • jupyter-highlight-selected-word ==0.2.0
  • jupyter-nbextensions-configurator ==0.6.3
  • keras ==2.13.1
  • kornia ==0.6.0
  • libauc ==1.1.8
  • libclang ==16.0.0
  • lightning-utilities ==0.10.1
  • lit ==16.0.6
  • loguru ==0.7.2
  • markdown ==3.3.6
  • markdown-it-py ==3.0.0
  • mdurl ==0.1.2
  • medpy ==0.4.0
  • ml-collections ==0.1.0
  • mmcv ==2.0.1
  • mmcv-full ==1.7.1
  • mmengine ==0.8.4
  • model-index ==0.1.11
  • mpmath ==1.3.0
  • multidict ==6.0.4
  • multiprocess ==0.70.15
  • mypy-extensions ==1.0.0
  • nltk ==3.8.1
  • numpy ==1.26.3
  • nvidia-cublas-cu11 ==11.10.3.66
  • nvidia-cublas-cu12 ==12.1.3.1
  • nvidia-cuda-cupti-cu11 ==11.7.101
  • nvidia-cuda-cupti-cu12 ==12.1.105
  • nvidia-cuda-nvrtc-cu11 ==11.7.99
  • nvidia-cuda-nvrtc-cu12 ==12.1.105
  • nvidia-cuda-runtime-cu11 ==11.7.99
  • nvidia-cuda-runtime-cu12 ==12.1.105
  • nvidia-cudnn-cu11 ==8.5.0.96
  • nvidia-cudnn-cu12 ==8.9.2.26
  • nvidia-cufft-cu11 ==10.9.0.58
  • nvidia-cufft-cu12 ==11.0.2.54
  • nvidia-curand-cu11 ==10.2.10.91
  • nvidia-curand-cu12 ==10.3.2.106
  • nvidia-cusolver-cu12 ==11.4.5.107
  • nvidia-cusparse-cu11 ==11.7.4.91
  • nvidia-cusparse-cu12 ==12.1.0.106
  • nvidia-nccl-cu11 ==2.14.3
  • nvidia-nccl-cu12 ==2.18.1
  • nvidia-nvjitlink-cu12 ==12.3.101
  • nvidia-nvtx-cu11 ==11.7.91
  • nvidia-nvtx-cu12 ==12.1.105
  • oauthlib ==3.2.0
  • objaverse ==0.1.5
  • omegaconf ==2.3.0
  • open-clip-torch ==2.0.2
  • opencv-python ==4.6.0.66
  • opencv-python-headless ==4.6.0.66
  • opendatalab ==0.0.10
  • openmim ==0.3.9
  • openpyxl ==3.1.2
  • openxlab ==0.0.16
  • opt-einsum ==3.3.0
  • optree ==0.9.1
  • ordered-set ==4.1.0
  • oss2 ==2.17.0
  • packaging ==23.1
  • pandas ==2.2.0
  • pathspec ==0.11.1
  • pip ==23.2.1
  • platformdirs ==3.8.1
  • portalocker ==2.7.0
  • prettytable ==3.8.0
  • protobuf ==4.23.4
  • pyarrow ==13.0.0
  • pyarrow-hotfix ==0.6
  • pyasn1 ==0.4.8
  • pyasn1-modules ==0.2.8
  • pycocotools ==2.0.6
  • pycryptodome ==3.18.0
  • pydeprecate ==0.3.1
  • pydicom ==2.3.0
  • pygments ==2.15.1
  • pyjwt ==2.8.0
  • pyre-extensions ==0.0.23
  • python-editor ==1.0.4
  • python-graphviz ==0.20.1
  • python-multipart ==0.0.6
  • pytorch-lightning ==1.4.2
  • pytz ==2023.3
  • qudida ==0.0.4
  • readchar ==4.0.5
  • regex ==2023.6.3
  • requests ==2.28.2
  • requests-oauthlib ==1.3.1
  • rich ==13.4.2
  • rsa ==4.8
  • safetensors ==0.3.3
  • scikit-learn ==1.4.0
  • scipy ==1.12.0
  • seaborn ==0.12.2
  • setuptools ==60.2.0
  • simpleitk ==2.2.1
  • sniffio ==1.3.0
  • stable-diffusion-sdkit ==2.1.3
  • sympy ==1.12
  • tabulate ==0.9.0
  • tensorboard ==2.13.0
  • tensorboard-data-server ==0.7.1
  • tensorboard-plugin-wit ==1.8.1
  • tensorboardx ==2.6.2
  • tensorflow ==2.13.0
  • tensorflow-estimator ==2.13.0
  • tensorflow-hub ==0.14.0
  • tensorflow-io-gcs-filesystem ==0.32.0
  • termcolor ==2.3.0
  • test-tube ==0.7.5
  • timm ==0.9.7
  • tokenizers ==0.15.0
  • toml ==0.10.2
  • torch ==2.1.2
  • torch-fidelity ==0.3.0
  • torchaudio ==2.1.2
  • torchmetrics ==1.3.0.post0
  • torchvision ==0.16.2
  • tqdm ==4.65.0
  • transformers ==4.36.2
  • triton ==2.1.0
  • typing-inspect ==0.9.0
  • tzdata ==2023.4
  • uvicorn ==0.23.1
  • websocket-client ==1.6.1
  • websockets ==11.0.3
  • werkzeug ==2.1.1
  • xformers ==0.0.23.post1
  • yacs ==0.1.8
  • yapf ==0.40.1
  • yarl ==1.9.2
  • zipp ==3.8.0
examples/consistency_distillation/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • webdataset *
examples/controlnet/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/controlnet/requirements_flax.txt pypi
  • Jinja2 *
  • datasets *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/controlnet/requirements_sdxl.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/custom_diffusion/requirements.txt pypi
  • Jinja2 *
  • accelerate *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/dreambooth/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/dreambooth/requirements_flax.txt pypi
  • Jinja2 *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/dreambooth/requirements_sdxl.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/instruct_pix2pix/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/kandinsky2_2/text_to_image/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/colossalai/requirement.txt pypi
  • Jinja2 *
  • diffusers *
  • ftfy *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers *
examples/research_projects/diffusion_dpo/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/research_projects/dreambooth_inpaint/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • diffusers ==0.9.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.21.0
examples/research_projects/intel_opts/textual_inversion/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • intel_extension_for_pytorch >=1.13
  • tensorboard *
  • torchvision *
  • transformers >=4.21.0
examples/research_projects/intel_opts/textual_inversion_dfq/requirements.txt pypi
  • accelerate *
  • ftfy *
  • modelcards *
  • neural-compressor *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.0
examples/research_projects/lora/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/multi_subject_dreambooth/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/multi_subject_dreambooth_inpainting/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets >=2.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb >=0.16.1
examples/research_projects/multi_token_textual_inversion/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/multi_token_textual_inversion/requirements_flax.txt pypi
  • Jinja2 *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/onnxruntime/text_to_image/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • modelcards *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/onnxruntime/textual_inversion/requirements.txt pypi
  • accelerate >=0.16.0
  • ftfy *
  • modelcards *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/onnxruntime/unconditional_image_generation/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • tensorboard *
  • torchvision *
examples/research_projects/realfill/requirements.txt pypi
  • Jinja2 ==3.1.3
  • accelerate ==0.23.0
  • diffusers ==0.20.1
  • ftfy ==6.1.1
  • peft ==0.5.0
  • tensorboard ==2.14.0
  • torch ==2.0.1
  • torchvision >=0.16
  • transformers ==4.36.0
examples/t2i_adapter/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • safetensors *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/text_to_image/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/text_to_image/requirements_flax.txt pypi
  • Jinja2 *
  • datasets *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/text_to_image/requirements_sdxl.txt pypi
  • Jinja2 *
  • accelerate >=0.22.0
  • datasets *
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/textual_inversion/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/textual_inversion/requirements_flax.txt pypi
  • Jinja2 *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/unconditional_image_generation/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • torchvision *
examples/wuerstchen/text_to_image/requirements.txt pypi
  • accelerate >=0.16.0
  • bitsandbytes *
  • deepspeed *
  • huggingface-cli *
  • peft >=0.6.0
  • torchvision *
  • transformers >=4.25.1
  • wandb *
pyproject.toml pypi
requirements.txt pypi
  • blobfile *
  • botorch *
  • datasets ==2.16.1
  • diffusers ==0.26.0
  • open_clip_torch *
  • scikit-learn ==1.4.0
  • scipy ==1.12.0
  • tensorboard ==2.13.0
  • timm ==0.4.12
  • torch *
  • torch-fidelity ==0.3.0
  • torchaudio *
  • torchmetrics ==1.3.0.post0
  • torchvision *
  • transformers ==4.36.2
setup.py pypi
  • deps *