motionaura-iclr-2025

Official source code of "MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion", published in ICLR 2025

https://github.com/candlelabai/motionaura-iclr-2025

Science Score: 54.0%

This score estimates how likely the project is to be science-related, based on the following indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (5.5%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: CandleLabAI
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 8.05 MB
Statistics
  • Stars: 2
  • Watchers: 1
  • Forks: 0
  • Open Issues: 1
  • Releases: 0
Created about 1 year ago · Last pushed about 1 year ago
Metadata Files
Readme · Contributing · License · Code of conduct · Citation

README.md

MotionAura-ICLR-2025

Official source code of "MotionAura: Generating High-Quality and Motion Consistent Videos using Discrete Diffusion", published in ICLR 2025 (Spotlight)

Paper

The full paper is available on arXiv.

Installation

Prerequisites

  • Python 3.8 or higher
  • CUDA 11.8 or higher (for GPU support)
  • PyTorch 2.0.0 or higher

Clone and Install

```bash
# Clone the repository
git clone https://github.com/CandleLabAI/MotionAura-ICLR-2025.git
cd MotionAura-ICLR-2025

# Create and activate a conda environment (Python 3.8, per the prerequisites)
conda create -n motionaura python=3.8
conda activate motionaura

# Install PyTorch 2.3.1 built against CUDA 11.8
pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118

# Install MotionAura in editable mode
pip install -e .
```

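After installation, a quick sanity check (generic, not part of the original README) is to confirm that the pinned PyTorch build imports and can see the GPU:

```shell
# Print the Python version, then the torch version and CUDA availability.
# The torch check falls back to a notice if torch is not installed yet.
python -c "import sys; print(sys.version.split()[0])"
python -c "import torch; print(torch.__version__, torch.cuda.is_available())" \
  || echo "torch not installed yet"
```

On a correctly configured GPU machine, the second command should print the torch version (2.3.1) followed by `True`; `False` usually indicates a driver/CUDA mismatch with the cu118 wheel.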
Owner

  • Login: CandleLabAI
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
title: 'Diffusers: State-of-the-art diffusion models'
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Patrick
    family-names: von Platen
  - given-names: Suraj
    family-names: Patil
  - given-names: Anton
    family-names: Lozhkov
  - given-names: Pedro
    family-names: Cuenca
  - given-names: Nathan
    family-names: Lambert
  - given-names: Kashif
    family-names: Rasul
  - given-names: Mishig
    family-names: Davaadorj
  - given-names: Dhruv
    family-names: Nair
  - given-names: Sayak
    family-names: Paul
  - given-names: Steven
    family-names: Liu
  - given-names: William
    family-names: Berman
  - given-names: Yiyi
    family-names: Xu
  - given-names: Thomas
    family-names: Wolf
repository-code: 'https://github.com/huggingface/diffusers'
abstract: >-
  Diffusers provides pretrained diffusion models across
  multiple modalities, such as vision and audio, and serves
  as a modular toolbox for inference and training of
  diffusion models.
keywords:
  - deep-learning
  - pytorch
  - image-generation
  - hacktoberfest
  - diffusion
  - text2image
  - image2image
  - score-based-generative-modeling
  - stable-diffusion
  - stable-diffusion-diffusers
license: Apache-2.0
version: 0.12.1

GitHub Events

Total
  • Issues event: 2
  • Watch event: 7
  • Issue comment event: 2
  • Member event: 2
  • Push event: 4
  • Fork event: 1
  • Create event: 2
Last Year
  • Issues event: 2
  • Watch event: 7
  • Issue comment event: 2
  • Member event: 2
  • Push event: 4
  • Fork event: 1
  • Create event: 2

Dependencies

setup.py pypi
  • deps *
docker/diffusers-doc-builder/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-flax-cpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-flax-tpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-onnxruntime-cpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-onnxruntime-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
docker/diffusers-pytorch-compile-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
docker/diffusers-pytorch-cpu/Dockerfile docker
  • ubuntu 20.04 build
docker/diffusers-pytorch-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
docker/diffusers-pytorch-minimum-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
docker/diffusers-pytorch-xformers-cuda/Dockerfile docker
  • nvidia/cuda 12.1.0-runtime-ubuntu20.04 build
examples/advanced_diffusion_training/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/advanced_diffusion_training/requirements_flux.txt pypi
  • Jinja2 *
  • accelerate >=0.31.0
  • ftfy *
  • peft >=0.11.1
  • sentencepiece *
  • tensorboard *
  • torchvision *
  • transformers >=4.41.2
examples/cogvideo/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.31.0
  • decord >=0.6.0
  • ftfy *
  • imageio-ffmpeg *
  • peft >=0.11.1
  • sentencepiece *
  • tensorboard *
  • torchvision *
  • transformers >=4.41.2
examples/consistency_distillation/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • webdataset *
examples/controlnet/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/controlnet/requirements_flax.txt pypi
  • Jinja2 *
  • datasets *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/controlnet/requirements_flux.txt pypi
  • Jinja2 *
  • SentencePiece *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/controlnet/requirements_sd3.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/controlnet/requirements_sdxl.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/custom_diffusion/requirements.txt pypi
  • Jinja2 *
  • accelerate *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/dreambooth/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/dreambooth/requirements_flax.txt pypi
  • Jinja2 *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/dreambooth/requirements_flux.txt pypi
  • Jinja2 *
  • accelerate >=0.31.0
  • ftfy *
  • peft >=0.11.1
  • sentencepiece *
  • tensorboard *
  • torchvision *
  • transformers >=4.41.2
examples/dreambooth/requirements_sana.txt pypi
  • Jinja2 *
  • accelerate >=1.0.0
  • ftfy *
  • peft >=0.14.0
  • sentencepiece *
  • tensorboard *
  • torchvision *
  • transformers >=4.47.0
examples/dreambooth/requirements_sd3.txt pypi
  • Jinja2 *
  • accelerate >=0.31.0
  • ftfy *
  • peft ==0.11.1
  • sentencepiece *
  • tensorboard *
  • torchvision *
  • transformers >=4.41.2
examples/dreambooth/requirements_sdxl.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/flux-control/requirements.txt pypi
  • accelerate ==1.2.0
  • peft >=0.14.0
  • torch *
  • torchvision *
  • transformers ==4.47.0
  • wandb *
examples/instruct_pix2pix/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/kandinsky2_2/text_to_image/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/model_search/requirements.txt pypi
  • huggingface-hub >=0.26.2
examples/research_projects/autoencoderkl/requirements.txt pypi
  • Pillow *
  • accelerate >=0.16.0
  • bitsandbytes *
  • datasets *
  • huggingface_hub *
  • lpips *
  • numpy *
  • packaging *
  • taming_transformers *
  • torch *
  • torchvision *
  • tqdm *
  • transformers *
  • wandb *
  • xformers *
examples/research_projects/colossalai/requirement.txt pypi
  • Jinja2 *
  • diffusers *
  • ftfy *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers *
examples/research_projects/consistency_training/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/diffusion_dpo/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • peft *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/research_projects/diffusion_orpo/requirements.txt pypi
  • accelerate *
  • datasets *
  • peft *
  • torchvision *
  • transformers *
  • wandb *
  • webdataset *
examples/research_projects/dreambooth_inpaint/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • diffusers ==0.9.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.21.0
examples/research_projects/gligen/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • diffusers *
  • fairscale *
  • ftfy *
  • scipy *
  • tensorboard *
  • timm *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/research_projects/intel_opts/textual_inversion/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • intel_extension_for_pytorch >=1.13
  • tensorboard *
  • torchvision *
  • transformers >=4.21.0
examples/research_projects/intel_opts/textual_inversion_dfq/requirements.txt pypi
  • accelerate *
  • ftfy *
  • modelcards *
  • neural-compressor *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.0
examples/research_projects/ip_adapter/requirements.txt pypi
  • accelerate *
  • ip_adapter *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/lora/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/multi_subject_dreambooth/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/multi_subject_dreambooth_inpainting/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets >=2.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb >=0.16.1
examples/research_projects/multi_token_textual_inversion/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/multi_token_textual_inversion/requirements_flax.txt pypi
  • Jinja2 *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/onnxruntime/text_to_image/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • modelcards *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/onnxruntime/textual_inversion/requirements.txt pypi
  • accelerate >=0.16.0
  • ftfy *
  • modelcards *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/onnxruntime/unconditional_image_generation/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • tensorboard *
  • torchvision *
examples/research_projects/pixart/requirements.txt pypi
  • SentencePiece *
  • controlnet-aux *
  • datasets *
  • torchvision *
  • transformers *
examples/research_projects/pytorch_xla/training/text_to_image/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets >=2.19.1
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/research_projects/realfill/requirements.txt pypi
  • Jinja2 ==3.1.5
  • accelerate ==0.23.0
  • diffusers ==0.20.1
  • ftfy ==6.1.1
  • peft ==0.5.0
  • tensorboard ==2.14.0
  • torch ==2.2.0
  • torchvision >=0.16
  • transformers ==4.38.0
examples/research_projects/wuerstchen/text_to_image/requirements.txt pypi
  • accelerate >=0.16.0
  • bitsandbytes *
  • deepspeed *
  • peft >=0.6.0
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/server/requirements.in pypi
  • aiohttp *
  • fastapi *
  • prometheus-fastapi-instrumentator >=7.0.0
  • prometheus_client >=0.18.0
  • py-consul *
  • sentencepiece *
  • torch *
  • transformers ==4.46.1
  • uvicorn *
examples/server/requirements.txt pypi
  • aiohappyeyeballs ==2.4.3
  • aiohttp ==3.10.10
  • aiosignal ==1.3.1
  • annotated-types ==0.7.0
  • anyio ==4.6.2.post1
  • attrs ==24.2.0
  • certifi ==2024.8.30
  • charset-normalizer ==3.4.0
  • click ==8.1.7
  • fastapi ==0.115.3
  • filelock ==3.16.1
  • frozenlist ==1.5.0
  • fsspec ==2024.10.0
  • h11 ==0.14.0
  • huggingface-hub ==0.26.1
  • idna ==3.10
  • jinja2 ==3.1.4
  • markupsafe ==3.0.2
  • mpmath ==1.3.0
  • multidict ==6.1.0
  • networkx ==3.4.2
  • numpy ==2.1.2
  • packaging ==24.1
  • prometheus-client ==0.21.0
  • prometheus-fastapi-instrumentator ==7.0.0
  • propcache ==0.2.0
  • py-consul ==1.5.3
  • pydantic ==2.9.2
  • pydantic-core ==2.23.4
  • pyyaml ==6.0.2
  • regex ==2024.9.11
  • requests ==2.32.3
  • safetensors ==0.4.5
  • sentencepiece ==0.2.0
  • sniffio ==1.3.1
  • starlette ==0.41.0
  • sympy ==1.13.3
  • tokenizers ==0.20.1
  • torch ==2.4.1
  • tqdm ==4.66.5
  • transformers ==4.46.1
  • typing-extensions ==4.12.2
  • urllib3 ==2.2.3
  • uvicorn ==0.32.0
  • yarl ==1.16.0
examples/t2i_adapter/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • ftfy *
  • safetensors *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
  • wandb *
examples/text_to_image/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • datasets >=2.19.1
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/text_to_image/requirements_flax.txt pypi
  • Jinja2 *
  • datasets *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/text_to_image/requirements_sdxl.txt pypi
  • Jinja2 *
  • accelerate >=0.22.0
  • datasets *
  • ftfy *
  • peft ==0.7.0
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/textual_inversion/requirements.txt pypi
  • Jinja2 *
  • accelerate >=0.16.0
  • ftfy *
  • tensorboard *
  • torchvision *
  • transformers >=4.25.1
examples/textual_inversion/requirements_flax.txt pypi
  • Jinja2 *
  • flax *
  • ftfy *
  • optax *
  • tensorboard *
  • torch *
  • torchvision *
  • transformers >=4.25.1
examples/unconditional_image_generation/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • torchvision *
examples/vqgan/requirements.txt pypi
  • accelerate >=0.16.0
  • datasets *
  • numpy *
  • tensorboard *
  • timm *
  • torchvision *
  • tqdm *
  • transformers >=4.25.1
pyproject.toml pypi
src/diffusers.egg-info/requires.txt pypi
  • GitPython <3.1.19
  • Jinja2 *
  • Pillow *
  • accelerate >=0.31.0
  • compel ==0.1.8
  • datasets *
  • filelock *
  • flax >=0.4.1
  • hf-doc-builder >=0.3.0
  • huggingface-hub >=0.27.0
  • importlib_metadata *
  • invisible-watermark >=0.2.0
  • isort >=5.5.4
  • jax >=0.4.1
  • jaxlib >=0.4.1
  • k-diffusion >=0.0.12
  • librosa *
  • numpy *
  • parameterized *
  • peft >=0.6.0
  • phonemizer *
  • protobuf <4,>=3.20.3
  • pytest *
  • pytest-timeout *
  • pytest-xdist *
  • regex *
  • requests *
  • requests-mock ==1.10.0
  • ruff ==0.1.5
  • safetensors >=0.3.1
  • scipy *
  • sentencepiece *
  • tensorboard *
  • torch >=1.4
  • torchvision *
  • transformers >=4.41.2
  • urllib3 <=2.0.0