MultiVae

MultiVae: A Python package for Multimodal Variational Autoencoders on Partial Datasets. - Published in JOSS (2025)

https://github.com/agathesenellart/multivae

Science Score: 98.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 8 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: arxiv.org, sciencedirect.com, joss.theoj.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software

Scientific Fields

Mathematics (Computer Science) - 84% confidence
Artificial Intelligence and Machine Learning (Computer Science) - 44% confidence
Last synced: 4 months ago

Repository

Unifying Multimodal Variational Autoencoders (VAEs) in Pytorch

Basic Info
  • Host: GitHub
  • Owner: AgatheSenellart
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 11.1 MB
Statistics
  • Stars: 45
  • Watchers: 2
  • Forks: 13
  • Open Issues: 3
  • Releases: 4
Created almost 3 years ago · Last pushed 4 months ago
Metadata Files
Readme Contributing License Citation

README.md

MultiVae


This library implements some of the most common Multimodal Variational Autoencoder methods in a unifying framework for effective benchmarking and development. You can find the list of implemented models below. For easy benchmarking, we include ready-to-use datasets like MnistSvhn 🔢, CelebA 😎 and PolyMNIST, as well as metrics modules for computing coherences, likelihoods, FID, reconstruction metrics, and clustering metrics. The library integrates model monitoring with Wandb and a quick way to save/load models from the HuggingFace Hub 🤗. To improve joint generation of multimodal samples, we also propose samplers to explore the latent space of your model.

Implemented models

|Model|Paper|Official Implementation|
|:---:|:----:|:---------------------:|
|CVAE|An introduction to Variational Autoencoders| |
|JMVAE|Joint Multimodal Learning with Deep Generative Models|link|
|TELBO|Generative Models of Visually Grounded Imagination|link|
|MVAE|Multimodal Generative Models for Scalable Weakly-Supervised Learning|link|
|MMVAE|Variational Mixture-of-Experts Autoencoders for Multi-Modal Deep Generative Models|link|
|MoPoE|Generalized Multimodal ELBO|link|
|MVTCAE|Multi-View Representation Learning via Total Correlation Objective|link|
|DMVAE|Private-Shared Disentangled Multimodal VAE for Learning of Latent Representations|link|
|JNF|Improving Multimodal Joint Variational Autoencoders through Normalizing Flows and Correlation Analysis|x|
|MMVAE+|MMVAE+: Enhancing the Generative Quality of Multimodal VAEs without Compromises|link|
|Nexus|Leveraging hierarchy in multimodal generative models for effective cross-modality inference|link|
|CMVAE|Deep Generative Clustering with Multimodal Diffusion Variational Autoencoders|link|
|MHVAE|Unified Brain MR-Ultrasound Synthesis using Multi-Modal Hierarchical Representations|link|
|CRMVAE|Mitigating the Limitations of Multimodal VAEs with Coordination-Based Approach|link|


Installation

To get the latest stable release run:

```shell
pip install multivae
```

To get the latest updates from the GitHub repository run:

```shell
git clone https://github.com/AgatheSenellart/MultiVae.git
cd MultiVae
pip install .
```

Cloning the repository also gives you access to the tutorial notebooks and scripts in the examples/ folder.

Quickstart

Here is a very simple example to illustrate how you can use MultiVae:

```python
# Load a dataset
from multivae.data.datasets import MnistSvhn

train_set = MnistSvhn(data_path='./data', split="train", download=True)

# Instantiate your favorite model
from multivae.models import MVTCAE, MVTCAEConfig

model_config = MVTCAEConfig(
    n_modalities=2,
    latent_dim=20,
    input_dims={'mnist': (1, 28, 28), 'svhn': (3, 32, 32)}
)
model = MVTCAE(model_config)

# Define a trainer and train the model!
from multivae.trainers import BaseTrainer, BaseTrainerConfig

training_config = BaseTrainerConfig(
    learning_rate=1e-3,
    num_epochs=10
)

trainer = BaseTrainer(
    model=model,
    train_dataset=train_set,
    training_config=training_config,
)
trainer.train()
```

Getting your hands on the code

(Back to top)

Our library allows you to use any of the models with custom configurations, encoder and decoder architectures, and datasets. To learn how to use MultiVae's features, we propose several tutorial notebooks:

  • Getting started : Learn how to provide your own architectures and train a model.
  • Computing Metrics : Learn how to evaluate your model using MultiVae's metrics modules.
  • Learning with partial datasets : Learn how to use the IncompleteDataset class and to train a model on an incomplete dataset.
  • Using samplers: Learn how to train and use samplers to improve the joint generation of synthetic data.
  • Using WandB: Learn how to easily monitor your training/evaluation with Wandb and MultiVae.

Training on incomplete datasets

Many models implemented in the library can be trained on incomplete datasets. To do so, you will need to define a dataset that inherits from MultiVae's IncompleteDataset class.

For a step-by-step tutorial on training on incomplete datasets, see this notebook.

How does MultiVae handle partial data? We handle partial data by sampling random batches, artificially filling in the missing modalities, and using the mask to compute the final loss.

This allows for unbiased mini-batches. There are other ways to handle missing data (for instance using a batch sampler): don't hesitate to reach out if you would like additional options!
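As a rough illustration, here is how a toy incomplete dataset could be wrapped. This is a minimal sketch assuming IncompleteDataset can be instantiated directly with a dict of modality tensors and a dict of boolean masks (one per modality, True where the sample is observed); see the tutorial notebook above for the exact interface.

```python
import torch
from multivae.data.datasets import IncompleteDataset

n = 100  # toy number of samples

# Toy data: two modalities with image-shaped tensors.
data = {
    'mnist': torch.randn(n, 1, 28, 28),
    'svhn': torch.randn(n, 3, 32, 32),
}

# Boolean masks, one entry per modality: True where the modality is observed.
# Here 'svhn' is randomly missing for roughly half of the samples.
masks = {
    'mnist': torch.ones(n, dtype=torch.bool),
    'svhn': torch.rand(n) > 0.5,
}

train_set = IncompleteDataset(data=data, masks=masks)
```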


For more details on how each model is adapted to the partial view setting, see the model's description in the documentation.

Below is the list of models that can be used on Incomplete datasets:

|Model|Can be used on Incomplete Datasets|Details|
|:---:|:----:|:--:|
|CVAE|❌| |
|JMVAE|❌| |
|TELBO|❌| |
|MVAE|✅|see here|
|MMVAE|✅|see here|
|MoPoE|✅|see here|
|MVTCAE|✅|see here|
|DMVAE|✅|see here|
|JNF|❌| |
|MMVAE+|✅|see here|
|Nexus|✅|see here|
|CMVAE|✅|see here|
|MHVAE|✅|see here|
|CRMVAE|✅|see here|

Toy datasets with missing values

To ease the development of new methods on incomplete datasets, we propose two easy-to-import toy datasets with missing values:
- Missing at Random: the PolyMNIST dataset with missing values.
- Missing Not at Random: the MHD dataset with missing ratios that depend on the label.

See the documentation for more information on those datasets.

Metrics

We provide metrics modules that can be used on any MultiVae model for evaluation. See the documentation for minimal code examples and see this notebook for a hands-on tutorial.

Datasets

At this time, we provide 7 ready-to-use multimodal datasets with an automatic download option. Click here to see the options.

Monitoring your training with Wandb

MultiVae allows easy monitoring with Wandb. To use this feature, you will need to install and configure Wandb by following the steps below:

Install Wandb

  1. Install wandb: `pip install wandb`
  2. Create a wandb account online.
  3. Once you are logged in, go to this page and copy the API key.
  4. In your terminal, enter `wandb login` and paste your API key when prompted.
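In short, the shell side of this setup looks like this:

```shell
pip install wandb
wandb login  # paste your API key when prompted
```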

Once this is done, you can use wandb features in MultiVae.

Monitor training with Wandb

Below is a minimal example of how to use the WandbCallback to monitor your training. We assume that you have already defined a model and a train_dataset in this example.

By default, the train loss, eval loss, and model-specific metrics are logged to wandb. If you set steps_predict in the trainer config, generated images will also be logged to wandb.

```python
from multivae.trainers import BaseTrainer, BaseTrainerConfig
from multivae.trainers.base.callbacks import WandbCallback

# Define the training configuration
your_training_config = BaseTrainerConfig(
    learning_rate=1e-2,
    steps_predict=5  # generate samples every 5 steps; images will be logged to wandb
)

# Define the wandb callback
wandb_cb = WandbCallback()
wandb_cb.setup(
    training_config=your_training_config,  # will be saved to wandb
    model_config=your_model_config,  # will be saved to wandb
    project_name='your_project_name'
)

# Pass the wandb callback to the trainer to enable metrics and images logging to wandb
trainer = BaseTrainer(
    model=your_model,
    train_dataset=train_data,
    callbacks=[wandb_cb]
)
```

Logging evaluation metrics to Wandb

The metrics modules of MultiVae can also be used with Wandb, to save all your results in one place.

If you have a trained model and want to compute metrics for it, you can pass a wandb_path to the metrics module to tell it where to log the metrics. If a wandb run was already created during training, you can reuse the same wandb_path to log metrics to that same place. See this documentation to learn how to find your wandb_path or re-create one.

Below is a minimal example with the LikelihoodsEvaluator module, but it works the same way for all metrics.

```python
from multivae.metrics import LikelihoodsEvaluator, LikelihoodsEvaluatorConfig

ll_config = LikelihoodsEvaluatorConfig(
    batch_size=128,
    num_samples=3,
    wandb_path='your_wandb_path'  # pass your wandb_path here
)

ll_module = LikelihoodsEvaluator(
    model=your_model,
    output='./metrics',  # where to log the metrics
    test_dataset=test_set,
    eval_config=ll_config
)
```

Sharing your models with the HuggingFace Hub 🤗

MultiVae allows you to share your models on the HuggingFace Hub. To do so you need:
- a valid HuggingFace account;
- the package huggingface_hub installed in your virtual env (if not, you can install it with `python -m pip install huggingface_hub`);
- to be logged in to your HuggingFace account using `huggingface-cli login`.

Uploading a model to the Hub

Any MultiVae model can be easily uploaded using the method push_to_hf_hub:

```python
my_model.push_to_hf_hub(hf_hub_path="your_hf_username/your_hf_hub_repo")
```

**Note:** If `your_hf_hub_repo` already exists and is not empty, files will be overridden. If the repo `your_hf_hub_repo` does not exist, a folder with the same name will be created.

Downloading models from the Hub

Equivalently, you can download or reload any MultiVae model directly from the Hub using the method load_from_hf_hub:

```python
from multivae.models import AutoModel

my_downloaded_vae = AutoModel.load_from_hf_hub(hf_hub_path="path_to_hf_repo")
```
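Models saved locally during training can be reloaded in a similar way. Below is a minimal sketch, assuming multivae's AutoModel exposes a pythae-style load_from_folder method; the checkpoint path is hypothetical.

```python
from multivae.models import AutoModel

# Hypothetical path to a model directory saved by a MultiVae trainer.
my_reloaded_model = AutoModel.load_from_folder('./my_training_dir/final_model')
```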

Using samplers

All MultiVae models have a natural way of generating fully synthetic multimodal samples: sampling latent codes from the prior distribution of the model. But it is well known for unimodal VAEs (and the same applies to multimodal VAEs) that generation can be improved by sampling encodings from a distribution that better fits the latent space.

Once you have a trained MultiVae model, you can fit a multivae.sampler to approximate the posterior distribution of encodings in the latent space and then use it to produce new samples.

We provide a minimal example of how to fit a GMM sampler below, but we invite you to check out our tutorial notebook here for a more in-depth explanation of how to use samplers and how to combine them with MultiVae's metrics modules.

```python
from multivae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig

config = GaussianMixtureSamplerConfig(
    n_components=10  # number of components to use in the mixture
)

gmm_sampler = GaussianMixtureSampler(model=your_model, sampler_config=config)

gmm_sampler.fit(train_data)  # train_data is the multimodal dataset used for training the model
```
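Once fitted, the sampler can be used to draw latent codes for fully synthetic generation. The short sketch below is only an assumption about the call signatures (`sample` and `decode` are used as hypothetical names here); see the samplers tutorial notebook for the exact generation API.

```python
# Hypothetical usage, assuming a pythae-style sampling interface:
latents = gmm_sampler.sample(num_samples=16)  # assumed signature: draw 16 latent codes
generations = your_model.decode(latents)      # assumed call: decode latents into each modality
```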

Note that samplers can be used with all MultiVae models and that they can substantially improve joint generation. For a taste of what they can do, see the joint generations below for a MVTCAE model trained on PolyMNIST:

[Image: joint generations for a MVTCAE model trained on PolyMNIST]

Documentation, Examples and Case Studies

We provide a full online documentation at https://multivae.readthedocs.io.

Several examples are provided in examples/, as well as tutorial notebooks on how to use the main features of MultiVae (training, metrics, samplers) in the folder examples/tutorial_notebooks.

For more advanced examples of how to use MultiVae, we provide small case studies with code and results.

Contribute

(Back to top)

If you want to contribute to the project, for instance by adding models to the library, clone the repository and install it in editable mode using the -e option:

```shell
pip install -e .
```

We propose contributing guidelines here, with tutorials on how to implement a new model, sampler, metric, or dataset.

Reproducibility statement

Most implemented models are validated by reproducing a key result of the paper. Here we provide details on the results we managed to reproduce.

|Model|Dataset|Metrics|Paper|Ours|
|--|--|--|--|--|
|JMVAE|MNIST|Likelihood|-86.86|-86.85 ± 0.03|
|MMVAE|MnistSvhn|Coherences|86/69/42|88/67/41|
|MVAE|MNIST|ELBO|188.8|188.3 ± 0.4|
|DMVAE|MnistSvhn|Coherences|88.1/83.7/44.7|89.2/81.3/46.0|
|MoPoE|PolyMNIST|Coherences|66/77/81/83|67/79/84/85|
|MVTCAE|PolyMNIST|Coherences|69/77/83/86|64/82/88/91|
|MMVAE+|PolyMNIST|Coherences/FID|86.9/92.81|88.6 ± 0.8 / 93 ± 5|
|CMVAE|PolyMNIST|Coherences|89.7/78.1|88.6/76.4|
|CRMVAE|Translated PolyMNIST|Coherences|0.145/0.172/0.192/0.21|0.16/0.19/0.205/0.21|

Note that we also tried to reproduce results for the Nexus model, but didn't obtain results similar to the ones presented in the original paper. If you spot a difference between our implementation and theirs, please reach out to us.

Citation

(Back to top)

If you have used our package in your research, please cite our JOSS paper: MultiVae: A Python package for Multimodal Variational Autoencoders on Partial Datasets.

You can find the bibtex citation below:

```bibtex
@article{Senellart2025,
  doi = {10.21105/joss.07996},
  url = {https://doi.org/10.21105/joss.07996},
  year = {2025},
  publisher = {The Open Journal},
  volume = {10},
  number = {110},
  pages = {7996},
  author = {Agathe Senellart and Clément Chadebec and Stéphanie Allassonnière},
  title = {MultiVae: A Python package for Multimodal Variational Autoencoders on Partial Datasets.},
  journal = {Journal of Open Source Software}
}
```

Issues? Questions?

If you encounter any issues using our package or if you would like to request features, don't hesitate to open an issue here and we will do our best to fix it!

Owner

  • Login: AgatheSenellart
  • Kind: user

JOSS Publication

MultiVae: A Python package for Multimodal Variational Autoencoders on Partial Datasets.
Published
June 05, 2025
Volume 10, Issue 110, Page 7996
Authors
Agathe Senellart
Université Paris Cité, Inria, Inserm, HeKA, F-75015 Paris, France
Clément Chadebec
Université Paris Cité, Inria, Inserm, HeKA, F-75015 Paris, France
Stéphanie Allassonnière
Université Paris Cité, Inria, Inserm, HeKA, F-75015 Paris, France
Editor
Øystein Sørensen
Tags
Pytorch Variational Autoencoders Multimodality Missing data

Citation (CITATION.cff)

cff-version: "1.2.0"
authors:
- family-names: Senellart
  given-names: Agathe
  orcid: "https://orcid.org/0009-0000-3176-6461"
- family-names: Chadebec
  given-names: Clément
- family-names: Allassonnière
  given-names: Stéphanie
contact:
- family-names: Senellart
  given-names: Agathe
  orcid: "https://orcid.org/0009-0000-3176-6461"
doi: 10.5281/zenodo.15577722
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Senellart
    given-names: Agathe
    orcid: "https://orcid.org/0009-0000-3176-6461"
  - family-names: Chadebec
    given-names: Clément
  - family-names: Allassonnière
    given-names: Stéphanie
  date-published: 2025-06-05
  doi: 10.21105/joss.07996
  issn: 2475-9066
  issue: 110
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 7996
  title: "MultiVae: A Python package for Multimodal Variational
    Autoencoders on Partial Datasets."
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.07996"
  volume: 10
title: "MultiVae: A Python package for Multimodal Variational
  Autoencoders on Partial Datasets."

GitHub Events

Total
  • Create event: 24
  • Release event: 2
  • Issues event: 12
  • Watch event: 21
  • Delete event: 21
  • Issue comment event: 23
  • Push event: 252
  • Pull request event: 48
  • Pull request review event: 11
  • Fork event: 10
Last Year
  • Create event: 24
  • Release event: 2
  • Issues event: 12
  • Watch event: 21
  • Delete event: 21
  • Issue comment event: 23
  • Push event: 252
  • Pull request event: 48
  • Pull request review event: 11
  • Fork event: 10

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 6
  • Total pull requests: 52
  • Average time to close issues: 20 days
  • Average time to close pull requests: 3 days
  • Total issue authors: 5
  • Total pull request authors: 5
  • Average comments per issue: 0.33
  • Average comments per pull request: 0.27
  • Merged pull requests: 42
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 5
  • Pull requests: 28
  • Average time to close issues: about 20 hours
  • Average time to close pull requests: 1 day
  • Issue authors: 4
  • Pull request authors: 2
  • Average comments per issue: 0.2
  • Average comments per pull request: 0.39
  • Merged pull requests: 20
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • osorensen (2)
  • Saadmohamad (1)
  • AgatheSenellart (1)
  • deweihu96 (1)
  • powerfulbean (1)
  • tomastokar (1)
Pull Request Authors
  • AgatheSenellart (40)
  • osorensen (8)
  • clementchadebec (8)
  • fcaretti (2)
  • CorentinAmbroise (1)
  • jsenellart (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 51 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 5
  • Total maintainers: 1
pypi.org: multivae

Unifying Generative Multimodal Variational Autoencoders in Pytorch

  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 51 Last month
Rankings
Dependent packages count: 7.5%
Stargazers count: 18.6%
Forks count: 22.9%
Average: 40.1%
Dependent repos count: 69.6%
Downloads: 81.8%
Maintainers (1)
Last synced: 4 months ago

Dependencies

.github/workflows/code_coverage.yml actions
  • actions/checkout main composite
  • actions/setup-python main composite
  • codecov/codecov-action v2 composite
.github/workflows/tests_bench.yml actions
  • actions/checkout main composite
  • actions/setup-python main composite
docs/requirements.txt pypi
  • Sphinx ==4.1.2
  • sphinx-rtd-theme ==0.5.2
  • sphinxcontrib-applehelp ==1.0.2
  • sphinxcontrib-bibtex ==2.3.0
  • sphinxcontrib-devhelp ==1.0.2
  • sphinxcontrib-htmlhelp ==2.0.0
  • sphinxcontrib-jsmath ==1.0.1
  • sphinxcontrib-qthelp ==1.0.3
  • sphinxcontrib-serializinghtml ==1.1.5
pyproject.toml pypi
setup.py pypi
  • cloudpickle >=2.1.0
  • dataclasses >=0.6
  • imageio *
  • nltk *
  • numpy >=1.19
  • pandas *
  • pydantic <2.0.0
  • pythae *
  • scikit-learn *
  • scipy >=1.7.1
  • torch >=1.10.1
  • torchmetrics *
  • torchvision *
  • tqdm *
  • typing_extensions *