Multi-view-AE
Multi-view-AE: A Python package for multi-view autoencoder models - Published in JOSS (2023)
Science Score: 100.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 8 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: links to arxiv.org, joss.theoj.org
- ✓ Committers with academic emails: 1 of 4 committers (25.0%) from academic institutions
- ○ Institutional organization owner
- ✓ JOSS paper metadata: published in Journal of Open Source Software
Repository
Multi-view-AE: An extensive collection of multi-modal autoencoders implemented in a modular, scikit-learn style framework.
Basic Info
- Host: GitHub
- Owner: alawryaguila
- License: MIT
- Language: Python
- Default Branch: master
- Homepage: https://multi-view-ae.readthedocs.io/en/latest/
- Size: 3.14 MB
Statistics
- Stars: 53
- Watchers: 2
- Forks: 5
- Open Issues: 8
- Releases: 10
Topics
Metadata Files
README.md
# Multi-modal representation learning using autoencoders

[Documentation](https://multi-view-ae.readthedocs.io/en/latest/?badge=latest) · [GitHub](https://github.com/alawryaguila/multi-view-ae) · [JOSS paper](https://joss.theoj.org/papers/10.21105/joss.05093) · [PyPI](https://pypi.org/project/multiviewae/) · [Codecov](https://codecov.io/gh/alawryaguila/multi-view-AE)
multi-view-AE is a collection of multi-modal autoencoder models for learning joint representations from multiple modalities of data. The package is structured so that every model exposes `fit`, `predict_latents`, and `predict_reconstruction` methods. All models are built in PyTorch and PyTorch Lightning.
Many of the models implemented in the multi-view-AE library have been benchmarked against previous implementations, with equal or improved results; see the benchmarking results below for details.
For more information on implemented models and how to use the package, please see the documentation.
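The uniform interface described above (every model exposes `fit`, `predict_latents`, and `predict_reconstruction`) can be sketched with a small mock class. This is illustrative only; it is not the package's implementation, and the constructor argument `z_dim` and the `max_epochs` keyword are assumptions, so check the documentation for the real signatures:

```python
# Hypothetical sketch of the scikit-learn-style interface; not the
# package's actual implementation.
class MockMultiViewAE:
    def __init__(self, z_dim):
        self.z_dim = z_dim
        self._fitted = False

    def fit(self, *views, max_epochs=10):
        # Real models train PyTorch networks here; the mock just
        # records that training happened.
        self.n_views = len(views)
        self._fitted = True
        return self

    def predict_latents(self, *views):
        # Real models return latent codes inferred from each view.
        assert self._fitted, "call fit() first"
        return [[0.0] * self.z_dim for _ in views]

    def predict_reconstruction(self, *views):
        # Real models return reconstructions of each view.
        assert self._fitted, "call fit() first"
        return list(views)

model = MockMultiViewAE(z_dim=2)
model.fit([[1.0, 2.0]], [[3.0, 4.0]], max_epochs=1)
latents = model.predict_latents([[1.0, 2.0]], [[3.0, 4.0]])
```

Because the call pattern is shared, swapping one multi-view model for another only requires changing the class that is instantiated.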
Library schematic
Models Implemented
Below is a table with the models contained within this repository and links to the original papers.
|Model class |Model name |Number of views |Original work|
|:------------:|:---------------------------------------------------------------:|:----------------:|:-----------:|
| mcVAE | Multi-Channel Variational Autoencoder (mcVAE) | >=1 | link |
| AE | Multi-view Autoencoder | >=1 | |
| mAAE | Multi-view Adversarial Autoencoder | >=1 | |
| DVCCA | Deep Variational CCA | 2 | link |
| mWAE | Multi-view Adversarial Autoencoder with a Wasserstein loss | >=1 | |
| mmVAE | Variational mixture-of-experts autoencoder (MMVAE) | >=1 | link |
| mVAE | Multimodal Variational Autoencoder (MVAE) | >=1 | link |
| me_mVAE | Multimodal Variational Autoencoder (MVAE) with separate ELBO terms for each view | >=1 | link |
| JMVAE | Joint Multimodal Variational Autoencoder (JMVAE-kl) | 2 | link |
| MVTCAE | Multi-View Total Correlation Auto-Encoder (MVTCAE) | >=1 | link |
| MoPoEVAE | Mixture-of-Products-of-Experts VAE | >=1 | link |
| mmJSD | Multimodal Jensen-Shannon divergence model (mmJSD) | >=1 | link |
| weighted_mVAE | Generalised Product-of-Experts Variational Autoencoder (gPoE-MVAE) | >=1 | link |
| DMVAE | Disentangled multi-modal variational autoencoder | >=1 | link |
| weighted_DMVAE | Disentangled multi-modal variational autoencoder with gPoE joint posterior | >=1 | |
| mmVAEPlus | Mixture-of-experts multimodal VAE Plus (mmVAE+) | >=1 | link |
Installation
To install our package via pip:

```bash
pip install multiviewae
```

Alternatively, clone this repository and move into the folder:

```bash
git clone https://github.com/alawryaguila/multi-view-AE
cd multi-view-AE
```

Create the customised Python environment:

```bash
conda create --name mvae python=3.9
```

Activate the environment:

```bash
conda activate mvae
```

Install the multi-view-AE package:

```bash
pip install ./
```
Benchmarking results
To illustrate the efficacy of the multi-view-AE implementations, we validated some of the implemented models by reproducing a key result of the corresponding paper. One of the experiments presented in the paper was reproduced using the multi-view-AE implementation with the same network architectures, modelling choices, and training parameters. The code to reproduce the benchmarking experiments is available in the benchmarking folder. We evaluated performance using the joint log likelihood (↑) and conditional coherence accuracy (↑). The table below summarises the results of the benchmarking experiments on the BinaryMNIST and PolyMNIST datasets:
|Model |Experiment |Metric |Paper| Paper results| multi-view-AE results|
|:------------:|:---------:|:----------------:|:-----------:|:-----------:|:-----------:|
| JMVAE | BinaryMNIST | Joint log likelihood |link|-86.86 | -86.76±0.06 |
| me_mVAE | BinaryMNIST | Joint log likelihood |link|-86.26 | -86.31±0.08 |
| MoPoEVAE | PolyMNIST | Conditional Coherence accuracy |link|63/75/79/81 | 68/79/83/84 |
| mmJSD | PolyMNIST | Conditional Coherence accuracy |link|69/57/64/67 | 75/74/78/80 |
| mmVAE | PolyMNIST | Conditional Coherence accuracy |link|71/71/71/71 | 71/71/71/71 |
| MVTCAE | PolyMNIST | Conditional Coherence accuracy |link|59/77/83/86 | 64/81/87/90 |
| mmVAEPlus | PolyMNIST | Conditional Coherence accuracy |link| 85.2 | 86.6±0.07 |
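Conditional coherence accuracy, used in the PolyMNIST rows above, is typically computed by generating one view conditioned on another and checking, with a pretrained classifier, whether the generation preserves the conditioning digit's label. A minimal sketch of the final accuracy computation (the classifier and generation steps are assumed to have already produced the label lists):

```python
def conditional_coherence(predicted_labels, conditioning_labels):
    """Fraction of cross-modal generations whose classifier-predicted
    label matches the label of the view they were conditioned on."""
    assert len(predicted_labels) == len(conditioning_labels)
    matches = sum(p == c for p, c in zip(predicted_labels, conditioning_labels))
    return matches / len(conditioning_labels)

# Toy example: 4 of 5 generations keep the conditioning digit "3".
accuracy = conditional_coherence([3, 3, 3, 7, 3], [3, 3, 3, 3, 3])  # 0.8
```

The multi-digit entries in the table (e.g. 63/75/79/81) report this accuracy as the number of conditioning views increases.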
Citation
If you have used multi-view-AE in your research, please consider citing our JOSS paper:
Lawry Aguila et al., (2023). Multi-view-AE: A Python package for multi-view autoencoder models. Journal of Open Source Software, 8(85), 5093, https://doi.org/10.21105/joss.05093
Bibtex entry:
```bibtex
@article{LawryAguila2023,
  doi = {10.21105/joss.05093},
  url = {https://doi.org/10.21105/joss.05093},
  year = {2023},
  publisher = {The Open Journal},
  volume = {8},
  number = {85},
  pages = {5093},
  author = {Ana Lawry Aguila and Alejandra Jayme and Nina Montaña-Brown and Vincent Heuveline and Andre Altmann},
  title = {Multi-view-AE: A Python package for multi-view autoencoder models},
  journal = {Journal of Open Source Software}
}
```
Contribution guidelines
Contribution guidelines are available at https://multi-view-ae.readthedocs.io/en/latest/
Owner
- Login: alawryaguila
- Kind: user
- Repositories: 3
- Profile: https://github.com/alawryaguila
JOSS Publication
Multi-view-AE: A Python package for multi-view autoencoder models
Authors
Centre for Medical Image Computing (CMIC), Medical Physics and Biomedical Engineering, University College London (UCL), London, UK
Engineering Mathematics and Computing Lab (EMCL), Heidelberg, Germany
Tags
Autoencoders, Multi-view, Unsupervised learning, Representation learning, Data generation
Citation (CITATION.cff)
```yaml
cff-version: 1.1.2
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Lawry Aguila"
    given-names: "Ana"
    orcid: "https://orcid.org/0000-0003-0727-3274"
  - family-names: "Jayme"
    given-names: "Alejandra"
  - family-names: "Montaña-Brown"
    given-names: "Nina"
    orcid: "https://orcid.org/0000-0001-5685-971X"
  - family-names: "Heuveline"
    given-names: "Vincent"
  - family-names: "Altmann"
    given-names: "Andre"
    orcid: "https://orcid.org/0000-0002-9265-2393"
title: "Multi-view-AE: A Python package for multi-view autoencoder models"
journal: "Journal of Open Source Software"
url: "https://doi.org/10.21105/joss.05093"
date-released: 2023-05-16
```
GitHub Events
Total
- Watch event: 8
Last Year
- Watch event: 8
Committers
Last synced: 7 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| alawryaguila | a****a@o****m | 519 |
| Alejandra Jayme | a****e@i****e | 8 |
| Arfon Smith | a****n | 2 |
| Alejandra Jayme | a****e@h****g | 2 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 16
- Total pull requests: 32
- Average time to close issues: about 1 month
- Average time to close pull requests: about 2 hours
- Total issue authors: 4
- Total pull request authors: 4
- Average comments per issue: 0.81
- Average comments per pull request: 0.0
- Merged pull requests: 30
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 1
- Average comments per issue: 0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- alawryaguila (13)
- NMontanaBrown (1)
- llevitis (1)
- Saran-nns (1)
Pull Request Authors
- alawryaguila (25)
- ajayme (3)
- llevitis (2)
- arfon (2)
Dependencies
- hydra-core *
- numpy >=1.23.1
- pandas >=1.4.3
- pytest >=7.1.2
- pytorch-lightning *
- scipy >=1.9.0
- torch >=1.12.0
- torchvision >=0.13.0
- actions/checkout v2 composite
- actions/setup-python v4 composite
- hydra-core *
- matplotlib *
- numpy *
- ordereddict *
- pandas *
- protobuf ==3.20.1
- pytest *
- pytorch-lightning ==1.5.10
- schema *
- scipy *
- sphinx *
- sphinx-autodoc-typehints *
- sphinx-gallery *
- torch >=1.10.1
- torchvision *
- hydra-core *
- matplotlib *
- numpy *
- ordereddict *
- pandas *
- pytest ^5.4.1
- python ^3.6
- pytorch-lightning 1.5.10
- schema *
- scipy *
- torch ^1.10.1
- torchvision *
- hydra-core *
- matplotlib *
- numpy *
- ordereddict *
- pandas *
- pytest >=5.4.1
- pytorch-lightning ==1.5.10
- schema *
- scipy *
- torch >=1.10.1
- torchvision *
- actions/checkout v2 composite
- actions/setup-python v2 composite