pythae
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Science Score: 64.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ✓ Committers with academic emails: 1 of 18 committers (5.6%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.8%) to scientific vocabulary
Repository
Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)
Basic Info
Statistics
- Stars: 1,922
- Watchers: 17
- Forks: 174
- Open Issues: 33
- Releases: 11
Metadata Files
README.md
pythae
This library implements some of the most common (Variational) Autoencoder models under a unified implementation. In particular, it provides the possibility to perform benchmark experiments and comparisons by training the models with the same autoencoding neural network architecture. The *make your own autoencoder* feature allows you to train any of these models with your own data and your own Encoder and Decoder neural networks. It integrates experiment monitoring tools such as wandb, mlflow or comet-ml 🧪 and allows model sharing and loading from the HuggingFace Hub 🤗 in a few lines of code.
News 📢
As of v0.1.0, Pythae now supports distributed training using PyTorch's DDP. You can now train your favorite VAE faster and on larger datasets, still with a few lines of code.
See our speed-up benchmark.
Quick access:
- Installation
- Implemented models / Implemented samplers
- Reproducibility statement / Results flavor
- Model training / Data generation / Custom network architectures / Distributed training
- Model sharing with 🤗 Hub / Experiment tracking with wandb / Experiment tracking with mlflow / Experiment tracking with comet_ml
- Tutorials / Documentation
- Contributing 🚀 / Issues 🛠️
- Citing this repository
Installation
To install the latest stable release of this library, run the following using pip:

```bash
$ pip install pythae
```

To install the latest github version of this library, run the following using pip:

```bash
$ pip install git+https://github.com/clementchadebec/benchmark_VAE.git
```

Alternatively, you can clone the github repo to access the tests, tutorials and scripts:

```bash
$ git clone https://github.com/clementchadebec/benchmark_VAE.git
```

and install the library:

```bash
$ cd benchmark_VAE
$ pip install -e .
```
Available Models
Below is the list of the models currently implemented in the library.
| Models | Training example | Paper | Official Implementation |
|:---:|:---:|:---:|:---:|
| Autoencoder (AE) | | | |
| Variational Autoencoder (VAE) | | link | |
| Beta Variational Autoencoder (BetaVAE) | | link | |
| VAE with Linear Normalizing Flows (VAE_LinNF) | [Open in Colab](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vae_lin_nf_training.ipynb) | link | |
| VAE with Inverse Autoregressive Flows (VAE_IAF) | [Open in Colab](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vae_iaf_training.ipynb) | link | link |
| Disentangled Beta Variational Autoencoder (DisentangledBetaVAE) | [Open in Colab](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/disentangled_beta_vae_training.ipynb) | link | |
| Disentangling by Factorising (FactorVAE) | | link | |
| Beta-TC-VAE (BetaTCVAE) | | link | link |
| Importance Weighted Autoencoder (IWAE) | | link | link |
| Multiply Importance Weighted Autoencoder (MIWAE) | | link | |
| Partially Importance Weighted Autoencoder (PIWAE) | | link | |
| Combination Importance Weighted Autoencoder (CIWAE) | | link | |
| VAE with perceptual metric similarity (MSSSIM_VAE) | [Open in Colab](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/msssim_vae_training.ipynb) | link | |
| Wasserstein Autoencoder (WAE) | | link | link |
| Info Variational Autoencoder (INFOVAE_MMD) | [Open in Colab](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/info_vae_training.ipynb) | link | |
| VAMP Autoencoder (VAMP) | [Open in Colab](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vamp_training.ipynb) | link | link |
| Hyperspherical VAE (SVAE) | | link | link |
| Poincaré Disk VAE (PoincareVAE) | | link | link |
| Adversarial Autoencoder (Adversarial_AE) | [Open in Colab](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/adversarial_ae_training.ipynb) | link | |
| Variational Autoencoder GAN (VAEGAN) 🥗 | [Open in Colab](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vaegan_training.ipynb) | link | link |
| Vector Quantized VAE (VQVAE) | | link | link |
| Hamiltonian VAE (HVAE) | | link | link |
| Regularized AE with L2 decoder param (RAE_L2) | [Open in Colab](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/rae_l2_training.ipynb) | link | link |
| Regularized AE with gradient penalty (RAE_GP) | | link | link |
| Riemannian Hamiltonian VAE (RHVAE) | | link | link |
| Hierarchical Residual Quantization (HRQVAE) | | link | link |
See the reconstruction and generation results for all aforementioned models.
Available Samplers
Below is the list of the samplers currently implemented in the library.
| Samplers | Models | Paper | Official Implementation |
|:---:|:---:|:---:|:---:|
| Normal prior (NormalSampler) | all models | link | |
| Gaussian mixture (GaussianMixtureSampler) | all models | link | link |
| Two stage VAE sampler (TwoStageVAESampler) | all VAE based models | link | link |
| Unit sphere uniform sampler (HypersphereUniformSampler) | SVAE | link | link |
| Poincaré Disk sampler (PoincareDiskSampler) | PoincareVAE | link | link |
| VAMP prior sampler (VAMPSampler) | VAMP | link | link |
| Manifold sampler (RHVAESampler) | RHVAE | link | link |
| Masked Autoregressive Flow Sampler (MAFSampler) | all models | link | link |
| Inverse Autoregressive Flow Sampler (IAFSampler) | all models | link | link |
| PixelCNN (PixelCNNSampler) | VQVAE | link | |
Reproducibility
We validate the implementations by reproducing some results presented in the original publications when the official code has been released or when enough details about the experimental section of the papers were available. See reproducibility for more details.
Launching a model training
To launch a model training, you only need to call a TrainingPipeline instance.
```python
from pythae.pipelines import TrainingPipeline
from pythae.models import VAE, VAEConfig
from pythae.trainers import BaseTrainerConfig

# Set up the training configuration
my_training_config = BaseTrainerConfig(
    output_dir='my_model',
    num_epochs=50,
    learning_rate=1e-3,
    per_device_train_batch_size=200,
    per_device_eval_batch_size=200,
    train_dataloader_num_workers=2,
    eval_dataloader_num_workers=2,
    steps_saving=20,
    optimizer_cls="AdamW",
    optimizer_params={"weight_decay": 0.05, "betas": (0.91, 0.995)},
    scheduler_cls="ReduceLROnPlateau",
    scheduler_params={"patience": 5, "factor": 0.5}
)

# Set up the model configuration
my_vae_config = VAEConfig(
    input_dim=(1, 28, 28),
    latent_dim=10
)

# Build the model
my_vae_model = VAE(
    model_config=my_vae_config
)

# Build the Pipeline
pipeline = TrainingPipeline(
    training_config=my_training_config,
    model=my_vae_model
)

# Launch the Pipeline
pipeline(
    train_data=your_train_data,  # must be torch.Tensor, np.array or torch datasets
    eval_data=your_eval_data     # must be torch.Tensor, np.array or torch datasets
)
```
At the end of training, the best model weights, model configuration and training configuration are stored in a `final_model` folder available in `my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss` (with `my_model` being the `output_dir` argument of the `BaseTrainerConfig`). If you further set the `steps_saving` argument to a certain value, folders named `checkpoint_epoch_k` containing the best model weights, optimizer, scheduler, configuration and training configuration at epoch *k* will also appear in `my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss`.
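Once training has finished, the saved weights can be reloaded with `AutoModel` (also used in the generation examples below). A minimal sketch, assuming the `output_dir='my_model'` from the example above and picking the most recent training folder by name:

```python
import os

from pythae.models import AutoModel

# Training folders are named MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss,
# so sorting the folder names puts the most recent training last
last_training = sorted(os.listdir('my_model'))[-1]

# Reload the best model saved during training
my_trained_vae = AutoModel.load_from_folder(
    os.path.join('my_model', last_training, 'final_model')
)
```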
Launching a training on benchmark datasets
We also provide a training script example here that can be used to train the models on benchmark datasets (mnist, cifar10, celeba ...). The script can be launched with the following command line:

```bash
python training.py --dataset mnist --model_name ae --model_config 'configs/ae_config.json' --training_config 'configs/base_training_config.json'
```

See README.md for further details on this script.
Launching data generation
Using the GenerationPipeline
The easiest way to launch a data generation from a trained model consists in using the built-in GenerationPipeline provided in Pythae. Say you want to generate 100 samples using a MAFSampler; all you have to do is 1) reload the trained model, 2) define the sampler's configuration and 3) create and launch the GenerationPipeline as follows:
```python
from pythae.models import AutoModel
from pythae.samplers import MAFSamplerConfig
from pythae.pipelines import GenerationPipeline
from pythae.trainers import BaseTrainerConfig

# Retrieve the trained model
my_trained_vae = AutoModel.load_from_folder(
    'path/to/your/trained/model'
)

# Define the sampler's configuration
my_sampler_config = MAFSamplerConfig(
    n_made_blocks=2,
    n_hidden_in_made=3,
    hidden_size=128
)

# Build the pipeline
pipe = GenerationPipeline(
    model=my_trained_vae,
    sampler_config=my_sampler_config
)

# Launch data generation
generated_samples = pipe(
    num_samples=100,
    return_gen=True,         # if False, returns nothing
    train_data=train_data,   # needed to fit the sampler
    eval_data=eval_data,     # needed to fit the sampler
    training_config=BaseTrainerConfig(num_epochs=200)  # training config used to fit the sampler
)
```
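Since the pipeline returns the generated samples when `return_gen=True`, you can inspect them directly. A minimal sketch using matplotlib (already a pythae dependency), assuming MNIST-shaped outputs of size `(num_samples, 1, 28, 28)`:

```python
import matplotlib.pyplot as plt

# generated_samples comes from the GenerationPipeline above
fig, axes = plt.subplots(1, 5, figsize=(10, 2))
for i, ax in enumerate(axes):
    # Drop the channel dimension and move the tensor to CPU for plotting
    ax.imshow(generated_samples[i].detach().cpu().squeeze(), cmap='gray')
    ax.axis('off')
plt.show()
```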
Using the Samplers
Alternatively, you can launch the data generation process from a trained model directly with the sampler. For instance, to generate new data with your sampler, run the following.
```python
from pythae.models import AutoModel
from pythae.samplers import NormalSampler

# Retrieve the trained model
my_trained_vae = AutoModel.load_from_folder(
    'path/to/your/trained/model'
)

# Define your sampler
my_sampler = NormalSampler(
    model=my_trained_vae
)

# Generate samples
gen_data = my_sampler.sample(
    num_samples=50,
    batch_size=10,
    output_dir=None,
    return_gen=True
)
```
If you set `output_dir` to a specific path, the generated images will be saved as `.png` files named `00000000.png`, `00000001.png` ... The samplers can be used with any model as long as it is suited. For instance, a `GaussianMixtureSampler` instance can be used to generate from any model but a `VAMPSampler` will only be usable with a `VAMP` model. Check [here](#available-samplers) to see which ones apply to your model. Be careful that some samplers, such as the `GaussianMixtureSampler`, may need to be fitted by calling the `fit` method before use. Below is an example for the `GaussianMixtureSampler`.
```python
from pythae.models import AutoModel
from pythae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig

# Retrieve the trained model
my_trained_vae = AutoModel.load_from_folder(
    'path/to/your/trained/model'
)

# Define your sampler
gmm_sampler_config = GaussianMixtureSamplerConfig(
    n_components=10
)
gmm_sampler = GaussianMixtureSampler(
    sampler_config=gmm_sampler_config,
    model=my_trained_vae
)

# Fit the sampler
gmm_sampler.fit(train_dataset)

# Generate samples
gen_data = gmm_sampler.sample(
    num_samples=50,
    batch_size=10,
    output_dir=None,
    return_gen=True
)
```
Define your own Autoencoder architecture
Pythae provides you the possibility to define your own neural networks within the VAE models. For instance, say you want to train a Wasserstein AE with a specific encoder and decoder, you can do the following:
```python
import torch

from pythae.models.nn import BaseEncoder, BaseDecoder
from pythae.models.base.base_utils import ModelOutput

class My_Encoder(BaseEncoder):
    def __init__(self, args=None):  # args is a ModelConfig instance
        BaseEncoder.__init__(self)
        self.layers = my_nn_layers()

    def forward(self, x: torch.Tensor) -> ModelOutput:
        out = self.layers(x)
        output = ModelOutput(
            embedding=out  # set the output from the encoder in a ModelOutput instance
        )
        return output

class My_Decoder(BaseDecoder):
    def __init__(self, args=None):
        BaseDecoder.__init__(self)
        self.layers = my_nn_layers()

    def forward(self, x: torch.Tensor) -> ModelOutput:
        out = self.layers(x)
        output = ModelOutput(
            reconstruction=out  # set the output from the decoder in a ModelOutput instance
        )
        return output

my_encoder = My_Encoder()
my_decoder = My_Decoder()
```
And now build the model
```python
from pythae.models import WAE_MMD, WAE_MMD_Config

# Set up the model configuration
my_wae_config = WAE_MMD_Config(
    input_dim=(1, 28, 28),
    latent_dim=10
)

# Build the model
my_wae_model = WAE_MMD(
    model_config=my_wae_config,
    encoder=my_encoder,  # pass your encoder as argument when building the model
    decoder=my_decoder   # pass your decoder as argument when building the model
)
```
**important note 1**: For all AE-based models (AE, WAE, RAE_L2, RAE_GP), both the encoder and decoder must return a `ModelOutput` instance. For the encoder, the `ModelOutput` instance must contain the embeddings under the key `embedding`. For the decoder, the `ModelOutput` instance must contain the reconstructions under the key `reconstruction`.
**important note 2**: For all VAE-based models (VAE, BetaVAE, IWAE, HVAE, VAMP, RHVAE), both the encoder and decoder must return a `ModelOutput` instance. For the encoder, the `ModelOutput` instance must contain the embeddings and log-covariance matrices (of shape batch_size x latent_space_dim) respectively under the keys `embedding` and `log_covariance`. For the decoder, the `ModelOutput` instance must contain the reconstructions under the key `reconstruction`.
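To make note 2 concrete, here is a minimal sketch of a VAE-style encoder; the hidden layer and the two linear heads are illustrative assumptions, only the `embedding` and `log_covariance` keys are prescribed by the library:

```python
import torch
import torch.nn as nn

from pythae.models.nn import BaseEncoder
from pythae.models.base.base_utils import ModelOutput

class My_VAE_Encoder(BaseEncoder):
    def __init__(self, args=None):
        BaseEncoder.__init__(self)
        # Illustrative layers for (1, 28, 28) inputs and a 10-d latent space
        self.hidden = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
        self.embedding_head = nn.Linear(256, 10)       # mean of q(z|x)
        self.log_covariance_head = nn.Linear(256, 10)  # log-variance of q(z|x)

    def forward(self, x: torch.Tensor) -> ModelOutput:
        h = self.hidden(x)
        return ModelOutput(
            embedding=self.embedding_head(h),           # shape: batch_size x latent_dim
            log_covariance=self.log_covariance_head(h)  # shape: batch_size x latent_dim
        )
```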
Using benchmark neural nets
You can also find predefined neural network architectures for the most common data sets (i.e. MNIST, CIFAR, CELEBA ...) that can be loaded as follows:

```python
from pythae.models.nn.benchmarks.mnist import (
    Encoder_Conv_AE_MNIST,   # for AE-based models (only returns embeddings)
    Encoder_Conv_VAE_MNIST,  # for VAE-based models (returns embeddings and log_covariances)
    Decoder_Conv_AE_MNIST
)
```

Replace `mnist` by `cifar` or `celeba` to access the other neural nets.
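These predefined networks can be passed to a model exactly like the custom ones above. A minimal sketch, assuming (as in the pythae tutorials) that the networks are instantiated from the model configuration:

```python
from pythae.models import VAE, VAEConfig
from pythae.models.nn.benchmarks.mnist import (
    Encoder_Conv_VAE_MNIST,
    Decoder_Conv_AE_MNIST
)

# Same configuration as in the training example above
model_config = VAEConfig(input_dim=(1, 28, 28), latent_dim=10)

# Build a VAE with the predefined MNIST networks
model = VAE(
    model_config=model_config,
    encoder=Encoder_Conv_VAE_MNIST(model_config),  # VAE-based: embeddings + log-covariances
    decoder=Decoder_Conv_AE_MNIST(model_config)
)
```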
Distributed Training with Pythae
As of v0.1.0, Pythae now supports distributed training using PyTorch's DDP. It allows you to train your favorite VAE faster and on larger datasets using multi-gpu and/or multi-node training.
To do so, you can build a python script that will then be launched by a launcher (such as srun on a cluster). The only thing needed in the script is to specify some elements relative to the distributed environment (such as the number of nodes/gpus) directly in the training configuration as follows:
```python
from pythae.trainers import BaseTrainerConfig

training_config = BaseTrainerConfig(
    num_epochs=10,
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    train_dataloader_num_workers=8,
    eval_dataloader_num_workers=8,
    dist_backend="nccl",      # distributed backend
    world_size=8,             # number of gpus to use (n_nodes x n_gpus_per_node)
    rank=5,                   # global gpu id
    local_rank=1,             # gpu id within a node
    master_addr="localhost",  # master address
    master_port="12345"       # master port
)
```
See this example script that defines a multi-gpu VQVAE training on the ImageNet dataset. Please note that the way the distributed environment variables (world_size, rank ...) are recovered may be specific to the cluster and launcher you use.
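As a hedged illustration (this is not part of Pythae itself), one way to recover these values is from the environment variables that launchers such as torchrun export; the variable names below are the standard PyTorch DDP ones and may differ on your cluster:

```python
import os

from pythae.trainers import BaseTrainerConfig

# Environment variables exported by torchrun; other launchers (e.g. SLURM)
# may use different names, such as SLURM_PROCID for the global rank
training_config = BaseTrainerConfig(
    num_epochs=10,
    dist_backend="nccl",
    world_size=int(os.environ["WORLD_SIZE"]),
    rank=int(os.environ["RANK"]),
    local_rank=int(os.environ["LOCAL_RANK"]),
    master_addr=os.environ["MASTER_ADDR"],
    master_port=os.environ["MASTER_PORT"]
)
```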
Benchmark
Below are the training times with Pythae for a Vector Quantized VAE (VQ-VAE) trained for 100 epochs on MNIST on V100 16GB GPU(s), for 50 epochs on FFHQ (1024x1024 images) and for 20 epochs on ImageNet-1k on V100 32GB GPU(s).

| | Train Data | 1 GPU | 4 GPUs | 2x4 GPUs |
|:---:|:---:|:---:|:---:|:---:|
| MNIST (VQ-VAE) | 28x28 images (50k) | 235.18 s | 62.00 s | 35.86 s |
| FFHQ 1024x1024 (VQVAE) | 1024x1024 RGB images (60k) | 19h 1min | 5h 6min | 2h 37min |
| ImageNet-1k 128x128 (VQVAE) | 128x128 RGB images (~ 1.2M) | 6h 25min | 1h 41min | 51min 26s |
For each dataset, we provide the benchmarking scripts here
Sharing your models with the HuggingFace Hub 🤗
Pythae also allows you to share your models on the HuggingFace Hub. To do so you need:
- a valid HuggingFace account
- the package huggingface_hub installed in your virtual env. If not, you can install it with

```bash
$ python -m pip install huggingface_hub
```

- to be logged in to your HuggingFace account using

```bash
$ huggingface-cli login
```
Uploading a model to the Hub
Any pythae model can be easily uploaded using the method `push_to_hf_hub`:

```python
my_vae_model.push_to_hf_hub(hf_hub_path="your_hf_username/your_hf_hub_repo")
```

**Note:** If `your_hf_hub_repo` already exists and is not empty, files will be overridden. If the repo `your_hf_hub_repo` does not exist, a folder having the same name will be created.
Downloading models from the Hub
Equivalently, you can download or reload any Pythae model directly from the Hub using the method `load_from_hf_hub`:

```python
from pythae.models import AutoModel

my_downloaded_vae = AutoModel.load_from_hf_hub(hf_hub_path="path_to_hf_repo")
```
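A downloaded model behaves like any locally trained one. For instance, a minimal sketch sampling from it with the `NormalSampler` shown earlier (`"path_to_hf_repo"` is a placeholder repo id):

```python
from pythae.models import AutoModel
from pythae.samplers import NormalSampler

# Reload a model from the Hub
my_downloaded_vae = AutoModel.load_from_hf_hub(hf_hub_path="path_to_hf_repo")

# Sample from it exactly as from a locally trained model
sampler = NormalSampler(model=my_downloaded_vae)
gen_data = sampler.sample(num_samples=50, batch_size=10, output_dir=None, return_gen=True)
```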
Monitoring your experiments with wandb 🧪
Pythae also integrates the experiment tracking tool wandb, allowing users to store their configs, monitor their trainings and compare runs through a graphic interface. To be able to use this feature you will need:
- a valid wandb account
- the package wandb installed in your virtual env. If not, you can install it with

```bash
$ pip install wandb
```

- to be logged in to your wandb account using

```bash
$ wandb login
```
Creating a WandbCallback
Launching an experiment monitoring with wandb in pythae is pretty simple. The only thing a user needs to do is create a WandbCallback instance...
```python
# Create your callback
from pythae.trainers.training_callbacks import WandbCallback

callbacks = []  # the TrainingPipeline expects a list of callbacks
wandb_cb = WandbCallback()  # build the callback

# Set up the callback
wandb_cb.setup(
    training_config=your_training_config,  # training config
    model_config=your_model_config,        # model config
    project_name="your_wandb_project",     # specify your wandb project
    entity_name="your_wandb_entity"        # specify your wandb entity
)

callbacks.append(wandb_cb)  # add it to the callbacks list
```

...and then pass it to the `TrainingPipeline`.

```python
pipeline = TrainingPipeline(
    training_config=config,
    model=model
)

pipeline(
    train_data=train_dataset,
    eval_data=eval_dataset,
    callbacks=callbacks  # pass the callbacks to the TrainingPipeline and you are done!
)
```

You can then go to https://wandb.ai/your_wandb_entity/your_wandb_project to monitor your training.

See the detailed tutorial.
Monitoring your experiments with mlflow 🧪
Pythae also integrates the experiment tracking tool mlflow, allowing users to store their configs, monitor their trainings and compare runs through a graphic interface. To be able to use this feature you will need:
- the package mlflow installed in your virtual env. If not, you can install it with

```bash
$ pip install mlflow
```
Creating a MLFlowCallback
Launching an experiment monitoring with mlflow in pythae is pretty simple. The only thing a user needs to do is create a MLFlowCallback instance...
```python
# Create your callback
from pythae.trainers.training_callbacks import MLFlowCallback

callbacks = []  # the TrainingPipeline expects a list of callbacks
mlflow_cb = MLFlowCallback()  # build the callback

# Set up the callback
mlflow_cb.setup(
    training_config=your_training_config,  # training config
    model_config=your_model_config,        # model config
    run_name="mlflow_cb_example"           # specify your mlflow run
)

callbacks.append(mlflow_cb)  # add it to the callbacks list
```

...and then pass it to the `TrainingPipeline`.

```python
pipeline = TrainingPipeline(
    training_config=config,
    model=model
)

pipeline(
    train_data=train_dataset,
    eval_data=eval_dataset,
    callbacks=callbacks  # pass the callbacks to the TrainingPipeline and you are done!
)
```

You can then visualize your metrics by running the following in the directory where the `./mlruns` folder is located:

```bash
$ mlflow ui
```

See the detailed tutorial.
Monitoring your experiments with comet_ml 🧪
Pythae also integrates the experiment tracking tool comet_ml, allowing users to store their configs, monitor their trainings and compare runs through a graphic interface. To be able to use this feature you will need:
- the package comet_ml installed in your virtual env. If not, you can install it with

```bash
$ pip install comet_ml
```
Creating a CometCallback
Launching an experiment monitoring with comet_ml in pythae is pretty simple. The only thing a user needs to do is create a CometCallback instance...
```python
# Create your callback
from pythae.trainers.training_callbacks import CometCallback

callbacks = []  # the TrainingPipeline expects a list of callbacks
comet_cb = CometCallback()  # build the callback

# Set up the callback
comet_cb.setup(
    training_config=training_config,    # training config
    model_config=model_config,          # model config
    api_key="your_comet_api_key",       # specify your comet api-key
    project_name="your_comet_project",  # specify your comet project
    # offline_run=True,                    # run in offline mode
    # offline_directory='my_offline_runs'  # set the directory to store the offline runs
)

callbacks.append(comet_cb)  # add it to the callbacks list
```

...and then pass it to the `TrainingPipeline`.

```python
pipeline = TrainingPipeline(
    training_config=config,
    model=model
)

pipeline(
    train_data=train_dataset,
    eval_data=eval_dataset,
    callbacks=callbacks  # pass the callbacks to the TrainingPipeline and you are done!
)
```

You can then go to https://comet.com/your_comet_username/your_comet_project to monitor your training.

See the detailed tutorial.
Getting your hands on the code
To help you understand the way pythae works and how you can train your models with this library, we also provide tutorials:
- making_your_own_autoencoder.ipynb shows you how to pass your own networks to the models implemented in pythae
- custom_dataset.ipynb shows you how to use custom datasets with any of the models implemented in pythae
- hf_hub_models_sharing.ipynb shows you how to upload and download models from the HuggingFace Hub
- wandb_experiment_monitoring.ipynb shows you how to monitor your experiments using wandb
- mlflow_experiment_monitoring.ipynb shows you how to monitor your experiments using mlflow
- comet_experiment_monitoring.ipynb shows you how to monitor your experiments using comet_ml
- the models_training folder provides notebooks showing how to train each implemented model and how to sample from it using pythae.samplers
- the scripts folder provides in particular an example of a training script to train the models on benchmark data sets (mnist, cifar10, celeba ...)
Dealing with issues 🛠️
If you are experiencing any issues while running the code, or want to request new features/models to be implemented, please open an issue on github.
Contributing 🚀
You want to contribute to this library by adding a model, a sampler or simply fixing a bug? That's awesome! Thank you! Please see CONTRIBUTING.md to follow the main contributing guidelines.
Results
Reconstruction
First let's have a look at the reconstructed samples taken from the evaluation set.
*(Image table omitted: side-by-side MNIST and CELEBA reconstructions for the eval data and for AE, VAE, Beta-VAE, VAE Lin NF, VAE IAF, Disentangled Beta-VAE, FactorVAE, BetaTCVAE, IWAE, MSSSIM_VAE, WAE, INFO VAE, VAMP, SVAE, Adversarial_AE, VAEGAN, VQVAE, HVAE, RAE_L2, RAE_GP and RHVAE.)*
Generation
Here, we show the generated samples using each model implemented in the library and different samplers.
*(Image table omitted: generated MNIST and CELEBA samples for AE + GaussianMixtureSampler, VAE + NormalSampler, VAE + GaussianMixtureSampler, VAE + TwoStageVAESampler, VAE + MAFSampler, Beta-VAE + NormalSampler, VAE Lin NF + NormalSampler, VAE IAF + NormalSampler, Disentangled Beta-VAE + NormalSampler, FactorVAE + NormalSampler, BetaTCVAE + NormalSampler, IWAE + NormalSampler, MSSSIM_VAE + NormalSampler, WAE + NormalSampler, INFO VAE + NormalSampler, SVAE + HypersphereUniformSampler, VAMP + VAMPSampler, Adversarial_AE + NormalSampler, VAEGAN + NormalSampler, VQVAE + MAFSampler, HVAE + NormalSampler, RAE_L2 + GaussianMixtureSampler, RAE_GP + GaussianMixtureSampler and RHVAE + RHVAESampler.)*
Citation
If you find this work useful or use it in your research, please consider citing us:

```bibtex
@inproceedings{chadebec2022pythae,
  author    = {Chadebec, Cl\'{e}ment and Vincent, Louis and Allassonni\`{e}re, St\'{e}phanie},
  booktitle = {Advances in Neural Information Processing Systems},
  editor    = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
  pages     = {21575--21589},
  publisher = {Curran Associates, Inc.},
  title     = {Pythae: Unifying Generative Autoencoders in Python - A Benchmarking Use Case},
  volume    = {35},
  year      = {2022}
}
```
Owner
- Login: clementchadebec
- Kind: user
- Company: INRIA
- Website: https://clementchadebec.github.io/
- Twitter: CChadebec
- Repositories: 7
- Profile: https://github.com/clementchadebec
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
date-released: 2022-06
message: "If you use this software, please cite it as below."
title: "Pythae: Unifying Generative Autoencoders in Python -- A Benchmarking Use Case"
url: "https://github.com/clementchadebec/benchmark_VAE"
authors:
  - family-names: Chadebec
    given-names: Clément
  - family-names: Vincent
    given-names: Louis J.
  - family-names: Allassonnière
    given-names: Stéphanie
preferred-citation:
  type: conference-paper
  title: "Pythae: Unifying Generative Autoencoders in Python -- A Benchmarking Use Case"
  authors:
    - family-names: Chadebec
      given-names: Clément
    - family-names: Vincent
      given-names: Louis J.
    - family-names: Allassonnière
      given-names: Stéphanie
  collection-title: Advances in Neural Information Processing Systems 35
  collection-type: proceedings
  editors:
    - family-names: Koyejo
      given-names: S.
    - family-names: Mohamed
      given-names: S.
    - family-names: Agarwal
      given-names: A.
    - family-names: Belgrave
      given-names: D.
    - family-names: Cho
      given-names: K.
    - family-names: Oh
      given-names: A.
  start: 21575
  end: 21589
  year: 2022
  publisher:
    name: Curran Associates, Inc.
  url: "https://arxiv.org/abs/2206.08309"
  address: "Online"
```
GitHub Events
Total
- Issues event: 12
- Watch event: 146
- Issue comment event: 4
- Fork event: 15
Last Year
- Issues event: 12
- Watch event: 146
- Issue comment event: 4
- Fork event: 15
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| clementchadebec | c****c@o****r | 321 |
| Soumick Chatterjee, PhD | s****k@l****m | 4 |
| Liang Hou | l****u@o****m | 2 |
| Louis J. Vincent | l****t@g****m | 2 |
| Ravi Hassanaly | 4****8 | 2 |
| Vladimir Vargas-Calderón | v****n@z****m | 2 |
| Craig Russell | c****d@g****m | 1 |
| Liam Chalcroft | l****0@u****k | 1 |
| Paul English | 3****h | 1 |
| Peter Steinbach | p****h@h****e | 1 |
| Sid Mehta | s****0@g****m | 1 |
| Sugato Ray | s****y | 1 |
| Tom Hosking | t****g@g****m | 1 |
| fbosshard | 7****d | 1 |
| nicolassalvy | 1****y | 1 |
| tbouchik | t****1@h****m | 1 |
| willxxy | 9****y | 1 |
| yjlolo | y****5@g****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 70
- Total pull requests: 74
- Average time to close issues: 20 days
- Average time to close pull requests: 4 days
- Total issue authors: 48
- Total pull request authors: 20
- Average comments per issue: 2.16
- Average comments per pull request: 0.61
- Merged pull requests: 65
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 7
- Pull requests: 0
- Average time to close issues: about 1 month
- Average time to close pull requests: N/A
- Issue authors: 7
- Pull request authors: 0
- Average comments per issue: 0.71
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- axu-git (4)
- clementchadebec (4)
- shikhar2333 (3)
- lyangfan (3)
- ravih18 (3)
- willxxy (2)
- ctr26 (2)
- shannjiang (2)
- ErfanMowlaei (2)
- VolodyaCO (2)
- shrave (2)
- jprachir (2)
- tomhosking (2)
- impredicative (2)
- anja-sheppard (1)
Pull Request Authors
- clementchadebec (44)
- soumickmj (5)
- ravih18 (3)
- tomhosking (2)
- willxxy (2)
- liamchalcroft (2)
- VolodyaCO (2)
- liang-hou (2)
- francescomalandrino (2)
- louis-j-vincent (2)
- fbosshard (1)
- yjlolo (1)
- paul-english (1)
- psteinb (1)
- ctr26 (1)
Packages
- Total packages: 2
- Total downloads: 994 last-month (pypi)
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 4 (may contain duplicates)
- Total versions: 21
- Total maintainers: 1
pypi.org: pythae
Unifying Generative Autoencoders in Python
- Homepage: https://github.com/clementchadebec/benchmark_VAE
- Documentation: https://pythae.readthedocs.io/
- License: Apache Software License
- Latest release: 0.1.2 (published over 2 years ago)
Rankings
Maintainers (1)
conda-forge.org: pythae
This library implements some of the most common (Variational) Autoencoder models. In particular it provides the possibility to perform benchmark experiments and comparisons by training the models with the same autoencoding neural network architecture. The feature *make your own autoencoder* allows you to train any of these models with your own data and own Encoder and Decoder neural networks. PyPI: [https://pypi.org/project/pythae](https://pypi.org/project/pythae)
- Homepage: https://github.com/clementchadebec/benchmark_VAE
- License: Apache-2.0
- Latest release: 0.0.9 (published over 3 years ago)
Rankings
Dependencies
- matplotlib >=3.3.2
- torchvision >=0.9.1
- cloudpickle >=2.1.0
- dataclasses >=0.6
- imageio *
- numpy >=1.19
- pickle5 *
- pydantic >=1.8.2
- scikit-learn *
- scipy >=1.7.1
- torch >=1.10.1
- tqdm *
- typing_extensions *
- cloudpickle >=2.1.0
- dataclasses >=0.6
- imageio *
- numpy >=1.19
- pydantic >=1.8.2
- scikit-learn *
- scipy >=1.7.1
- torch >=1.10.1
- tqdm *
- typing_extensions *
- actions/checkout main composite
- actions/setup-python main composite
- codecov/codecov-action v2 composite
- actions/checkout main composite
- actions/setup-python main composite
- Sphinx ==4.1.2
- sphinx-rtd-theme ==0.5.2
- sphinxcontrib-applehelp ==1.0.2
- sphinxcontrib-bibtex ==2.3.0
- sphinxcontrib-devhelp ==1.0.2
- sphinxcontrib-htmlhelp ==2.0.0
- sphinxcontrib-jsmath ==1.0.1
- sphinxcontrib-qthelp ==1.0.3
- sphinxcontrib-serializinghtml ==1.1.5