Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 2 of 4 committers (50.0%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.2%) to scientific vocabulary
Keywords
Repository
A general framework for video prediction in PyTorch.
Basic Info
- Host: GitHub
- Owner: AIS-Bonn
- License: MIT
- Language: Python
- Default Branch: main
- Homepage: https://ais-bonn.github.io/vp-suite/
- Size: 47.5 MB
Statistics
- Stars: 25
- Watchers: 2
- Forks: 9
- Open Issues: 8
- Releases: 7
Topics
Metadata Files
README.md
Introduction
Video prediction ('VP') is the task of predicting future frames given some context frames.
As in most computer vision sub-domains, scientific contributions in this field exhibit high variance in the following aspects:
- Training protocol (dataset usage, when to backprop, value ranges etc.)
- Technical details of model implementation (deep learning framework, package dependencies etc.)
- Benchmark selection and execution (this includes the choice of dataset, number of context/predicted frames, skipping frames in the observed sequences etc.)
- Evaluation protocol (metrics chosen, variations in implementation/reduction modes, different ways of creating visualizations etc.)
Furthermore, while many contributors nowadays share their code, seemingly minor missing parts such as dataloaders make it much harder to assess, compare and improve existing models.
This repo aims to provide a suite that facilitates scientific work in the subfield, offering standardized yet customizable solutions for the aspects mentioned above. This way, validating existing VP models and creating new ones hopefully becomes much less tedious.
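For intuition about the task itself, here is a toy baseline in NumPy that simply repeats the last context frame, in the spirit of the CopyLastFrame baseline the suite ships. This is an illustrative sketch only, not the package's implementation:

```python
import numpy as np

# Toy "copy last frame" baseline: given context frames shaped
# (time, height, width, channels), predict the future by repeating
# the final context frame for every predicted timestep.
def copy_last_frame(context: np.ndarray, pred_frames: int) -> np.ndarray:
    last = context[-1]                     # (H, W, C)
    return np.stack([last] * pred_frames)  # (pred_frames, H, W, C)

context = np.random.rand(5, 64, 64, 3)     # 5 context frames
pred = copy_last_frame(context, pred_frames=10)
print(pred.shape)  # (10, 64, 64, 3)
```

Despite its simplicity, such a baseline is a useful sanity check: a learned model that cannot beat it on a given dataset is not actually predicting motion.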
Installation
Requires pip and Python >= 3.6 (the code is tested with version 3.8).
From PyPI

```
pip install vp-suite
```

From source

```
pip install git+https://github.com/Flunzmas/vp-suite.git
```

If you want to contribute

```
git clone https://github.com/Flunzmas/vp-suite.git
cd vp-suite
pip install -e .[dev]
```

If you want to build docs

```
git clone https://github.com/Flunzmas/vp-suite.git
cd vp-suite
pip install -e .[doc]
```

Note: If using zsh while installing the package with extras, be sure to quote the arguments, e.g. `pip install -e '.[dev]'`.
Usage
Changing save location
When using this package for the first time, the save location for datasets, models and logs is set to a default directory.

Training models
```python
from vp_suite import VPSuite

# 1. Set up the VP Suite.
suite = VPSuite()

# 2. Load one of the provided datasets.
#    They will be downloaded automatically if no downloaded data is found.
suite.load_dataset("MM")  # load moving MNIST dataset from default location

# 3. Create a video prediction model.
suite.create_model('convlstm-shi')  # create a ConvLSTM-based prediction model

# 4. Run the training loop, optionally providing custom configuration.
suite.train(lr=2e-4, epochs=100)
```

This code snippet will train the model, log training progress to your [Weights & Biases](https://wandb.ai) account, save model checkpoints on improvement, and generate and save prediction visualizations.

Evaluating models
```python
from vp_suite import VPSuite

# 1. Set up the VP Suite.
suite = VPSuite()

# 2. Load one of the provided datasets in test mode.
#    They will be downloaded automatically if no downloaded data is found.
suite.load_dataset("MM", split="test")  # load moving MNIST dataset from default location

# 3. Get the filepaths to the models you'd like to test and load the models.
model_dirs = ["out/model_foo/", "out/model_bar/"]
for model_dir in model_dirs:
    suite.load_model(model_dir, ckpt_name="best_model.pth")

# 4. Test the loaded models on the loaded test sets.
suite.test(context_frames=5, pred_frames=10)
```

This code will evaluate the loaded models on the loaded dataset (its test portion, if available), creating detailed summaries of prediction performance across a customizable set of metrics. The results as well as prediction visualizations are saved and logged to [Weights & Biases](https://wandb.ai).

_Note 1: If the specified evaluation protocol or the loaded dataset is incompatible with one of the models, this will raise an error with an explanation._

_Note 2: By default, a [CopyLastFrame](https://github.com/AIS-Bonn/vp-suite/blob/main/vp_suite/models/model_copy_last_frame.py) baseline is also loaded and tested with the other models._

Hyperparameter Optimization
This package uses [optuna](https://github.com/optuna/optuna) to provide hyperparameter optimization functionality. The following snippet provides a full example:

```python
import json

from vp_suite import VPSuite
from vp_suite.defaults import SETTINGS

suite = VPSuite()
suite.load_dataset(dataset="KTH")    # select dataset of choice
suite.create_model(model_id="lstm")  # select model of choice

with open(str((SETTINGS.PKG_RESOURCES / "optuna_example_config.json").resolve()), 'r') as cfg_file:
    optuna_cfg = json.load(cfg_file)
# optuna_cfg specifies the parameters' search intervals and scales; modify as you wish.

suite.hyperopt(optuna_cfg, n_trials=30, epochs=10)
```

This code will, for example, run 30 training loops (called _trials_ by optuna), producing a trained model for each hyperparameter configuration and writing the hyperparameter configuration of the best-performing run to the console.

_Note 1: For hyperopt, visualization, logging and model checkpointing are minimized to reduce IO strain._

_Note 2: Despite optuna's trial pruning capabilities, running a high number of trials might still take a lot of time. In that case, consider e.g. reducing the number of training epochs._

Use `no_wandb=True` if you want to log outputs to the console instead, and `no_vis=True` if you do not want to generate and save visualizations.

Notes:
- Use `VPSuite.list_available_models()` and `VPSuite.list_available_datasets()` to get an overview of which models and datasets are currently covered by the framework.
- All training, testing and hyperparameter optimization calls can be heavily configured (adjusting training hyperparameters, logging behavior etc.). For a comprehensive list of all adjustable run configuration parameters, see the documentation of the `vp_suite.defaults` package.
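Evaluation summaries build on per-frame metrics such as MSE and PSNR. As a rough, dependency-light illustration of what such a measure computes (this is a sketch, not vp-suite's actual measure implementations, which live in `vp_suite.measure`):

```python
import numpy as np

# Per-frame MSE and PSNR for images with values in [0, 1].
# An evaluation protocol reduces such values over frames and sequences.
def mse(pred: np.ndarray, target: np.ndarray) -> float:
    return float(np.mean((pred - target) ** 2))

def psnr(pred: np.ndarray, target: np.ndarray, data_range: float = 1.0) -> float:
    err = mse(pred, target)
    return float("inf") if err == 0 else float(10 * np.log10(data_range ** 2 / err))

target = np.zeros((64, 64, 3))
pred = np.full((64, 64, 3), 0.1)  # constant error of 0.1 per pixel
print(round(mse(pred, target), 3))   # 0.01
print(round(psnr(pred, target), 1))  # 20.0
```

Reduction choices (mean over frames vs. per-frame curves, per-sequence vs. per-batch) are exactly the kind of evaluation-protocol variance the introduction mentions.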
Customization
This package is designed with quick extensibility in mind. See the sections below for how to add new components (models, model blocks, datasets or measures).
New Models

1. Create a file for the new model.

New Model Blocks

1. Create a file for the new model block.

New Datasets

1. Create a file for the new dataset.

New Measures (losses and/or metrics)

1. Create a new file for the new measure.

Notes:
- If you omit the docstring for a particular attribute/method/field, the docstring of the base class is used for documentation.
- If implementing components that originate from publications/public repositories, please override the corresponding constants to specify the source!
  Additionally, if you want to write automated tests checking implementation equality, have a look at how `tests/test_impl_match.py` fetches the tests of `tests/test_impl_match/` and executes them.
- Basic unit tests for models, datasets and measures are executed on all registered models - you don't need to write such basic tests for your custom components! The same applies to documentation: the tables that list available components are filled automatically.
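The base-class docstring fallback described in the first note mirrors the behavior of Python's `inspect.getdoc`, which walks the MRO when a method has no docstring of its own. A quick illustration with hypothetical `Base`/`MyModel` classes:

```python
import inspect

# The subclass method omits its docstring; inspect.getdoc falls back
# to the base class's docstring by walking the MRO.
class Base:
    def predict(self):
        """Predict future frames from context frames."""

class MyModel(Base):
    def predict(self):  # no docstring of its own
        pass

model = MyModel()
print(inspect.getdoc(model.predict))  # Predict future frames from context frames.
```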
Contributing
This project is always open to extension! It grows especially powerful with more models and datasets, so if you've made your code work on custom models/datasets/metrics/etc., feel free to submit a merge request!
Other kinds of contributions are also very welcome - just check the open issues on the tracker or open up a new issue there.
Unit Testing
When submitting a merge request, please make sure all tests run through (execute from root folder):
```
python -m pytest --runslow --cov=vp_suite -rs
```
Note: this is the easiest way to run all tests without import hassles.
You will need to have vp-suite installed in development mode, though (see here).
API Documentation
The official API documentation is updated automatically upon push to the main branch.
If you want to build the documentation locally, make sure you've installed the package accordingly
and execute the following:
```
cd docs/
bash assemble_docs.sh
```
Citing
Please consider citing us if you find our findings or our repository helpful.
```
@article{karapetyan_VideoPrediction_2022,
  title   = {Video Prediction at Multiple Scales with Hierarchical Recurrent Networks},
  author  = {Karapetyan, Ani and Villar-Corrales, Angel and Boltres, Andreas and Behnke, Sven},
  journal = {arXiv preprint arXiv:2203.09303},
  year    = {2022}
}
```
Acknowledgements
- Project structure is inspired by segmentation_models.pytorch.
- Sphinx-autodoc templates are inspired by the QNET repository.
All other sources are acknowledged in the documentation of the respective point of usage (to the best of our knowledge).
License
This project comes with an MIT License, except for the following components:
- Module `vp_suite.measure.fvd.pytorch_i3d` (Apache 2.0 License, taken and modified from here)
Disclaimer
I do not host or distribute any dataset. For all provided dataset functionality, I trust you have the permission to download and use the respective data.
Owner
- Name: AIS Bonn
- Login: AIS-Bonn
- Kind: organization
- Location: University of Bonn
- Website: http://ais.uni-bonn.de
- Repositories: 59
- Profile: https://github.com/AIS-Bonn
Autonomous Intelligent Systems Group
GitHub Events
Total
- Watch event: 1
Last Year
- Watch event: 1
Committers
Last synced: almost 3 years ago
All Time
- Total Commits: 343
- Total Committers: 4
- Avg Commits per committer: 85.75
- Development Distribution Score (DDS): 0.032
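The DDS figure above matches one minus the top committer's share of all commits; values near 0 mean one author wrote nearly everything. A quick check with the numbers from this report (the formula is my reading of the metric, not taken from the report itself):

```python
# DDS = 1 - (commits by most active committer / total commits)
top_committer_commits = 332  # most active committer
total_commits = 343
dds = 1 - top_committer_commits / total_commits
print(round(dds, 3))  # 0.032
```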
Top Committers
| Name | Email | Commits |
|---|---|---|
| Andreas Boltres | b****s@a****e | 332 |
| Andreas Boltres | a****s@p****e | 8 |
| Andreas Boltres | a****s@u****e | 2 |
| dependabot[bot] | 4****]@u****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 33
- Total pull requests: 1
- Average time to close issues: 6 days
- Average time to close pull requests: 15 minutes
- Total issue authors: 4
- Total pull request authors: 1
- Average comments per issue: 0.12
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 1
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- Flunzmas (30)
- angelvillar96 (1)
- Abhimanyu8713 (1)
- JoeGardner000 (1)
Pull Request Authors
- dependabot[bot] (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 17 last month (PyPI)
- Total dependent packages: 0
- Total dependent repositories: 1
- Total versions: 4
- Total maintainers: 1
pypi.org: vp-suite
A Framework for Training and Evaluating Video Prediction Models
- Homepage: https://ais-bonn.github.io/vp-suite/
- Documentation: https://ais-bonn.github.io/vp-suite/
- License: MIT
- Latest release: 0.0.9 (published almost 4 years ago)
Rankings
Maintainers (1)
Dependencies
- m2r2 *
- sphinx ==4.4.0
- sphinx-autodoc-typehints *
- sphinx-rtd-theme *
- tabulate *
- Pillow ==9.0.1
- imageio ==2.13.4
- matplotlib ==3.5.1
- moviepy ==1.0.3
- numpy ==1.21.5
- opencv_python ==4.5.5.64
- optuna ==2.10.0
- piqa ==1.1.7
- pytest ==6.2.5
- scipy ==1.7.3
- setuptools ==60.3.1
- tfrecord ==1.14.1
- torch ==1.10.1
- torchfile ==0.1.0
- torchvision ==0.11.2
- tqdm ==4.62.3
- wandb ==0.12.9
- gitpython * development
- pytest * development
- pytest-cov * development
- sklearn * development