Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.0%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: renaissanceee
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 4.24 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 1 year ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md

BasicVSR_PlusPlus (CVPR 2022)

[Paper] [Project Page] [Code]

This is the official repository for BasicVSR++. Please feel free to raise issues related to BasicVSR++! If you are also interested in RealBasicVSR, which was likewise accepted to CVPR 2022, please don't hesitate to star it!

Authors: Kelvin C.K. Chan, Shangchen Zhou, Xiangyu Xu, Chen Change Loy, Nanyang Technological University

Acknowledgement: Our work is built upon MMEditing. Please follow and star this repository and MMEditing!

News

  • 2 Dec 2021: Colab demo released
  • 18 Apr 2022: Code released. Also merged into MMEditing
  • 5 Feb 2023: The checkpoints for BasicVSR_2x are released.

TODO

  • [ ] Add BasicVSR_2x architecture
  • [x] ~~Add BasicVSR_2x checkpoints~~
  • [ ] Add data processing scripts
  • [x] ~~Add checkpoints for deblur and denoise~~
  • [x] ~~Add configs for deblur and denoise~~
  • [x] ~~Add Colab demo~~

Pre-trained Weights

You can find the pre-trained weights for deblurring and denoising in this link. For super-resolution and compressed video enhancement, please refer to MMEditing.

Installation

  1. Install PyTorch
  2. `pip install openmim`
  3. `mim install mmcv-full`
  4. `git clone https://github.com/ckkelvinchan/BasicVSR_PlusPlus.git`
  5. `cd BasicVSR_PlusPlus`
  6. `pip install -v -e .`

Inference on a Video

  1. Download the pre-trained weights
  2. `python demo/restoration_video_demo.py ${CONFIG} ${CHKPT} ${IN_PATH} ${OUT_PATH}`

For example, you can download the VSR checkpoint here to `chkpts/basicvsr_plusplus_reds4.pth`, then run:

`python demo/restoration_video_demo.py configs/basicvsr_plusplus_reds4.py chkpts/basicvsr_plusplus_reds4.pth data/demo_000 results/demo_000`

You can also replace `${IN_PATH}` and `${OUT_PATH}` with video paths (e.g., `xxx/yyy.mp4`) to use video files as input and output.
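The demo command takes four positional arguments in a fixed order: config, checkpoint, input path, output path. A minimal `argparse` sketch of that interface is below; the argument names are illustrative, not the demo script's actual source.

```python
import argparse

def build_parser():
    # Mirrors the positional arguments passed to
    # demo/restoration_video_demo.py: config, checkpoint, input, output.
    p = argparse.ArgumentParser(
        description="Video restoration demo interface (illustrative sketch)")
    p.add_argument("config",
                   help="model config, e.g. configs/basicvsr_plusplus_reds4.py")
    p.add_argument("checkpoint",
                   help="path to the downloaded .pth checkpoint")
    p.add_argument("input",
                   help="input frame folder or a video file (e.g. xxx/yyy.mp4)")
    p.add_argument("output",
                   help="output frame folder or a video file")
    return p

args = build_parser().parse_args([
    "configs/basicvsr_plusplus_reds4.py",
    "chkpts/basicvsr_plusplus_reds4.pth",
    "data/demo_000",
    "results/demo_000",
])
print(args.input)  # -> data/demo_000
```

Because the arguments are positional, mixing up the checkpoint and config paths is a common mistake; the parser above makes the expected order explicit.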

Training Models

  1. Put the dataset in the locations specified in the configuration file.
  2. `sh tools/dist_train.sh ${CONFIG} ${NGPUS}`
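As a concrete illustration, the two placeholders expand to a config path and a GPU count; the values below are examples, not project defaults, and the actual launch line is shown commented out.

```shell
# Example values only; substitute your own config and GPU count.
CONFIG=configs/basicvsr_plusplus_reds4.py
NGPUS=8

# The real launch would be:
#   sh tools/dist_train.sh "${CONFIG}" "${NGPUS}"
echo "sh tools/dist_train.sh ${CONFIG} ${NGPUS}"
```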

Data Preprocessing

To be added...

Related Work

Our BasicVSR series:
  1. BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond, CVPR 2021
  2. Investigating Tradeoffs in Real-World Video Super-Resolution, CVPR 2022

More about deformable alignment:
  • Understanding Deformable Alignment in Video Super-Resolution, AAAI 2021

Citations

@inproceedings{chan2022basicvsrpp,
  author    = {Chan, Kelvin C.K. and Zhou, Shangchen and Xu, Xiangyu and Loy, Chen Change},
  title     = {{BasicVSR++}: Improving video super-resolution with enhanced propagation and alignment},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  year      = {2022}
}

@article{chan2022generalization,
  title   = {On the Generalization of {BasicVSR++} to Video Deblurring and Denoising},
  author  = {Chan, Kelvin CK and Zhou, Shangchen and Xu, Xiangyu and Loy, Chen Change},
  journal = {arXiv preprint arXiv:2204.05308},
  year    = {2022}
}

Owner

  • Login: renaissanceee
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: MMEditing
    given-names: Contributors
title: "MMEditing: OpenMMLab Image and Video Editing Toolbox"
version: 0.13.0
date-released: 2022-03-01
url: "https://github.com/open-mmlab/mmediting"
license: Apache-2.0

Dependencies

.github/workflows/build-windows.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • codecov/codecov-action v2 composite
.github/workflows/build.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
  • codecov/codecov-action v2 composite
.github/workflows/lint.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/publish-to-pypi.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v1 composite
docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/docs.txt pypi
  • docutils ==0.16.0
  • mmcls ==0.10.0
  • myst_parser *
  • sphinx ==4.0.2
  • sphinx-copybutton *
  • sphinx_markdown_tables *
requirements/readthedocs.txt pypi
  • lmdb *
  • mmcv *
  • regex *
  • scikit-image *
  • titlecase *
  • torch *
  • torchvision *
requirements/runtime.txt pypi
  • Pillow *
  • av ==8.0.3
  • av *
  • facexlib *
  • lmdb *
  • mmcv-full >=1.3.13
  • numpy *
  • opencv-python <=4.5.4.60
  • tensorboard *
  • torch *
  • torchvision *
requirements/tests.txt pypi
  • codecov * test
  • flake8 * test
  • interrogate * test
  • isort ==5.10.1 test
  • onnxruntime * test
  • pytest * test
  • pytest-runner * test
  • yapf * test
requirements.txt pypi
setup.py pypi