MSDAN

Lightweight multi-scale distillation attention network for image super-resolution (KBS 2024)

https://github.com/supereeeee/msdan

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: sciencedirect.com
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.3%) to scientific vocabulary

Keywords

lightweight super-resolution
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: Supereeeee
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 18.2 MB
Statistics
  • Stars: 10
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
lightweight super-resolution
Created almost 2 years ago · Last pushed 7 months ago
Metadata Files
Readme License Citation

README.md

Lightweight multi-scale distillation attention network for image super-resolution [paper link]

Environment in our experiments

Python 3.8

Ubuntu 20.04

BasicSR 1.4.2

PyTorch 1.13.0, Torchvision 0.14.0, CUDA 11.7

Installation

git clone https://github.com/Supereeeee/MSDAN.git
cd MSDAN
pip install -r requirements.txt
python setup.py develop

How To Test

· Refer to ./options/test for the configuration file of the model to be tested and prepare the testing data.

· The pre-trained models have been placed in ./experiments/pretrained_models/

· Then run the following command (taking MSDAN_x4.pth as an example):

python basicsr/test.py -opt options/test/test_MSDAN_x4.yml
The testing results will be saved in the ./results folder.
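For orientation, a BasicSR test config of this kind typically contains the sections sketched below. This is an illustrative outline following common BasicSR conventions, not a copy of the repository's actual YAML; dataset names, paths, and values are placeholders.

```yaml
# Illustrative sketch only -- field names follow BasicSR conventions,
# values are placeholders, not taken from the MSDAN repository.
name: test_MSDAN_x4
model_type: SRModel
scale: 4
num_gpu: 1

datasets:
  test_1:
    name: Set5
    type: PairedImageDataset
    dataroot_gt: datasets/Set5/HR
    dataroot_lq: datasets/Set5/LR_bicubic/X4
    io_backend:
      type: disk

network_g:
  type: MSDAN

path:
  pretrain_network_g: experiments/pretrained_models/MSDAN_x4.pth

val:
  save_img: true
```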

How To Train

· Refer to ./options/train for the configuration file of the model to train.

· Preparation of the training data is described in the BasicSR docs. All datasets can be downloaded from their official websites.

· Note that the default training datasets are stored in lmdb format; refer to the BasicSR docs to learn how to generate them.

· The training command is:
python basicsr/train.py -opt options/train/train_MSDAN_x4.yml
For more training commands and details, please check the docs in BasicSR.

Model Complexity

· The network structure of MSDAN is placed at ./basicsr/archs/MSDAN_arch.py

· We adopt the thop tool to calculate model complexity; see ./basicsr/archs/model_complexity.py
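Profilers such as thop report parameter counts and multiply-accumulate operations (MACs) by tracing the model. The arithmetic behind those numbers for a single plain convolution can be sketched by hand; the function below is my own illustration, not code from the repository.

```python
def conv2d_complexity(c_in, c_out, k, h_out, w_out, bias=True):
    """Parameter and MAC counts for one Conv2d layer.

    This mirrors the standard accounting used by profilers like thop:
    every output element costs c_in * k * k multiply-accumulates.
    """
    params = c_in * c_out * k * k + (c_out if bias else 0)
    macs = c_in * c_out * k * k * h_out * w_out
    return params, macs

# Example: a 3x3 conv mapping 64 -> 64 channels on a 48x48 feature map
params, macs = conv2d_complexity(64, 64, 3, 48, 48)
print(params, macs)  # 36928 params, ~84.9 M MACs
```

Summing such per-layer counts over all layers (plus the cheaper element-wise and attention ops) gives the totals that thop prints for the whole network.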

Inference time

· We test the inference time on multiple benchmark datasets on a fully powered (140 W) RTX 3060 laptop GPU.

· You can run ./inference/inference_MSDAN.py on your device.
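For context, inference time is usually measured with warm-up runs followed by the median over several repeats, so one-time costs do not skew the result. Below is a minimal, framework-agnostic sketch of that protocol (function names are my own); on a CUDA device you would additionally call torch.cuda.synchronize() before each clock read, since GPU kernels launch asynchronously.

```python
import statistics
import time

def time_inference(fn, *, warmup=3, repeats=10):
    """Median wall-clock time of fn() in milliseconds.

    Warm-up calls are discarded so one-time costs (allocation,
    kernel compilation) are excluded. With CUDA, synchronize the
    device before each perf_counter() read.
    """
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return statistics.median(samples)

# Example with a stand-in workload instead of a real model forward pass
elapsed_ms = time_inference(lambda: sum(i * i for i in range(10000)))
print(f"{elapsed_ms:.3f} ms")
```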

Acknowledgement

This code is based on the BasicSR toolbox. Thanks for the awesome work.

Contact

If you have any questions, please email 1051823707@qq.com.

Owner

  • Name: Quanwei
  • Login: Supereeeee
  • Kind: user

  • Bio: Bittersweet.

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this project, please cite it as below."
title: "BasicSR: Open Source Image and Video Restoration Toolbox"
version: 1.3.5
date-released: 2022-02-16
url: "https://github.com/XPixelGroup/BasicSR"
license: Apache-2.0
authors:
  - family-names: Wang
    given-names: Xintao
  - family-names: Xie
    given-names: Liangbin
  - family-names: Yu
    given-names: Ke
  - family-names: Chan
    given-names: Kelvin C.K.
  - family-names: Loy
    given-names: Chen Change
  - family-names: Dong
    given-names: Chao

GitHub Events

Total
  • Issues event: 1
  • Watch event: 1
  • Push event: 1
  • Fork event: 1
Last Year
  • Issues event: 1
  • Watch event: 1
  • Push event: 1
  • Fork event: 1

Dependencies

.github/workflows/publish-pip.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v1 composite
  • pypa/gh-action-pypi-publish master composite
.github/workflows/pylint.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/release.yml actions
  • actions/checkout v2 composite
  • actions/create-release v1 composite
docs/requirements.txt pypi
  • Pillow *
  • addict *
  • future *
  • lmdb *
  • numpy *
  • opencv-python *
  • pyyaml *
  • recommonmark *
  • requests *
  • scikit-image *
  • scipy *
  • sphinx *
  • sphinx_intl *
  • sphinx_markdown_tables *
  • sphinx_rtd_theme *
  • tb-nightly *
  • torch >=1.7
  • torchvision *
  • tqdm *
  • yapf *
requirements.txt pypi
  • Pillow *
  • addict *
  • future *
  • lmdb *
  • numpy >=1.17
  • opencv-python *
  • pyyaml *
  • requests *
  • scikit-image *
  • scipy *
  • tb-nightly *
  • torch >=1.7
  • torchvision *
  • tqdm *
  • yapf *
setup.py pypi