Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.6%) to scientific vocabulary
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: NeonLeexiang
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 10.1 MB
Statistics
  • Stars: 90
  • Watchers: 6
  • Forks: 8
  • Open Issues: 17
  • Releases: 0
Created about 3 years ago · Last pushed about 1 year ago
Metadata Files
Readme License Citation

README.md

DLGSANet: Lightweight Dynamic Local and Global Self-Attention Networks for Image Super-Resolution

Project Page | Paper (arXiv) | Supplemental Material

This repository is the official PyTorch implementation of our paper, DLGSANet: Lightweight Dynamic Local and Global Self-Attention Networks for Image Super-Resolution.

Xiang Li¹, Jinshan Pan¹, Jinhui Tang¹, Jiangxin Dong¹

¹IMAG Lab, Nanjing University of Science and Technology

Abstract: We propose an effective lightweight dynamic local and global self-attention network (DLGSANet) to solve image super-resolution. Our method explores the properties of Transformers while having low computational costs. Motivated by the network designs of Transformers, we develop a simple yet effective multi-head dynamic local self-attention (MHDLSA) module to extract local features efficiently. In addition, we note that existing Transformers usually explore all similarities of the tokens between the queries and keys for feature aggregation. However, not all the tokens from the queries are relevant to those in the keys, so using all the similarities does not effectively facilitate high-resolution image reconstruction. To overcome this problem, we develop a sparse global self-attention (SparseGSA) module to select the most useful similarity values so that the most useful global features can be better utilized for high-resolution image reconstruction. We develop a hybrid dynamic-Transformer block (HDTB) that integrates the MHDLSA and SparseGSA for both local and global feature exploration. To ease the network training, we formulate the HDTBs into a residual hybrid dynamic-Transformer group (RHDTG). By embedding the RHDTGs into an end-to-end trainable network, we show that our proposed method has fewer network parameters and lower computational costs while achieving competitive accuracy against state-of-the-art methods.
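
To make the "select the most useful similarity values" idea concrete, here is a minimal, self-contained PyTorch sketch of top-k sparse attention, where only the k largest query-key similarities per query are kept before the softmax. This is an illustrative example only, not the authors' SparseGSA implementation; the function name and the choice of k are hypothetical.

```
# Illustrative top-k sparse self-attention sketch (NOT the authors' SparseGSA).
import torch
import torch.nn.functional as F


def topk_sparse_attention(q, k, v, topk=8):
    """Keep only the top-k similarity values per query token before softmax.

    q, k, v: tensors of shape (batch, num_tokens, dim).
    """
    scale = q.shape[-1] ** -0.5
    sim = (q @ k.transpose(-2, -1)) * scale      # (B, N, N) similarity map
    # Mask out all but the k largest similarities for each query token.
    topk_vals, _ = sim.topk(topk, dim=-1)
    threshold = topk_vals[..., -1, None]         # smallest kept value per query
    sim = sim.masked_fill(sim < threshold, float("-inf"))
    attn = F.softmax(sim, dim=-1)                # masked entries become zero
    return attn @ v                              # aggregate only the kept tokens


if __name__ == "__main__":
    x = torch.randn(1, 64, 32)                   # 64 tokens, 32 channels
    out = topk_sparse_attention(x, x, x, topk=8)
    print(out.shape)                             # torch.Size([1, 64, 32])
```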

Framework


Contents

The contents of this repository are as follows:

  1. Dependencies
  2. Train
  3. Test

Dependencies

  • Python
  • PyTorch (1.11 or 1.13)
  • basicsr
  • cupy-cuda

Extra info:

For more details on the dependencies, please refer to requirements.txt
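
As a quick sanity check before launching the training or testing scripts, the following minimal Python snippet (illustrative, not part of the repository) verifies that the main dependencies are importable and that CUDA is visible:

```
# Environment sanity check (illustrative; pinned versions are in requirements.txt).
import torch

print("PyTorch:", torch.__version__)            # README suggests 1.11 or 1.13
print("CUDA available:", torch.cuda.is_available())

try:
    import basicsr
    print("BasicSR:", basicsr.__version__)
except ImportError:
    print("basicsr is not installed (pip install basicsr)")

try:
    import cupy
    print("CuPy:", cupy.__version__)
except ImportError:
    print("cupy-cuda is not installed (install the wheel matching your CUDA version)")
```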

Train

```
# For X2
sh ./demosbatchfile/SISRClassicDIV2K/trainSISRClassicDIV2KLarge90C6G4BDLGSANetSRx2scratchimgsize48lr5e_4.sh

# For X3
sh ./demosbatchfile/SISRClassicDIV2K/trainSISRClassicDIV2KLarge90C6G4BDLGSANetSRx3scratchimgsize48lr5e_4.sh

# For X4
sh ./demosbatchfile/SISRClassicDIV2K/trainSISRClassicDIV2KLarge90C6G4BDLGSANetSRx4scratchimgsize48lr5e_4.sh
```

Test

```
# For X2
sh ./demosbatchfile/SISRClassicDIV2K/testSISRClassicDIV2KLarge90C6G4BDLGSANetSRx2scratchimgsize48lr5e_4.sh

# For X3
sh ./demosbatchfile/SISRClassicDIV2K/testSISRClassicDIV2KLarge90C6G4BDLGSANetSRx3scratchimgsize48lr5e_4.sh

# For X4
sh ./demosbatchfile/SISRClassicDIV2K/testSISRClassicDIV2KLarge90C6G4BDLGSANetSRx4scratchimgsize48lr5e_4.sh
```


Results

  • Pretrained models and visual results

| Degradation | Model Zoo | Visual Results |
| :----- | :-----: | :-----: |
| BI-Efficient SR | Google Drive | Google Drive |
| BI-Classic SR | To-Do | To-Do |
| BI-Classic SR (x4) | Google Drive / Baidu Netdisk code:IMAG | Google Drive / Baidu Netdisk code:IMAG |

  • Lightweight models PSNR / SSIM

Unfortunately, the pre-trained weights for the lightweight models were lost, and the GitHub project was not updated for a year due to personal reasons. Recently, I rebuilt the framework and retrained the lightweight models on a single RTX 3090 GPU using the same training settings as the paper (batch size 16). The retrained models differ slightly from those reported in the paper (PSNR within 0.05 dB) due to differences in devices and PyTorch/CUDA versions. New pre-trained models and visual results will be added to Baidu Netdisk and Google Drive. For research purposes, we recommend either training the models yourself or using the results from the retrained lightweight models.

DLGSANet-Tiny:

| model-scale | Set5 | Set14 | BSDS100 | Urban100 | Manga109 |
|:-----------------|:------------------:|:------------------:|:------------------:|:------------------:|:-------------------:|
| Tiny-x2 (paper) | 38.16 / 0.9611 | 33.92 / 0.9202 | 32.26 / 0.9007 | 32.82 / 0.9343 | 39.14 / 0.9777 |
| Tiny-x2 | 38.1581 / 0.9615 | 33.8906 / 0.9200 | 32.2828 / 0.9017 | 32.8461 / 0.9343 | 39.1326 / 0.9780 |
| Tiny-x3 (paper) | 34.63 / 0.9288 | 30.57 / 0.8459 | 29.21 / 0.8083 | 28.69 / 0.8630 | 34.10 / 0.9480 |
| Tiny-x3 | 34.6197 / 0.9293 | 30.5370 / 0.8469 | 29.2335 / 0.8100 | 28.7829 / 0.8645 | 34.0463 / 0.9477 |
| Tiny-x4 (paper) | 32.46 / 0.8984 | 28.79 / 0.7861 | 27.70 / 0.7408 | 26.55 / 0.8002 | 30.98 / 0.9137 |
| Tiny-x4 | 32.4957 / 0.8992 | 28.7738 / 0.7862 | 27.7217 / 0.7426 | 26.5675 / 0.8006 | 30.9556 / 0.9142 |

DLGSANet-Light:

| model-scale | Set5 | Set14 | BSDS100 | Urban100 | Manga109 |
|:-----------------|:------------------:|:------------------:|:------------------:|:------------------:|:-------------------:|
| Light-x2 (paper) | 38.20 / 0.9612 | 33.89 / 0.9203 | 32.30 / 0.9012 | 32.94 / 0.9355 | 39.29 / 0.9780 |
| Light-x2 | 38.1577 / 0.9615 | 34.0453 / 0.9216 | 32.3058 / 0.9020 | 32.9323 / 0.9354 | 39.1995 / 0.9780 |
| Light-x3 (paper) | 34.70 / 0.9295 | 30.58 / 0.8465 | 29.24 / 0.8089 | 28.83 / 0.8653 | 34.16 / 0.9483 |
| Light-x3 | 34.6697 / 0.9298 | 30.5621 / 0.8466 | 29.2484 / 0.8101 | 28.8239 / 0.8655 | 34.1938 / 0.9483 |
| Light-x4 (paper) | 32.54 / 0.8993 | 28.84 / 0.7871 | 27.73 / 0.7415 | 26.66 / 0.8033 | 31.13 / 0.9161 |
| Light-x4 | 32.5333 / 0.8998 | 28.6401 / 0.7864 | 27.7299 / 0.7434 | 26.6702 / 0.8036 | 31.0196 / 0.9154 |
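
For reference, the values above are reported as PSNR / SSIM, and SR benchmarks on Set5, Set14, BSDS100, Urban100, and Manga109 typically compute PSNR on the Y channel of YCbCr after cropping a border of `scale` pixels. The snippet below is a generic, self-contained sketch of that common convention, not the exact evaluation code used in this repository:

```
# Generic PSNR-on-Y-channel sketch (common SR evaluation convention; illustrative only).
import numpy as np


def rgb_to_y(img):
    """Convert an HxWx3 RGB image in [0, 255] to the BT.601 Y channel."""
    img = img.astype(np.float64) / 255.0
    return 16.0 + 65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]


def calculate_psnr_y(sr, gt, scale):
    """PSNR between a super-resolved image and its ground truth on the Y channel,
    cropping a border of `scale` pixels as is common in SR benchmarks."""
    sr_y = rgb_to_y(sr)[scale:-scale, scale:-scale]
    gt_y = rgb_to_y(gt)[scale:-scale, scale:-scale]
    mse = np.mean((sr_y - gt_y) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)
```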

Visual Results


To Do

  • Release pre-trained models of the regular models
  • Release the visual results of BI super-resolution

Citation

If this work is helpful for your research, please consider citing the following BibTeX entry.

@article{li2023dlgsanet,
  title={DLGSANet: Lightweight Dynamic Local and Global Self-Attention Networks for Image Super-Resolution},
  author={Li, Xiang and Pan, Jinshan and Tang, Jinhui and Dong, Jiangxin},
  journal={arXiv preprint arXiv:2301.02031},
  year={2023},
}

Acknowledgement

The training code is built on BasicSR and benefits from the outstanding contributions of XPixelGroup.

The following research forms the foundation for the MHDLSA implementation:

On the Connection between Local Attention and Dynamic Depth-wise Convolution (paper / GitHub)

And the following research forms the foundation for the SparseGSA implementation:

Restormer: Efficient Transformer for High-Resolution Image Restoration (paper / GitHub)

Improving Image Restoration by Revisiting Global Information Aggregation (paper / GitHub)

Contact

This repo is currently maintained by Xiang Li (@neonleexiang) and is for academic research use only.

Owner

  • Name: Leexiang
  • Login: NeonLeexiang
  • Kind: user

return neonleexiang

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this project, please cite it as below."
title: "BasicSR: Open Source Image and Video Restoration Toolbox"
version: 1.3.5
date-released: 2022-02-16
url: "https://github.com/XPixelGroup/BasicSR"
license: Apache-2.0
authors:
  - family-names: Wang
    given-names: Xintao
  - family-names: Xie
    given-names: Liangbin
  - family-names: Yu
    given-names: Ke
  - family-names: Chan
    given-names: Kelvin C.K.
  - family-names: Loy
    given-names: Chen Change
  - family-names: Dong
    given-names: Chao

GitHub Events

Total
  • Issues event: 1
  • Watch event: 6
  • Issue comment event: 6
  • Push event: 1
Last Year
  • Issues event: 1
  • Watch event: 6
  • Issue comment event: 6
  • Push event: 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 21
  • Total pull requests: 0
  • Average time to close issues: 18 days
  • Average time to close pull requests: N/A
  • Total issue authors: 18
  • Total pull request authors: 0
  • Average comments per issue: 1.29
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • 1016cxx (2)
  • JiangYun77 (2)
  • BeauGeogeo (2)
  • Lucien66 (1)
  • cenchaojun (1)
  • Thothene (1)
  • FlotingDream (1)
  • ronjonxu (1)
  • sahleywuyi (1)
  • dslisleedh (1)
  • sairights (1)
  • simplew2011 (1)
  • BlueBabyYoda46 (1)
  • ButoneDream (1)
  • shenyongkun (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

requirements.txt pypi
  • absl-py ==1.1.0
  • addict ==2.4.0
  • blessed ==1.19.1
  • cachetools ==5.2.0
  • certifi ==2022.6.15
  • charset-normalizer ==2.1.1
  • click ==8.1.3
  • docker-pycreds ==0.4.0
  • einops ==0.4.1
  • fastrlock ==0.8
  • future ==0.18.2
  • gitdb ==4.0.9
  • gitpython ==3.1.27
  • google-auth ==2.11.0
  • google-auth-oauthlib ==0.4.6
  • gpustat ==1.0.0
  • grpcio ==1.48.0
  • idna ==3.4
  • imageio ==2.21.3
  • importlib-metadata ==4.12.0
  • lmdb ==1.3.0
  • markdown ==3.4.1
  • networkx ==2.8.6
  • numpy ==1.23.3
  • nvidia-ml-py ==11.495.46
  • oauthlib ==3.2.1
  • opencv-python ==4.6.0.66
  • packaging ==21.3
  • pathtools ==0.1.2
  • pillow ==9.2.0
  • pip ==22.1.2
  • promise ==2.3
  • protobuf ==3.19.5
  • psutil ==5.9.2
  • pyasn1 ==0.4.8
  • pyasn1-modules ==0.2.8
  • pyparsing ==3.0.9
  • pywavelets ==1.3.0
  • requests ==2.28.1
  • requests-oauthlib ==1.3.1
  • rsa ==4.8
  • scikit-image ==0.19.3
  • scipy ==1.9.1
  • sentry-sdk ==1.9.8
  • setproctitle ==1.2.3
  • setuptools ==63.4.1
  • shortuuid ==1.0.9
  • six ==1.16.0
  • smmap ==5.0.0
  • tb-nightly ==2.11.0a20220915
  • tensorboard-data-server ==0.6.1
  • tensorboard-plugin-wit ==1.8.1
  • tifffile ==2022.5.4
  • tqdm ==4.64.1
  • typing-extensions ==4.3.0
  • urllib3 ==1.26.12
  • wandb ==0.13.3
  • wcwidth ==0.2.5
  • werkzeug ==2.1.2
  • wheel ==0.37.1
  • yapf ==0.32.0
  • zipp ==3.8.1
setup.py pypi