Science Score: 67.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 5 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org, zenodo.org
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (8.4%) to scientific vocabulary
Keywords
Repository
Super Resolution datasets and models in Pytorch
Basic Info
Statistics
- Stars: 206
- Watchers: 7
- Forks: 21
- Open Issues: 1
- Releases: 5
Topics
Metadata Files
README.md
Super-Resolution Networks for Pytorch
Super-resolution is a process that increases the resolution of an image, adding additional details. Methods using neural networks give the most accurate results, much better than classical interpolation methods. With the right training, it is even possible to produce photo-realistic images.
For example, here is a low-resolution image, magnified x4 by a neural network, and a high resolution image of the same object:

In this repository, you will find:
* the popular super-resolution networks, pretrained
* common super-resolution datasets
* Pytorch datasets and transforms adapted to super-resolution
* a unified training script for all models
Models
The following pretrained models are available. Click on the links for the paper:
* EDSR
* CARN
* RDN
* RCAN
* NinaSR
Newer and larger models perform better: the most accurate models are EDSR (huge), RCAN and NinaSR-B2. For practical applications, I recommend a smaller model, such as NinaSR-B1.
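Loading one of these pretrained models takes a single constructor call. A minimal sketch, using the same constructor style as the usage example further below (`scale=2` is just an example value):

```python
# Minimal sketch: load the recommended NinaSR-B1 network, pretrained for 2x upscaling.
from torchsr.models import ninasr_b1

model = ninasr_b1(scale=2, pretrained=True)
model.eval()  # switch to inference mode
```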
Benchmark results
Set5 results
| Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) |
| ------------------- | -------------- | -------------- | -------------- | -------------- |
| carn | 1.59 | 37.88 / 0.9600 | 34.32 / 0.9265 | 32.14 / 0.8942 |
| carn\_m | 0.41 | 37.68 / 0.9594 | 34.06 / 0.9247 | 31.88 / 0.8907 |
| edsr\_baseline | 1.37 | 37.98 / 0.9604 | 34.37 / 0.9270 | 32.09 / 0.8936 |
| edsr | 40.7 | 38.19 / 0.9609 | 34.68 / 0.9293 | 32.48 / 0.8985 |
| ninasr\_b0 | 0.10 | 37.72 / 0.9594 | 33.96 / 0.9234 | 31.77 / 0.8877 |
| ninasr\_b1 | 1.02 | 38.14 / 0.9609 | 34.48 / 0.9277 | 32.28 / 0.8955 |
| ninasr\_b2 | 10.0 | 38.21 / 0.9612 | 34.61 / 0.9288 | 32.45 / 0.8973 |
| rcan | 15.4 | 38.27 / 0.9614 | 34.76 / 0.9299 | 32.64 / 0.9000 |
| rdn | 22.1 | 38.12 / 0.9609 | 33.98 / 0.9234 | 32.35 / 0.8968 |

Set14 results
| Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) |
| ------------------- | -------------- | -------------- | -------------- | -------------- |
| carn | 1.59 | 33.57 / 0.9173 | 30.30 / 0.8412 | 28.61 / 0.7806 |
| carn\_m | 0.41 | 33.30 / 0.9151 | 30.10 / 0.8374 | 28.42 / 0.7764 |
| edsr\_baseline | 1.37 | 33.57 / 0.9174 | 30.28 / 0.8414 | 28.58 / 0.7804 |
| edsr | 40.7 | 33.95 / 0.9201 | 30.53 / 0.8464 | 28.81 / 0.7872 |
| ninasr\_b0 | 0.10 | 33.24 / 0.9144 | 30.02 / 0.8355 | 28.28 / 0.7727 |
| ninasr\_b1 | 1.02 | 33.71 / 0.9189 | 30.41 / 0.8437 | 28.71 / 0.7840 |
| ninasr\_b2 | 10.0 | 34.00 / 0.9206 | 30.53 / 0.8461 | 28.80 / 0.7863 |
| rcan | 15.4 | 34.13 / 0.9216 | 30.63 / 0.8475 | 28.85 / 0.7878 |
| rdn | 22.1 | 33.71 / 0.9182 | 30.07 / 0.8373 | 28.72 / 0.7846 |

DIV2K results (validation set)
| Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) | 8x (PSNR/SSIM) |
| ------------------- | -------------- | -------------- | -------------- | -------------- | -------------- |
| carn | 1.59 | 36.08 / 0.9451 | 32.37 / 0.8871 | 30.43 / 0.8366 | N/A |
| carn\_m | 0.41 | 35.76 / 0.9429 | 32.09 / 0.8827 | 30.18 / 0.8313 | N/A |
| edsr\_baseline | 1.37 | 36.13 / 0.9455 | 32.41 / 0.8878 | 30.43 / 0.8370 | N/A |
| edsr | 40.7 | 36.56 / 0.9485 | 32.75 / 0.8933 | 30.73 / 0.8445 | N/A |
| ninasr\_b0 | 0.10 | 35.77 / 0.9428 | 32.06 / 0.8818 | 30.09 / 0.8293 | 26.60 / 0.7084 |
| ninasr\_b1 | 1.02 | 36.35 / 0.9471 | 32.51 / 0.8892 | 30.56 / 0.8405 | 26.96 / 0.7207 |
| ninasr\_b2 | 10.0 | 36.52 / 0.9482 | 32.73 / 0.8926 | 30.73 / 0.8437 | 27.07 / 0.7246 |
| rcan | 15.4 | 36.61 / 0.9489 | 32.78 / 0.8935 | 30.73 / 0.8447 | 27.17 / 0.7292 |
| rdn | 22.1 | 36.32 / 0.9468 | 32.04 / 0.8822 | 30.61 / 0.8414 | N/A |

B100 results
| Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) |
| ------------------- | -------------- | -------------- | -------------- | -------------- |
| carn | 1.59 | 32.12 / 0.8986 | 29.07 / 0.8042 | 27.58 / 0.7355 |
| carn\_m | 0.41 | 31.97 / 0.8971 | 28.94 / 0.8010 | 27.45 / 0.7312 |
| edsr\_baseline | 1.37 | 32.15 / 0.8993 | 29.08 / 0.8051 | 27.56 / 0.7354 |
| edsr | 40.7 | 32.35 / 0.9019 | 29.26 / 0.8096 | 27.72 / 0.7419 |
| ninasr\_b0 | 0.10 | 31.97 / 0.8974 | 28.90 / 0.8000 | 27.36 / 0.7290 |
| ninasr\_b1 | 1.02 | 32.24 / 0.9004 | 29.13 / 0.8061 | 27.62 / 0.7377 |
| ninasr\_b2 | 10.0 | 32.32 / 0.9014 | 29.23 / 0.8087 | 27.71 / 0.7407 |
| rcan | 15.4 | 32.39 / 0.9024 | 29.30 / 0.8106 | 27.74 / 0.7429 |
| rdn | 22.1 | 32.25 / 0.9006 | 28.90 / 0.8004 | 27.66 / 0.7388 |

Urban100 results
| Network | Parameters (M) | 2x (PSNR/SSIM) | 3x (PSNR/SSIM) | 4x (PSNR/SSIM) |
| ------------------- | -------------- | -------------- | -------------- | -------------- |
| carn | 1.59 | 31.95 / 0.9263 | 28.07 / 0.849 | 26.07 / 0.78349 |
| carn\_m | 0.41 | 31.30 / 0.9200 | 27.57 / 0.839 | 25.64 / 0.76961 |
| edsr\_baseline | 1.37 | 31.98 / 0.9271 | 28.15 / 0.852 | 26.03 / 0.78424 |
| edsr | 40.7 | 32.97 / 0.9358 | 28.81 / 0.865 | 26.65 / 0.80328 |
| ninasr\_b0 | 0.10 | 31.33 / 0.9204 | 27.48 / 0.8374 | 25.45 / 0.7645 |
| ninasr\_b1 | 1.02 | 32.48 / 0.9319 | 28.29 / 0.8555 | 26.25 / 0.7914 |
| ninasr\_b2 | 10.0 | 32.91 / 0.9354 | 28.70 / 0.8640 | 26.54 / 0.8008 |
| rcan | 15.4 | 33.19 / 0.9372 | 29.01 / 0.868 | 26.75 / 0.80624 |
| rdn | 22.1 | 32.41 / 0.9310 | 27.49 / 0.838 | 26.36 / 0.79460 |

All models are defined in torchsr.models. Other useful tools to augment your models, such as self-ensemble methods and tiling, are present in torchsr.models.utils.
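The tables above report PSNR and SSIM. As a rough, hedged sketch of how such metrics can be computed, the piq package (installed further below as an optional training dependency) provides both. Note that the official evaluation also shaves borders and uses the luminance channel, so a naive RGB computation will not match the tables exactly; the tensors here are random placeholders.

```python
# Hedged sketch: PSNR/SSIM with the piq package (optional dependency of the training
# script). Random tensors in [0, 1] stand in for the super-resolved output and the
# high-resolution ground truth; real evaluation also shaves borders and uses luminance.
import torch
import piq

sr = torch.rand(1, 3, 256, 256)  # placeholder super-resolved image
hr = torch.rand(1, 3, 256, 256)  # placeholder high-resolution reference

psnr = piq.psnr(sr, hr, data_range=1.0)
ssim = piq.ssim(sr, hr, data_range=1.0)
print(f"PSNR: {psnr.item():.2f} dB, SSIM: {ssim.item():.4f}")
```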
Datasets
The following datasets are available. Click on the links for the project page:
* DIV2K
* RealSR
* Flickr2K
* REDS
* Set5, Set14, B100, Urban100
All datasets are defined in torchsr.datasets. They return a list of images, with the high-resolution image followed by downscaled or degraded versions.
Data augmentation methods are provided in torchsr.transforms.
Datasets are downloaded automatically when using the download=True flag, or by running the corresponding script, e.g. ./scripts/download_div2k.sh.
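The usage example below passes download=False and assumes the data is already on disk; with download=True the dataset is fetched automatically on first use, as in this minimal sketch (the ./data root is an arbitrary example path):

```python
# Minimal sketch: DIV2K is downloaded into ./data on first use when download=True.
from torchsr.datasets import Div2K

dataset = Div2K(root="./data", scale=2, download=True)
hr, lr = dataset[0]  # high-resolution image followed by its downscaled version
```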
Usage
```python
from torchsr.datasets import Div2K
from torchsr.models import ninasr_b0
from torchvision.transforms.functional import to_pil_image, to_tensor

# Div2K dataset
dataset = Div2K(root="./data", scale=2, download=False)

# Get the first image in the dataset (High-Res and Low-Res)
hr, lr = dataset[0]

# Download a pretrained NinaSR model
model = ninasr_b0(scale=2, pretrained=True)

# Run the Super-Resolution model
lr_t = to_tensor(lr).unsqueeze(0)
sr_t = model(lr_t)
sr = to_pil_image(sr_t.squeeze(0).clamp(0, 1))
sr.show()
```
More examples
```python
from torchsr.datasets import Div2K
from torchsr.models import edsr, rcan
from torchsr.models.utils import ChoppedModel, SelfEnsembleModel
from torchsr.transforms import ColorJitter, Compose, RandomCrop

# Div2K dataset, cropped to 256px, with color jitter
dataset = Div2K(
    root="./data", scale=2, download=False,
    transform=Compose([
        RandomCrop(256, scales=[1, 2]),
        ColorJitter(brightness=0.2)
    ]))

# Pretrained RCAN model, with tiling for large images
model = ChoppedModel(
    rcan(scale=2, pretrained=True), scale=2,
    chop_size=400, chop_overlap=10)

# Pretrained EDSR model, with self-ensemble method for higher quality
model = SelfEnsembleModel(edsr(scale=2, pretrained=True))
```

Training
A script is available to train the models from scratch, evaluate them, and much more. It is not part of the pip package, and requires additional dependencies. More examples are available in scripts/.
```bash
pip install piq tqdm tensorboard  # Additional dependencies
python -m torchsr.train -h
python -m torchsr.train --arch edsr_baseline --scale 2 --download-pretrained --images test/butterfly.png --destination results/
python -m torchsr.train --arch edsr_baseline --scale 2 --download-pretrained --validation-only
python -m torchsr.train --arch edsr_baseline --scale 2 --epochs 300 --loss l1 --dataset-train div2k_bicubic
```
You can evaluate models from the command line as well. For example, for EDSR with the paper's PSNR evaluation:
```bash
python -m torchsr.train --validation-only --arch edsr_baseline --scale 2 --dataset-val set5 --chop-size 400 --download-pretrained --shave-border 2 --eval-luminance
```
Acknowledgements
Thanks to the people behind torchvision and EDSR, whose work inspired this repository. Some of the models available here come from EDSR-PyTorch and CARN-PyTorch.
To cite this work, please use:
```
@misc{torchsr,
  author = {Gabriel Gouvine},
  title = {Super-Resolution Networks for Pytorch},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Coloquinte/torchSR}},
  doi = {10.5281/zenodo.4868308}
}

@misc{ninasr,
  author = {Gabriel Gouvine},
  title = {NinaSR: Efficient Small and Large ConvNets for Super-Resolution},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Coloquinte/torchSR/blob/main/doc/NinaSR.md}},
  doi = {10.5281/zenodo.4868308}
}
```
Owner
- Name: Gabriel Gouvine
- Login: Coloquinte
- Kind: user
- Location: Edinburgh
- Company: AMD
- Repositories: 36
- Profile: https://github.com/Coloquinte
Citation (CITATION.cff)
cff-version: 1.2.0
title: torchSR
message: >-
  If you use this software and want to cite it, please use
  the citation below.
type: software
authors:
  - family-names: Gouvine
    given-names: Gabriel
    orcid: 0000-0003-3404-6659
repository-code: 'https://github.com/Coloquinte/torchSR'
keywords:
  - superresolution
  - pytorch
  - EDSR
  - RCAN
license: MIT
date-released: 2021-05-30
doi: 10.5281/zenodo.4868308
GitHub Events
Total
- Watch event: 20
- Fork event: 2
Last Year
- Watch event: 20
- Fork event: 2
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Gabriel Gouvine | g****T@g****m | 178 |
| Gabriel Gouvine | g****t@m****g | 16 |
| txfs19260817 | d****5@g****m | 2 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 12
- Total pull requests: 1
- Average time to close issues: 2 days
- Average time to close pull requests: 1 day
- Total issue authors: 9
- Total pull request authors: 1
- Average comments per issue: 2.25
- Average comments per pull request: 1.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- OwEnXiaoMing (3)
- vishwa91 (2)
- Fafa87 (1)
- lchunleo (1)
- sc-56 (1)
- jcampbell05 (1)
- FrancescoSaverioZuppichini (1)
- wslgqq277g (1)
- erjihaoshi (1)
Pull Request Authors
- txfs19260817 (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: pypi 30,970 last-month
- Total dependent packages: 0
- Total dependent repositories: 1
- Total versions: 3
- Total maintainers: 1
pypi.org: torchsr
Super Resolution Networks for pytorch
- Homepage: https://github.com/Coloquinte/torchSR
- Documentation: https://torchsr.readthedocs.io/
- License: MIT
- Latest release: 1.0.4 (published over 3 years ago)
Rankings
Maintainers (1)
Dependencies
- torch >=1.6