161-drct-saving-image-super-resolution-away-from-information-bottleneck
Science Score: 41.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (10.1%) to scientific vocabulary
Scientific Fields
- Artificial Intelligence and Machine Learning (Computer Science): 40% confidence
Last synced: 4 months ago
Repository
Basic Info
- Host: GitHub
- Owner: SZU-AdvTech-2024
- Default Branch: main
- Size: 0 Bytes
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
- Created: 12 months ago
- Last pushed: 12 months ago
Metadata Files
- Citation: https://github.com/SZU-AdvTech-2024/161-DRCT-Saving-Image-Super-resolution-away-from-Information-Bottleneck/blob/main/
# DRCT: Saving Image Super-resolution away from Information Bottleneck

[arXiv](https://arxiv.org/abs/2404.00722) · [Code](https://github.com/ming053l/DRCT) · [Google Drive](https://drive.google.com/drive/folders/1QJHdSfo-0eFNb96i8qzMJAPw31u9qZ7U?usp=sharing)

## Environment

- [PyTorch >= 1.7](https://pytorch.org/) (it is recommended **not** to use torch 1.8 or 1.12, which cause abnormal performance)
- [BasicSR == 1.3.4.9](https://github.com/XPixelGroup/BasicSR/blob/master/INSTALL.md)

### Installation

```
git clone https://github.com/ming053l/DRCT.git
conda create --name drct python=3.8 -y
conda activate drct
# CUDA 11.6
conda install pytorch==1.12.1 torchvision==0.13.1 torchaudio==0.12.1 cudatoolkit=11.6 -c pytorch -c conda-forge
cd DRCT
pip install -r requirements.txt
python setup.py develop
```

## How To Run Inference on Your Own Dataset

```
python inference.py --input_dir [input_dir] --output_dir [output_dir] --model_path [model_path]
```

## How To Test

- Refer to `./options/test` for the configuration file of the model to be tested, and prepare the testing data and the pretrained model.
- Then run the following command (taking `DRCT_SRx4_ImageNet-pretrain.pth` as an example):

```
python drct/test.py -opt options/test/DRCT_SRx4_ImageNet-pretrain.yml
```

The testing results will be saved in the `./results` folder.

- Refer to `./options/test/DRCT_SRx4_ImageNet-LR.yml` for **inference** without ground-truth images.

**Note that a tile mode is provided for testing with limited GPU memory. You can modify the tile settings in your custom testing option by referring to `./options/test/DRCT_tile_example.yml`.**

## How To Train

- Refer to `./options/train` for the configuration file of the model to train.
- For preparing training data, refer to [this page](https://github.com/XPixelGroup/BasicSR/blob/master/docs/DatasetPreparation.md). The ImageNet dataset can be downloaded from the [official website](https://image-net.org/challenges/LSVRC/2012/2012-downloads.php).
- Validation data can be downloaded from [this page](https://github.com/ChaofWang/Awesome-Super-Resolution/blob/master/dataset.md).
- A typical training command is:

```
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 drct/train.py -opt options/train/train_DRCT_SRx2_from_scratch.yml --launcher pytorch
```

The training logs and weights will be saved in the `./experiments` folder.
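For concreteness, here is a minimal usage sketch assembled from the commands above. The scripts and flags come from the README itself; the file paths and the single-GPU training variant are assumptions for illustration, not commands confirmed by the repository.

```
# Hypothetical example paths; substitute your own data and checkpoint locations.

# Super-resolve a folder of low-resolution images with a pretrained checkpoint:
python inference.py \
    --input_dir ./my_lr_images \
    --output_dir ./my_sr_results \
    --model_path ./experiments/pretrained_models/DRCT_SRx4_ImageNet-pretrain.pth

# Assumed single-GPU training variant: BasicSR-style train scripts typically
# fall back to non-distributed mode when --launcher is omitted (you may also
# need to set num_gpu: 1 in the yml).
CUDA_VISIBLE_DEVICES=0 python drct/train.py -opt options/train/train_DRCT_SRx2_from_scratch.yml
```

The distributed command in the README assumes eight GPUs (`--nproc_per_node=8`); adjust `CUDA_VISIBLE_DEVICES` and `--nproc_per_node` together to match your hardware.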
Owner
- Name: SZU-AdvTech-2024
- Login: SZU-AdvTech-2024
- Kind: organization
- Repositories: 1
- Profile: https://github.com/SZU-AdvTech-2024
Citation (citation.txt)
@inproceedings{REPO161,
  author    = "Hsu, Chih-Chung and Lee, Chia-Ming and Chou, Yi-Shiuan",
  booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops",
  month     = "June",
  pages     = "6133-6142",
  title     = "{DRCT: Saving Image Super-Resolution Away from Information Bottleneck}",
  year      = "2024"
}
GitHub Events
Total
- Push event: 3
- Create event: 3
Last Year
- Push event: 3
- Create event: 3