Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.5%) to scientific vocabulary
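The exact scorer behind the vocabulary-similarity indicator is not documented here; a minimal sketch of one plausible approach is a bag-of-words cosine similarity between the repository text and a reference scientific vocabulary. The sample texts below are purely illustrative.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between word-frequency vectors of two texts."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Illustrative inputs, not the indicator's real corpora.
readme_text = "video super resolution model training configs datasets"
science_vocab = "experiment dataset hypothesis measurement publication"
score = cosine_similarity(readme_text, science_vocab)
```

A low score (like the 8.5% reported above) simply means the two word distributions barely overlap.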
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: JinchengLiang
  • License: apache-2.0
  • Language: Jupyter Notebook
  • Default Branch: main
  • Size: 12.4 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed about 2 years ago
Metadata Files
  • Readme
  • Contributing
  • License
  • Code of conduct
  • Citation

README.md

VSR

Files Location

Installation

All related information is available at mmagic/README.md.

Datasets

All datasets are in tools/dataset_converters.

Analysis

All analysis tools are in tools/analysis_tools.

Outputs

All outputs are saved in outputs.

Quick Start

We implement the IconVSR+ model in iconvsr_net.py to avoid the custom model registry problem.
To use the wavelet-based loss, change the type of pixel_loss in the config to WCPatchLoss.
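The loss swap described above can be sketched in MMagic-style Python config syntax. Keys other than `pixel_loss` and `type`, and the `loss_weight` value, are assumptions; the actual config lives in configs/iconvsr/ in this repo.

```python
# Hypothetical sketch of the config change: switch the pixel loss to the
# wavelet-based WCPatchLoss defined in this repo. loss_weight is assumed.
pixel_loss = dict(type='WCPatchLoss', loss_weight=1.0)

model = dict(
    type='IconVSR',         # generator implemented in iconvsr_net.py
    pixel_loss=pixel_loss,  # previously e.g. a Charbonnier-style pixel loss
)
```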

Train

You can use the following commands to train a model with CPU or single/multiple GPUs.

```shell
# cpu train
CUDA_VISIBLE_DEVICES=-1 python tools/train.py configs/iconvsr/iconvsr_2xb4_reds4.py

# single-gpu train
python tools/train.py configs/iconvsr/iconvsr_2xb4_reds4.py

# multi-gpu train
./tools/dist_train.sh configs/iconvsr/iconvsr_2xb4_reds4.py 8
```

Test

You can use the following commands to test a model with CPU or single/multiple GPUs.

Download best_PSNR.pth first, and then place it in the directory work_dirs/iconvsr_2xb4_reds4.

```shell
# cpu test
CUDA_VISIBLE_DEVICES=-1 python tools/test.py configs/iconvsr/iconvsr_2xb4_reds4.py work_dirs/iconvsr_2xb4_reds4/best_PSNR_iter_29000.pth

# single-gpu test
python tools/test.py configs/iconvsr/iconvsr_2xb4_reds4.py work_dirs/iconvsr_2xb4_reds4/best_PSNR_iter_29000.pth

# multi-gpu test
./tools/dist_test.sh configs/iconvsr/iconvsr_2xb4_reds4.py work_dirs/iconvsr_2xb4_reds4/best_PSNR_iter_29000.pth 8
```

Citation

```bibtex
@misc{mmagic2023,
  title        = {{MMagic}: {OpenMMLab} Multimodal Advanced, Generative, and Intelligent Creation Toolbox},
  author       = {{MMagic Contributors}},
  howpublished = {\url{https://github.com/open-mmlab/mmagic}},
  year         = {2023}
}

@misc{mmediting2022,
  title        = {{MMEditing}: {OpenMMLab} Image and Video Editing Toolbox},
  author       = {{MMEditing Contributors}},
  howpublished = {\url{https://github.com/open-mmlab/mmediting}},
  year         = {2022}
}
```

Owner

  • Login: JinchengLiang
  • Kind: user

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: MMagic
    given-names: Contributors
title: "MMagic: OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox"
version: 1.0.0
date-released: 2023-04-25
url: "https://github.com/open-mmlab/mmagic"
license: Apache-2.0
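CITATION.cff files like the one above are YAML and should normally be read with a YAML parser (e.g. PyYAML's `yaml.safe_load`); as a dependency-free illustration, a minimal reader for simple top-level `key: value` lines might look like this. The embedded text is copied from the file above.

```python
# Hedged sketch: extract simple top-level fields from a CITATION.cff file.
# Handles only flat "key: value" lines, not nested YAML (e.g. authors).
cff_text = '''cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "MMagic: OpenMMLab Multimodal Advanced, Generative, and Intelligent Creation Toolbox"
version: 1.0.0
license: Apache-2.0
'''

def parse_simple_cff(text: str) -> dict:
    fields = {}
    for line in text.splitlines():
        # Skip indented/nested lines and list items; split on the first colon.
        if ':' in line and not line.startswith((' ', '-')):
            key, _, value = line.partition(':')
            fields[key.strip()] = value.strip().strip('"')
    return fields

meta = parse_simple_cff(cff_text)
```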


Dependencies

.github/workflows/lint.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v2 composite
.github/workflows/merge_stage_test.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • codecov/codecov-action v1.0.14 composite
.github/workflows/pr_stage_test.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • codecov/codecov-action v1.0.14 composite
.github/workflows/publish-to-pypi.yml actions
  • actions/checkout v2 composite
  • actions/setup-python v1 composite
.github/workflows/test_mim.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.circleci/docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/docs.txt pypi
  • docutils ==0.16.0
  • modelindex *
  • myst_parser *
  • requests <=2.29.0
  • sphinx ==4.5.0
  • sphinx-autoapi *
  • sphinx-copybutton *
  • sphinx-notfound-page *
  • sphinx-tabs *
  • sphinx_markdown_tables *
requirements/mminstall.txt pypi
  • mmcv >=2.0.0
  • mmengine >=0.4.0
requirements/optional.txt pypi
  • PyQt5 *
  • albumentations *
  • imageio-ffmpeg ==0.4.4
  • mmdet >=3.0.0
  • open_clip_torch *
requirements/readthedocs.txt pypi
  • Pygments *
  • lmdb *
  • lpips *
  • mmcv >=2.0.0rc1
  • mmdet >=3.0.0
  • mmengine *
  • prettytable *
  • regex *
  • scikit-image *
  • tabulate *
  • titlecase *
  • torch *
  • torchvision *
  • tqdm *
requirements/runtime.txt pypi
  • Pillow *
  • av ==8.0.3
  • av *
  • click *
  • controlnet_aux *
  • diffusers >=0.23.0
  • einops *
  • face-alignment <=1.3.4
  • facexlib *
  • lmdb *
  • lpips *
  • mediapipe *
  • numpy *
  • opencv-python *
  • pandas *
  • resize_right *
  • tensorboard *
  • transformers >=4.27.4
requirements/tests.txt pypi
  • albumentations * test
  • controlnet_aux * test
  • coverage <7.0.0 test
  • imageio-ffmpeg ==0.4.4 test
  • interrogate * test
  • mmdet >=3.0.0 test
  • pytest * test
  • transformers >=4.27.4 test
requirements.txt pypi
setup.py pypi