Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (9.3%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: qiu-p
- License: agpl-3.0
- Language: Python
- Default Branch: master
- Size: 1.38 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 3
- Releases: 0
Metadata Files
README.md
The model (MoE-SR) is designed by analogy with the Mixture-of-Experts (MoE) architecture from natural language processing. MoE-SR consists of a gating network, multiple expert networks, and a hybrid output module. The gating network segments LR images by the categories of their contents and assigns each segment to the corresponding expert network. The expert networks upsample the segments allocated by the gating network to obtain high-resolution sub-images. Finally, the hybrid output module stitches the sub-images together with the Alpha Blending algorithm to generate the final high-resolution image. To balance inference speed against performance, we modified YOLO to obtain the gating network.
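The alpha-blended stitching step can be sketched as follows. This is a minimal illustration, not the repository's implementation: it assumes each expert returns a full-frame HR patch (zero outside its region) together with a soft per-pixel alpha mask, and normalizes where regions overlap. The function name `alpha_blend` is an assumption.

```python
import numpy as np

def alpha_blend(patches, alphas):
    """Blend expert HR patches into one image using per-pixel alpha weights.

    patches: list of (H, W, C) float arrays; alphas: list of (H, W) weights in [0, 1].
    """
    num = np.zeros_like(patches[0], dtype=float)
    den = np.zeros(patches[0].shape[:2] + (1,), dtype=float)
    for img, a in zip(patches, alphas):
        num += img * a[..., None]  # weight each patch by its alpha mask
        den += a[..., None]        # accumulate weights for normalization
    # Avoid divide-by-zero where no patch covers a pixel.
    return num / np.clip(den, 1e-8, None)
```

In overlapping regions this reduces to a weighted average of the contributing sub-images, which is what smooths the seams between expert outputs.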
MoE-SR can be regarded as a generic architecture for improving the performance of existing SR models: the expert networks take the form of network containers, so any existing SR model can be swapped in as an expert. To fully leverage MoE-SR, we build an HR-LR paired training set with additional category labels to train it. Furthermore, additional comparative experiments show that when different SR models are used as expert networks, each consistently specializes in handling images with its corresponding category label.
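The network-container idea above can be sketched as a thin dispatcher: the gating network yields (region, category) pairs, and the expert registered for each category upsamples its own region. All names here (`MoESR`, `gate`, `experts`, `blender`) are illustrative assumptions, not the repository's actual API.

```python
class MoESR:
    """Container-style MoE-SR sketch: experts are swappable SR models."""

    def __init__(self, gate, experts, blender):
        self.gate = gate          # callable: lr_image -> iterable of (region, label)
        self.experts = experts    # dict: category label -> SR model (any existing one)
        self.blender = blender    # stitches HR sub-images (e.g. alpha blending)

    def upscale(self, lr_image):
        # Each segment is routed to the expert for its category, then stitched.
        hr_parts = [self.experts[label](region)
                    for region, label in self.gate(lr_image)]
        return self.blender(hr_parts)
```

In this form, replacing an expert is just rebinding an entry in `experts`, which is how an existing SR model such as VDSR could be plugged in without touching the gating or blending stages.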
Requirements
- Python >= 3.7 (Recommend to use Anaconda or Miniconda)
- PyTorch >= 1.7
- NVIDIA GPU + CUDA
- Linux (not tested on Windows)
Usage
python test_pic.py --opt VDSR/tools/process_img2/options/option_set_03_vdsrwithsegJiekou/options.yml
Dataset
Owner
- Name: qiu-p
- Login: qiu-p
- Kind: user
- Repositories: 1
- Profile: https://github.com/qiu-p
Citation (CITATION.cff)
# This CITATION.cff file was generated with https://bit.ly/cffinit
cff-version: 1.2.0
title: Ultralytics YOLO
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Glenn
    family-names: Jocher
    affiliation: Ultralytics
    orcid: 'https://orcid.org/0000-0001-5950-6979'
  - given-names: Ayush
    family-names: Chaurasia
    affiliation: Ultralytics
    orcid: 'https://orcid.org/0000-0002-7603-6750'
  - family-names: Qiu
    given-names: Jing
    affiliation: Ultralytics
    orcid: 'https://orcid.org/0000-0003-3783-7069'
repository-code: 'https://github.com/ultralytics/ultralytics'
url: 'https://ultralytics.com'
license: AGPL-3.0
version: 8.0.0
date-released: '2023-01-10'
GitHub Events
Dependencies
- actions/checkout v4 composite
- actions/setup-python v5 composite
- codecov/codecov-action v4 composite
- conda-incubator/setup-miniconda v3 composite
- slackapi/slack-github-action v1.25.0 composite
- contributor-assistant/github-action v2.3.1 composite
- actions/checkout v4 composite
- github/codeql-action/analyze v3 composite
- github/codeql-action/init v3 composite
- actions/checkout v4 composite
- docker/login-action v3 composite
- docker/setup-buildx-action v3 composite
- docker/setup-qemu-action v3 composite
- nick-invision/retry v3 composite
- slackapi/slack-github-action v1.25.0 composite
- ultralytics/actions main composite
- actions/first-interaction v1 composite
- actions/checkout v4 composite
- nick-invision/retry v3 composite
- actions/checkout v4 composite
- actions/setup-python v5 composite
- slackapi/slack-github-action v1.25.0 composite
- actions/stale v9 composite
- pytorch/pytorch 2.2.0-cuda12.1-cudnn8-runtime build
- Pillow *
- addict *
- future *
- lmdb *
- numpy *
- opencv-python *
- pyyaml *
- recommonmark *
- requests *
- scikit-image *
- scipy *
- sphinx *
- sphinx_intl *
- sphinx_markdown_tables *
- sphinx_rtd_theme *
- tb-nightly *
- torch >=1.7
- torchvision *
- tqdm *
- yapf *
- Pillow *
- addict *
- future *
- lmdb *
- numpy >=1.17
- opencv-python *
- pyyaml *
- requests *
- scikit-image *
- scipy *
- tb-nightly *
- torch >=1.7
- torchvision *
- tqdm *
- yapf *
- matplotlib >=3.3.0
- opencv-python >=4.6.0
- pandas >=1.1.4
- pillow >=7.1.2
- psutil *
- py-cpuinfo *
- pyyaml >=5.3.1
- requests >=2.23.0
- scipy >=1.4.1
- seaborn >=0.11.0
- thop >=0.1.1
- torch >=1.8.0
- torchvision >=0.9.0
- tqdm >=4.64.0