Science Score: 36.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file: not found
- ✓ codemeta.json file: found
- ○ .zenodo.json file: not found
- ✓ DOI references: found 1 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org, ieee.org
- ○ Committers with academic emails: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (15.1%) to scientific vocabulary
Keywords
brisque
fid
gan
generative-models
image-metrics
image-quality
image-quality-assessment
image-to-image
iqa
kid
measures
metrics
ms-ssim
mse
psnr
python3
pytorch
ssim
vif
Last synced: 6 months ago
Repository
Measures and metrics for image2image tasks. PyTorch.
Basic Info
Statistics
- Stars: 1,512
- Watchers: 12
- Forks: 123
- Open Issues: 48
- Releases: 17
Topics
brisque
fid
gan
generative-models
image-metrics
image-quality
image-quality-assessment
image-to-image
iqa
kid
measures
metrics
ms-ssim
mse
psnr
python3
pytorch
ssim
vif
Created about 6 years ago
· Last pushed almost 2 years ago
Metadata Files
Readme
Contributing
License
README.rst
.. image:: https://raw.githubusercontent.com/photosynthesis-team/piq/master/docs/source/_static/piq_logo_main.png
   :target: https://github.com/photosynthesis-team/piq

..
   PyTorch Image Quality (PIQ) is not endorsed by Facebook, Inc.;
   PyTorch, the PyTorch logo and any related marks are trademarks of Facebook, Inc.
|pypy| |conda| |flake8| |tests| |codecov| |quality_gate|
.. |pypy| image:: https://badge.fury.io/py/piq.svg
   :target: https://pypi.org/project/piq/
   :alt: Pypi Version
.. |conda| image:: https://anaconda.org/photosynthesis-team/piq/badges/version.svg
   :target: https://anaconda.org/photosynthesis-team/piq
   :alt: Conda Version
.. |flake8| image:: https://github.com/photosynthesis-team/piq/workflows/flake-8%20style%20check/badge.svg
   :alt: CI flake-8 style check
.. |tests| image:: https://github.com/photosynthesis-team/piq/workflows/testing/badge.svg
   :alt: CI testing
.. |codecov| image:: https://codecov.io/gh/photosynthesis-team/piq/branch/master/graph/badge.svg
   :target: https://codecov.io/gh/photosynthesis-team/piq
   :alt: codecov
.. |quality_gate| image:: https://sonarcloud.io/api/project_badges/measure?project=photosynthesis-team_photosynthesis.metrics&metric=alert_status
   :target: https://sonarcloud.io/dashboard?id=photosynthesis-team_photosynthesis.metrics
   :alt: Quality Gate Status
.. intro-section-start
`PyTorch Image Quality (PIQ) <https://github.com/photosynthesis-team/piq>`_ is a collection of measures and metrics for
image quality assessment. PIQ helps you concentrate on your experiments without the boilerplate code.
The library contains a continually growing set of measures and metrics.
For measures/metrics that can be used as loss functions, corresponding PyTorch modules are implemented.
We provide:
* A unified interface that is easy to use and extend.
* Written in pure PyTorch with a bare minimum of additional dependencies.
* Extensive user input validation, so your code will not crash in the middle of training.
* Fast (GPU computation available) and reliable.
* Most metrics can be backpropagated for model optimization.
* Support for Python 3.7-3.10.
PIQ was initially named ``PhotoSynthesis.Metrics``.
.. intro-section-end
.. installation-section-start
Installation
------------
`PyTorch Image Quality (PIQ) <https://github.com/photosynthesis-team/piq>`_ can be installed using ``pip``, ``conda`` or ``git``.

If you use ``pip``, install it with:

.. code-block:: sh

   $ pip install piq

If you use ``conda``, install it with:

.. code-block:: sh

   $ conda install piq -c photosynthesis-team -c conda-forge -c PyTorch

If you want to use the latest features straight from master, clone the `PIQ repository <https://github.com/photosynthesis-team/piq>`_:

.. code-block:: sh

   git clone https://github.com/photosynthesis-team/piq.git
   cd piq
   python setup.py install
.. installation-section-end
.. documentation-section-start
Documentation
-------------
The full documentation is available at https://piq.readthedocs.io.
.. documentation-section-end
.. usage-examples-start
Usage Examples
---------------
Image-Based metrics
^^^^^^^^^^^^^^^^^^^
This group of metrics (such as PSNR, SSIM, BRISQUE) takes an image or a pair of images as input and computes a distance between them.
We provide a functional interface, which returns a metric value, and a class interface, which allows using any metric
as a loss function.
.. code-block:: python

   import torch
   from piq import ssim, SSIMLoss

   x = torch.rand(4, 3, 256, 256, requires_grad=True)
   y = torch.rand(4, 3, 256, 256)

   ssim_index: torch.Tensor = ssim(x, y, data_range=1.)

   loss = SSIMLoss(data_range=1.)
   output: torch.Tensor = loss(x, y)
   output.backward()
For a full list of examples, see the image metrics examples in the `documentation <https://piq.readthedocs.io>`_.
Distribution-Based metrics
^^^^^^^^^^^^^^^^^^^^^^^^^^
This group of metrics (such as IS, FID, KID) takes lists of image features and computes the distance between their distributions.
Image features can be extracted by a separate feature-extractor network or with the ``compute_feats`` method of a
metric class.

Note:
   ``compute_feats`` consumes a data loader of a predefined format.
.. code-block:: python

   import torch
   from torch.utils.data import DataLoader
   from piq import FID

   # first_dataset and second_dataset are placeholders for your own datasets;
   # DataLoader requires a dataset instance.
   first_dl, second_dl = DataLoader(first_dataset), DataLoader(second_dataset)

   fid_metric = FID()
   first_feats = fid_metric.compute_feats(first_dl)
   second_feats = fid_metric.compute_feats(second_dl)
   fid: torch.Tensor = fid_metric(first_feats, second_feats)
If you already have image features, use the class interface for score computation:

.. code-block:: python

   import torch
   from piq import FID

   x_feats = torch.rand(10000, 1024)
   y_feats = torch.rand(10000, 1024)

   fid_metric = FID()
   fid: torch.Tensor = fid_metric(x_feats, y_feats)
For a full list of examples, see the feature metrics examples in the `documentation <https://piq.readthedocs.io>`_.
.. usage-examples-end
.. list-of-metrics-start
List of metrics
---------------
Full-Reference (FR)
^^^^^^^^^^^^^^^^^^^
=========== ====== ==========
Acronym     Year   Metric
=========== ====== ==========
PSNR        \-     Peak Signal-to-Noise Ratio
SSIM        2003   Structural Similarity
MS-SSIM     2004   Multi-Scale Structural Similarity
IW-SSIM     2011   Information Content Weighted Structural Similarity Index
VIFp        2004   Visual Information Fidelity
FSIM        2011   Feature Similarity Index Measure
SR-SIM      2012   Spectral Residual Based Similarity
GMSD        2013   Gradient Magnitude Similarity Deviation
MS-GMSD     2017   Multi-Scale Gradient Magnitude Similarity Deviation
VSI         2014   Visual Saliency-induced Index
DSS         2015   DCT Subband Similarity Index
\-          2016   Content Score
\-          2016   Style Score
HaarPSI     2016   Haar Perceptual Similarity Index
MDSI        2016   Mean Deviation Similarity Index
LPIPS       2018   Learned Perceptual Image Patch Similarity
PieAPP      2018   Perceptual Image-Error Assessment through Pairwise Preference
DISTS       2020   Deep Image Structure and Texture Similarity
=========== ====== ==========
No-Reference (NR)
^^^^^^^^^^^^^^^^^
=========== ====== ==========
Acronym     Year   Metric
=========== ====== ==========
TV          1937   Total Variation
BRISQUE     2012   Blind/Referenceless Image Spatial Quality Evaluator
CLIP-IQA    2022   CLIP-IQA
=========== ====== ==========
Distribution-Based (DB)
^^^^^^^^^^^^^^^^^^^^^^^
=========== ====== ==========
Acronym     Year   Metric
=========== ====== ==========
IS          2016   Inception Score
FID         2017   Frechet Inception Distance
GS          2018   Geometry Score
KID         2018   Kernel Inception Distance
MSID        2019   Multi-Scale Intrinsic Distance
PR          2019   Improved Precision and Recall
=========== ====== ==========
.. list-of-metrics-end
.. benchmark-section-start
Benchmark
---------
As part of our library we provide code to benchmark all metrics on a set of common Mean Opinion Score databases.
Currently we support several Full-Reference (`TID2013`_, `KADID10k`_ and `PIPAL`_) and No-Reference (`KonIQ10k`_ and `LIVE-itW`_) datasets.
You need to download them separately and provide the path to the images as an argument to the script.
Here is an example of how to evaluate the SSIM and MS-SSIM metrics on the TID2013 dataset:
.. code-block:: bash

   python3 tests/results_benchmark.py --dataset tid2013 --metrics SSIM MS-SSIM --path ~/datasets/tid2013 --batch_size 16
Below we provide a comparison between Spearman's Rank Correlation Coefficient (SRCC) values obtained with PIQ and those reported in surveys.
Closer SRCC values indicate a higher degree of agreement between the results of computations on the given datasets.
We do not report the Kendall rank correlation coefficient (KRCC)
because it is highly correlated with SRCC and provides limited additional information.
We do not report the Pearson linear correlation coefficient (PLCC)
because it is highly dependent on the fitting method and is biased towards simple examples.
For metrics that can take greyscale or colour images, ``c`` denotes the chromatic version.
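As a quick illustration of how SRCC is computed (a standalone sketch, not part of PIQ; the data below is made up), the coefficient for tie-free data reduces to the closed form ``1 - 6 * sum(d^2) / (n * (n^2 - 1))``, where ``d`` is the per-sample rank difference:

```python
# Sketch: Spearman's Rank Correlation Coefficient (SRCC) between
# hypothetical metric scores and human Mean Opinion Scores (MOS).
# Assumes no ties, so the closed form 1 - 6*sum(d^2)/(n*(n^2-1)) applies.

def rankdata(values):
    """Return 1-based ranks of `values` (ties not handled)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    return ranks

def srcc(x, y):
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

metric_scores = [0.91, 0.85, 0.40, 0.73, 0.22]  # made-up metric outputs
mos = [4.5, 3.6, 2.0, 4.1, 1.2]                 # made-up human ratings
print(round(srcc(metric_scores, mos), 2))       # -> 0.9
```

An SRCC of 1.0 means the metric ranks images exactly as humans do; the tables below compare such values per dataset.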
Full-Reference (FR) Datasets
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
=========== =========================== =========================== ===========================
\           TID2013                     KADID10k                    PIPAL
----------- --------------------------- --------------------------- ---------------------------
Source      PIQ / Reference             PIQ / Reference             PIQ / Reference
=========== =========================== =========================== ===========================
PSNR        0.69 / 0.69 `TID2013`_      0.68 / -                    0.41 / 0.41 `PIPAL`_
SSIM        0.72 / 0.64 `TID2013`_      0.72 / 0.72 `KADID10k`_     0.50 / 0.53 `PIPAL`_
MS-SSIM     0.80 / 0.79 `TID2013`_      0.80 / 0.80 `KADID10k`_     0.55 / 0.46 `PIPAL`_
IW-SSIM     0.78 / 0.78 `Eval2019`_     0.85 / 0.85 `KADID10k`_     0.60 / -
VIFp        0.61 / 0.61 `TID2013`_      0.65 / 0.65 `KADID10k`_     0.50 / -
FSIM        0.80 / 0.80 `TID2013`_      0.83 / 0.83 `KADID10k`_     0.59 / 0.60 `PIPAL`_
FSIMc       0.85 / 0.85 `TID2013`_      0.85 / 0.85 `KADID10k`_     0.59 / -
SR-SIM      0.81 / 0.81 `Eval2019`_     0.84 / 0.84 `KADID10k`_     0.57 / -
SR-SIMc     0.87 / -                    0.87 / -                    0.57 / -
GMSD        0.80 / 0.80 `MS-GMSD`_      0.85 / 0.85 `KADID10k`_     0.58 / -
VSI         0.90 / 0.90 `Eval2019`_     0.88 / 0.86 `KADID10k`_     0.54 / -
DSS         0.79 / 0.79 `Eval2019`_     0.86 / 0.86 `KADID10k`_     0.63 / -
Content     0.71 / -                    0.72 / -                    0.45 / -
Style       0.54 / -                    0.65 / -                    0.34 / -
HaarPSI     0.87 / 0.87 `HaarPSI`_      0.89 / 0.89 `KADID10k`_     0.59 / -
MDSI        0.89 / 0.89 `MDSI`_         0.89 / 0.89 `KADID10k`_     0.59 / -
MS-GMSD     0.81 / 0.81 `MS-GMSD`_      0.85 / -                    0.59 / -
MS-GMSDc    0.89 / 0.89 `MS-GMSD`_      0.87 / -                    0.59 / -
LPIPS-VGG   0.67 / 0.67 `DISTS`_        0.72 / -                    0.57 / 0.58 `PIPAL`_
PieAPP      0.84 / 0.88 `DISTS`_        0.87 / -                    0.70 / 0.71 `PIPAL`_
DISTS       0.81 / 0.83 `DISTS`_        0.88 / -                    0.62 / 0.66 `PIPAL`_
BRISQUE     0.37 / 0.84 `Eval2019`_     0.33 / 0.53 `KADID10k`_     0.21 / -
CLIP-IQA    0.50 / -                    0.48 / -                    0.26 / -
IS          0.26 / -                    0.25 / -                    0.09 / -
FID         0.67 / -                    0.66 / -                    0.18 / -
KID         0.42 / -                    0.66 / -                    0.12 / -
MSID        0.21 / -                    0.32 / -                    0.01 / -
GS          0.37 / -                    0.37 / -                    0.02 / -
=========== =========================== =========================== ===========================
No-Reference (NR) Datasets
^^^^^^^^^^^^^^^^^^^^^^^^^^
=========== =========================== ===========================
\           KonIQ10k                    LIVE-itW
----------- --------------------------- ---------------------------
Source      PIQ / Reference             PIQ / Reference
=========== =========================== ===========================
BRISQUE     0.22 / -                    0.31 / -
CLIP-IQA    0.68 / 0.68 `CLIP-IQA off`_ 0.64 / 0.64 `CLIP-IQA off`_
=========== =========================== ===========================
.. _TID2013: http://www.ponomarenko.info/tid2013.htm
.. _KADID10k: http://database.mmsp-kn.de/kadid-10k-database.html
.. _Eval2019: https://ieeexplore.ieee.org/abstract/document/8847307/
.. _`MDSI`: https://arxiv.org/abs/1608.07433
.. _MS-GMSD: https://ieeexplore.ieee.org/document/7952357
.. _DISTS: https://arxiv.org/abs/2004.07728
.. _HaarPSI: https://arxiv.org/abs/1607.06140
.. _PIPAL: https://arxiv.org/pdf/2011.15002.pdf
.. _IW-SSIM: https://ieeexplore.ieee.org/document/7442122
.. _KonIQ10k: http://database.mmsp-kn.de/koniq-10k-database.html
.. _LIVE-itW: https://live.ece.utexas.edu/research/ChallengeDB/index.html
.. _CLIP-IQA off: https://github.com/IceClear/CLIP-IQA
Unlike FR and NR IQMs, which are designed to compute an image-wise distance, the DB metrics compare distributions of *sets* of images.
To bridge this gap, we adopt a previously proposed way of computing the DB IQMs:
instead of extracting features from whole images, we crop them into overlapping tiles of size ``96 × 96`` with ``stride = 32``.
This pre-processing allows us to treat each pair of images as a pair of distributions of tiles, enabling further comparison.
The other stages of computing the DB IQMs are kept intact.
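The tiling step described above can be sketched with ``torch.Tensor.unfold`` (a minimal illustration of the stated tile/stride settings, not the library's actual benchmark code):

```python
import torch

# Crop a batch of images into overlapping 96x96 tiles with stride 32,
# so distribution-based metrics can compare distributions of tiles.
TILE, STRIDE = 96, 32

x = torch.rand(2, 3, 256, 256)  # hypothetical batch of 2 RGB images

# Unfold over height and width: shape becomes (N, C, nH, nW, TILE, TILE),
# where nH = nW = (256 - 96) // 32 + 1 = 6.
tiles = x.unfold(2, TILE, STRIDE).unfold(3, TILE, STRIDE)

# Flatten the tile grid into a batch of tiles: (N * nH * nW, C, TILE, TILE).
tiles = tiles.permute(0, 2, 3, 1, 4, 5).reshape(-1, 3, TILE, TILE)
print(tiles.shape)  # -> torch.Size([72, 3, 96, 96])
```

Each image thus contributes 36 tiles, and the two resulting tile batches are fed to the DB metric as two distributions.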
.. benchmark-section-end
.. assertions-section-start
Assertions
----------
In PIQ we use assertions to raise meaningful messages when a component doesn't receive an input of the expected type.
This makes prototyping and debugging easier, but it might hurt performance.
To disable all checks, use the Python ``-O`` flag: ``python -O your_script.py``.
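A quick way to see the effect of ``-O`` (a standalone sketch, unrelated to PIQ internals): the flag strips ``assert`` statements at compile time, so input checks cost nothing in optimized runs.

```python
import subprocess
import sys

# A one-liner whose assert always fails; under -O the assert is compiled
# out, so the print statement runs instead.
code = "assert False, 'validation failed'; print('checks skipped')"

normal = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True)
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True)

print(normal.returncode != 0)    # True: AssertionError raised
print(optimized.stdout.strip())  # -> checks skipped
```

Note that ``-O`` disables every ``assert`` in the process, not just PIQ's, so keep it off during development.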
.. assertions-section-end
Roadmap
-------
See the `open issues <https://github.com/photosynthesis-team/piq/issues>`_ for a list of proposed
features and known issues.
Contributing
------------
If you would like to help develop this library, you'll find more information in our contribution guide.
.. citation-section-start
Citation
--------
If you use PIQ in your project, please cite it as follows.

.. code-block:: tex

   @misc{kastryulin2022piq,
     title = {PyTorch Image Quality: Metrics for Image Quality Assessment},
     url = {https://arxiv.org/abs/2208.14818},
     author = {Kastryulin, Sergey and Zakirov, Jamil and Prokopenko, Denis and Dylov, Dmitry V.},
     doi = {10.48550/ARXIV.2208.14818},
     publisher = {arXiv},
     year = {2022}
   }

.. code-block:: tex

   @misc{piq,
     title={{PyTorch Image Quality}: Metrics and Measure for Image Quality Assessment},
     url={https://github.com/photosynthesis-team/piq},
     note={Open-source software available at https://github.com/photosynthesis-team/piq},
     author={Sergey Kastryulin and Dzhamil Zakirov and Denis Prokopenko},
     year={2019}
   }
.. citation-section-end
.. contacts-section-start
Contacts
--------
**Sergey Kastryulin** - `@snk4tr <https://github.com/snk4tr>`_ - ``snk4tr@gmail.com``

**Jamil Zakirov** - `@zakajd <https://github.com/zakajd>`_ - ``djamilzak@gmail.com``

**Denis Prokopenko** - `@denproc <https://github.com/denproc>`_ - ``d.prokopenko@outlook.com``
.. contacts-section-end
Owner
- Name: photosynthesis-team
- Login: photosynthesis-team
- Kind: organization
- Repositories: 1
- Profile: https://github.com/photosynthesis-team
GitHub Events
Total
- Issues event: 2
- Watch event: 116
- Issue comment event: 2
- Fork event: 8
Last Year
- Issues event: 2
- Watch event: 116
- Issue comment event: 2
- Fork event: 8
Committers
Last synced: 9 months ago
Top Committers
| Name | Email (masked) | Commits |
|---|---|---|
| Sergey Kastryulin | s****r@g****m | 101 |
| Denis Prokopenko | 2****c | 51 |
| Jamil | d****k@g****m | 50 |
| Jamil | d****v@p****m | 3 |
| Sarah G | l****i@g****m | 2 |
| nevolin-dmitry-leonid | 3****d | 1 |
| Sergei Belousov | b****2@y****u | 1 |
| Rafael Bischof | 3****f | 1 |
| Pooya Mohammadi Kazaj | p****j@g****m | 1 |
| Pavel Ostyakov | p****a@g****m | 1 |
| Mikhail | 4****1 | 1 |
| Héctor Laria | h****a@h****m | 1 |
| Dmitry V'yal | d****l@g****m | 1 |
Committer Domains (Top 20 + Academic)
yandex.ru: 1
philips.com: 1
Issues and Pull Requests
Last synced: 7 months ago
All Time
- Total issues: 78
- Total pull requests: 46
- Average time to close issues: 4 months
- Average time to close pull requests: 20 days
- Total issue authors: 41
- Total pull request authors: 10
- Average comments per issue: 1.56
- Average comments per pull request: 2.61
- Merged pull requests: 34
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 4
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 3
- Pull request authors: 0
- Average comments per issue: 0.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- snk4tr (14)
- zakajd (13)
- denproc (7)
- luisbarrancos (2)
- lxy51 (2)
- markdjwilliams (2)
- alankras (2)
- adamtheturtle (2)
- cyun-404 (1)
- yinboc (1)
- hubutui (1)
- tazwar22 (1)
- beyzacevik (1)
- jinxiqinghuan (1)
- akulpillai (1)
Pull Request Authors
- denproc (19)
- snk4tr (8)
- zakajd (8)
- adamtheturtle (3)
- bonlime (2)
- pravinboopathy (2)
- pooya-mohammadi (1)
- merunes-goldman (1)
- Pqlet (1)
Top Labels
Issue Labels
bug (34)
feature (28)
enhancement (6)
documentation (2)
question (2)
good first issue (2)
refactoring (1)
wontfix (1)
Pull Request Labels
bug (7)
feature (4)
enhancement (2)
documentation (1)
Packages
- Total packages: 1
- Total downloads: 73,287 last month (pypi)
- Total docker downloads: 392
- Total dependent packages: 7
- Total dependent repositories: 77
- Total versions: 10
- Total maintainers: 3
pypi.org: piq
Measures and metrics for image2image tasks. PyTorch.
- Homepage: https://github.com/photosynthesis-team/piq
- Documentation: https://piq.readthedocs.io/
- License: Apache Software License
- Latest release: 0.8.0 (published over 2 years ago)
Rankings
Dependent packages count: 1.6%
Dependent repos count: 1.7%
Downloads: 1.8%
Stargazers count: 1.9%
Average: 2.5%
Docker downloads count: 3.7%
Forks count: 4.4%
Last synced: 7 months ago
Dependencies
.github/workflows/cd-conda.yml
actions
- actions/checkout v2 composite
- actions/setup-python v2 composite
.github/workflows/cd-pypi.yml
actions
- actions/checkout v2 composite
- actions/setup-python v2 composite
.github/workflows/ci-linting.yml
actions
- actions/checkout v2 composite
- actions/setup-python v2 composite
.github/workflows/ci-mypy.yml
actions
- actions/checkout v2 composite
- actions/setup-python v2 composite
.github/workflows/ci-testing.yml
actions
- actions/cache v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- codecov/codecov-action v1 composite
docs/requirements.txt
pypi
- readthedocs-sphinx-search *
- sphinx *
- sphinx_rtd_theme *
requirements.txt
pypi
- torchvision >=0.10.0