plenoptic

Visualize/test models for visual representation by synthesizing images.

https://github.com/plenoptic-org/plenoptic

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 6 DOI reference(s) in README
  • Academic publication links
    Links to: zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (18.9%) to scientific vocabulary
Last synced: 6 months ago

Repository

Visualize/test models for visual representation by synthesizing images.

Basic Info
  • Host: GitHub
  • Owner: plenoptic-org
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Homepage: https://docs.plenoptic.org/
  • Size: 589 MB
Statistics
  • Stars: 74
  • Watchers: 8
  • Forks: 13
  • Open Issues: 84
  • Releases: 7
Created over 6 years ago · Last pushed 7 months ago
Metadata Files
Readme Contributing License Code of conduct Citation Codeowners

README.md

plenoptic



plenoptic is a python library for model-based synthesis of perceptual stimuli. For plenoptic, models are those of visual[^1] information processing: they accept an image[^2] as input, perform some computations, and return some output, which can be mapped to neuronal firing rate, fMRI BOLD response, behavior on some task, image category, etc. The intended audience is researchers in neuroscience, psychology, and machine learning. The generated stimuli enable interpretation of model properties through examination of features that are enhanced, suppressed, or discarded. More importantly, they can facilitate the scientific process, through use in further perceptual or neural experiments aimed at validating or falsifying model predictions.

See our documentation site for more details, including how to get started!

Installation

The best way to install plenoptic is via pip:

```bash
$ pip install plenoptic
```

or conda:

```bash
$ conda install plenoptic -c conda-forge
```

Our dependencies include pytorch and pyrtools. Installation should take care of them (along with our other dependencies) automatically, but if you have an installation problem (especially on a non-Linux operating system), it is likely that the problem lies with one of those packages. Open an issue and we'll try to help you figure out the problem!

See the installation page for more details, including how to set up a virtual environment and jupyter.

ffmpeg and videos

Several methods in this package generate videos. There are several possible backends for saving the animations to file; see the matplotlib documentation for more details. In order to convert them to HTML5 for viewing (and thus to view them in a jupyter notebook), you'll also need ffmpeg installed and on your path. Depending on your system, this might already be installed, but if not, the easiest way is probably through conda: `conda install -c conda-forge ffmpeg`.

To change the backend, run `matplotlib.rcParams['animation.writer'] = writer` before calling any of the animate functions. If you try to set that rcParam to an invalid string, matplotlib will list the available choices.
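A quick way to check for ffmpeg from Python (stdlib only; the commented lines show the writer switch described above, left commented so the snippet doesn't require matplotlib):

```python
import shutil

# ffmpeg must be on your PATH for matplotlib to write HTML5 video.
print("ffmpeg found:", shutil.which("ffmpeg") is not None)

# To select a writer explicitly, before calling any animate function:
#   import matplotlib
#   matplotlib.rcParams["animation.writer"] = "ffmpeg"
```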

Contents

Synthesis methods

  • Metamers: given a model and a reference image, stochastically generate a new image whose model representation is identical to that of the reference image (a "metamer", as originally defined in the literature on trichromacy). This method investigates which image features the model disregards entirely.
  • Eigendistortions: given a model and a reference image, compute the image perturbation that produces the smallest and largest changes in the model response space. This method investigates the image features the model considers the least and most important.
  • Maximal differentiation (MAD) competition: given two metrics that measure distance between images and a reference image, generate pairs of images that optimally differentiate the models. Specifically, synthesize a pair of images that the first model says are equi-distant from the reference while the second model says they are maximally/minimally distant from the reference. Then synthesize a second pair with the roles of the two models reversed. This method allows for efficient comparison of two metrics, highlighting the aspects in which their sensitivities differ.
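For a linear model, the first two ideas have closed-form illustrations. The sketch below is plain numpy, not plenoptic's API (all names are illustrative): it constructs a metamer by projecting onto the set of images with matching responses, and computes eigendistortions as singular vectors of the model matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 16))   # toy linear "model": 16-pixel image -> 4 responses
x = rng.standard_normal(16)        # reference "image"

# Metamer: any y with M y = M x. For a linear model, project a random start
# onto that affine set exactly: y = start + M^+ (M x - M start).
start = rng.standard_normal(16)
y = start + np.linalg.pinv(M) @ (M @ x - M @ start)
assert np.allclose(M @ y, M @ x)   # identical model responses...
assert not np.allclose(y, x)       # ...from a different image: a metamer

# Eigendistortions: the perturbations that change the response most/least,
# i.e. singular vectors of the model's Jacobian (for a linear model, M itself).
U, s, Vt = np.linalg.svd(M, full_matrices=False)
most_distorting = Vt[0]            # largest response change per unit perturbation
least_distorting = Vt[-1]          # smallest (within the row space)
```

plenoptic's synthesis methods generalize this to nonlinear models, where no closed form exists, by iterative gradient-based optimization.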

Models, Metrics, and Model Components

  • Portilla-Simoncelli texture model, which measures the statistical properties of visual textures, here defined as "repeating visual patterns."
  • Steerable pyramid, a multi-scale oriented image decomposition. The basis functions are oriented (steerable) filters, localized in space and frequency. Among other uses, the steerable pyramid serves as a good representation from which to build a primary visual cortex model. See the pyrtools documentation for more details on image pyramids in general and the steerable pyramid in particular.
  • Structural Similarity Index (SSIM), a perceptual similarity metric returning a number between -1 (totally different) and 1 (identical) that reflects how similar two images are. It is based on the images' luminance, contrast, and structure, which are computed convolutionally across the images.
  • Multiscale Structural Similarity Index (MS-SSIM), a perceptual similarity metric similar to SSIM, except that it operates at multiple scales (i.e., spatial frequencies).
  • Normalized Laplacian distance, a perceptual distance metric based on transformations associated with the early visual system: local luminance subtraction and local contrast gain control, at six scales.
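As a concrete reference, the global (single-window) form of SSIM from Wang et al. can be sketched in a few lines of numpy. Note that this is only illustrative: plenoptic's implementation is windowed/convolutional, computing the statistics locally across the image rather than once globally.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Single-window SSIM: compare luminance (means), contrast (variances),
    and structure (covariance) of two images over the whole window."""
    C1, C2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2  # standard constants
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / (
        (mx**2 + my**2 + C1) * (x.var() + y.var() + C2)
    )

rng = np.random.default_rng(1)
img = rng.random((8, 8))
assert np.isclose(ssim_global(img, img), 1.0)  # identical images score 1
```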

Getting help

We communicate via several channels on GitHub:

  • Discussions is the place to ask usage questions, discuss issues too broad for a single issue, or show off what you've made with plenoptic.
  • If you've come across a bug, open an issue.
  • If you have an idea for an extension or enhancement, please post in the ideas section of discussions first. We'll discuss it there and, if we decide to pursue it, open an issue to track progress.
  • See the contributing guide for how to get involved.

In all cases, please follow our code of conduct.

Citing us

If you use plenoptic in a published academic article or presentation, please cite both the code (via its DOI) and the JoV paper. If you are not using the code, but just discussing the project, please cite the paper. You can click on Cite this repository on the right side of the GitHub page to get a copyable citation for the code, or use the following:

  • Code: DOI
  • Paper:

```bibtex
@article{duong2023plenoptic,
  title={Plenoptic: A platform for synthesizing model-optimized visual stimuli},
  author={Duong, Lyndon and Bonnen, Kathryn and Broderick, William and Fiquet, Pierre-{\'E}tienne and Parthasarathy, Nikhil and Yerxa, Thomas and Zhao, Xinyuan and Simoncelli, Eero},
  journal={Journal of Vision},
  volume={23},
  number={9},
  pages={5822--5822},
  year={2023},
  publisher={The Association for Research in Vision and Ophthalmology}
}
```

See the citation guide for more details, including citations for the different synthesis methods and computational models included in plenoptic.

Support

This package is supported by the Simons Foundation Flatiron Institute's Center for Computational Neuroscience.

[^1]: These methods also work with auditory models, such as in Feather et al., 2019, though we haven't yet implemented examples. If you're interested, please post in Discussions!

[^2]: Here and throughout the documentation, we use "image" to describe the input. The models and metrics that are included in plenoptic are intended to work on images, represented as 4d tensors. However, the synthesis methods should also work on videos (5d tensors), audio (3d tensors) and more! If you have a problem using a tensor with different dimensionality, please open an issue!
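The tensor conventions in this footnote look like the following (a minimal numpy sketch of shapes only; plenoptic itself uses pytorch tensors, and the ordering of the extra video/audio dimensions here is an assumption for illustration):

```python
import numpy as np

# Images as 4d tensors: (batch, channel, height, width)
image = np.zeros((1, 1, 256, 256))    # one grayscale 256x256 image

# Per the footnote, synthesis should also work on 5d (video) and 3d (audio):
video = np.zeros((1, 1, 10, 64, 64))  # e.g. a 10-frame grayscale video
audio = np.zeros((1, 1, 44100))       # e.g. one second of mono audio at 44.1 kHz

assert image.ndim == 4 and video.ndim == 5 and audio.ndim == 3
```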

Owner

  • Name: Plenoptic
  • Login: plenoptic-org
  • Kind: organization
  • Location: United States of America

Plenoptic library for image synthesis and related repos

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it using the preferred-citation metadata."
title: "Plenoptic: A platform for synthesizing model-optimized visual stimuli"
authors:
  - family-names: Balzani
    given-names: Edoardo
    orcid: "https://orcid.org/0000-0002-3702-5856"
  - family-names: Bonnen
    given-names: Kathryn
    orcid: "https://orcid.org/0000-0002-9210-8275"
  - family-names: Broderick
    given-names: William
    orcid: "https://orcid.org/0000-0002-8999-9003"
  - family-names: Dettki
    given-names: Hanna
    orcid: "https://orcid.org/0009-0006-7237-6121"
  - family-names: Duong
    given-names: Lyndon
    orcid: "https://orcid.org/0000-0003-0575-1033"
  - family-names: Fiquet
    given-names: Pierre-Étienne
    orcid: "https://orcid.org/0000-0002-8301-2220"
  - family-names: Herrera-Esposito
    given-names: Daniel
    orcid: "https://orcid.org/0000-0001-8181-9787"
  - family-names: Parthasarathy
    given-names: Nikhil
    orcid: "https://orcid.org/0000-0003-2572-6492"
  - family-names: Simoncelli
    given-names: Eero
    orcid: "https://orcid.org/0000-0002-1206-527X"
  - family-names: Venditto
    given-names: Sarah Jo
    orcid: "https://orcid.org/0000-0003-4681-0538"
  - family-names: Viejo
    given-names: Guillaume
    orcid: "https://orcid.org/0000-0002-2450-7397"
  - family-names: Yerxa
    given-names: Thomas
    orcid: "https://orcid.org/0000-0003-2687-0816"
  - family-names: Zhao
    given-names: Xinyuan
    orcid: "https://orcid.org/0000-0003-1770-631X"
doi: 10.5281/zenodo.10151130
repository-code: "https://github.com/plenoptic-org/plenoptic"
url: "https://docs.plenoptic.org/"
license: MIT
keywords:
  - neuroscience
  - visual information processing
  - machine learning
  - explainability
  - computational models

GitHub Events

Total
  • Create event: 46
  • Release event: 3
  • Issues event: 11
  • Watch event: 12
  • Delete event: 42
  • Member event: 1
  • Issue comment event: 134
  • Push event: 248
  • Pull request event: 88
  • Pull request review comment event: 79
  • Pull request review event: 105
  • Fork event: 2
Last Year
  • Create event: 46
  • Release event: 3
  • Issues event: 11
  • Watch event: 12
  • Delete event: 42
  • Member event: 1
  • Issue comment event: 134
  • Push event: 248
  • Pull request event: 88
  • Pull request review comment event: 79
  • Pull request review event: 105
  • Fork event: 2

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 8
  • Total pull requests: 37
  • Average time to close issues: 11 days
  • Average time to close pull requests: 11 days
  • Total issue authors: 4
  • Total pull request authors: 6
  • Average comments per issue: 0.25
  • Average comments per pull request: 2.38
  • Merged pull requests: 30
  • Bot issues: 0
  • Bot pull requests: 5
Past Year
  • Issues: 8
  • Pull requests: 37
  • Average time to close issues: 11 days
  • Average time to close pull requests: 11 days
  • Issue authors: 4
  • Pull request authors: 6
  • Average comments per issue: 0.25
  • Average comments per pull request: 2.38
  • Merged pull requests: 30
  • Bot issues: 0
  • Bot pull requests: 5
Top Authors
Issue Authors
  • billbrod (10)
  • nrposner (1)
  • BalzaniEdoardo (1)
  • emeyer121 (1)
  • dherrera1911 (1)
  • NickleDave (1)
Pull Request Authors
  • billbrod (42)
  • pre-commit-ci[bot] (8)
  • pehf (2)
  • BalzaniEdoardo (2)
  • gviejo (1)
  • sjvenditto (1)
  • hmd101 (1)
  • yarikoptic (1)
  • TrellixVulnTeam (1)
Top Labels
Issue Labels
bug (4) good first issue (1) documentation (1)
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 892 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 6
  • Total maintainers: 1
pypi.org: plenoptic

Python library for model-based stimulus synthesis.

  • Versions: 6
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 892 Last month
Rankings
Dependent packages count: 7.3%
Stargazers count: 11.9%
Forks count: 14.5%
Average: 18.6%
Dependent repos count: 40.9%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/ci.yml actions
  • FedericoCarboni/setup-ffmpeg v2 composite
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • codecov/codecov-action 858dd794fbb81941b6d60b0dca860878cba60fa9 composite
  • re-actors/alls-green afee1c1eac2a506084c274e9c02c8e0687b48d9e composite
.github/workflows/deploy.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • pypa/gh-action-pypi-publish release/v1 composite
jenkins/Dockerfile docker
  • nvidia/cuda 12.2.0-devel-ubuntu20.04 build
pyproject.toml pypi
  • einops >=0.3.0
  • imageio >=2.5
  • matplotlib >=3.3
  • numpy >=1.1
  • pyrtools >=1.0.1
  • scikit-image >=0.15.0
  • scipy >=1.0
  • torch >=1.8,!=1.12.0
  • tqdm >=4.29
environment.yml conda
  • ffmpeg
  • ipywidgets
  • pip
  • python 3.10.*
  • pytorch
  • torchvision