analogvnn

A fully modular framework for modeling and optimizing analog neural networks

https://github.com/vivswan/analogvnn

Science Score: 77.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 8 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
    1 of 5 committers (20.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.7%) to scientific vocabulary
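The indicators above read like a weighted checklist, but the index does not publish its exact weights. The following sketch is purely illustrative: the indicator names, weights, and resulting score are assumptions, shown only to make the shape of such a scoring scheme concrete (it does not reproduce the 77.0% figure).

```python
# Illustrative only: a weighted-checklist science score.
# The real index's weights are NOT published; these values are made up.
INDICATOR_WEIGHTS = {
    "citation_cff": 0.15,        # CITATION.cff present
    "codemeta_json": 0.10,       # codemeta.json present
    "zenodo_json": 0.10,         # .zenodo.json present
    "doi_references": 0.15,      # DOIs found in README
    "publication_links": 0.10,   # links to arxiv.org etc.
    "academic_committers": 0.15, # fraction of committers with academic emails
    "vocab_similarity": 0.25,    # similarity to scientific vocabulary
}

def science_score(signals: dict) -> float:
    """Weighted sum of indicator signals, each in [0, 1], scaled to percent."""
    total_weight = sum(INDICATOR_WEIGHTS.values())
    score = sum(INDICATOR_WEIGHTS[k] * signals.get(k, 0.0) for k in INDICATOR_WEIGHTS)
    return 100 * score / total_weight

# Signal values taken from the indicator list above.
signals = {
    "citation_cff": 1.0,
    "codemeta_json": 1.0,
    "zenodo_json": 1.0,
    "doi_references": 1.0,
    "publication_links": 1.0,
    "academic_committers": 0.2,   # 1 of 5 committers
    "vocab_similarity": 0.117,    # 11.7% similarity
}
print(round(science_score(signals), 1))
```

With these made-up weights the sketch yields a different number than the index does, which is expected: the point is only that boolean and graded indicators combine into a single percentage.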

Keywords

analog, deep-learning, framework, neural-network, photonics, pytorch
Last synced: 4 months ago

Repository

A fully modular framework for modeling and optimizing analog neural networks

Basic Info
Statistics
  • Stars: 20
  • Watchers: 2
  • Forks: 5
  • Open Issues: 0
  • Releases: 9
Topics
analog, deep-learning, framework, neural-network, photonics, pytorch
Created about 3 years ago · Last pushed over 1 year ago
Metadata Files
Readme · Changelog · License · Citation · Codeowners

README.md

AnalogVNN

Badges: arXiv · APL Machine Learning · Open In Colab · PyPI version · Documentation Status · Python · License: MPL 2.0

Documentation: https://analogvnn.readthedocs.io/

Installation:

```bash
# Current stable release for CPU and GPU
pip install analogvnn

# For additional optional features
pip install analogvnn[full]
```

Usage:

See the Open In Colab notebook for a usage walkthrough.

Abstract

(Figure: a 3-layered linear photonic analog neural network.)

AnalogVNN is a simulation framework built on PyTorch that can simulate the effects of optoelectronic noise, limited precision, and signal normalization present in photonic neural network accelerators. We use this framework to train and optimize linear and convolutional neural networks with up to 9 layers and ~1.7 million parameters, while gaining insights into how normalization, activation function, reduced precision, and noise influence accuracy in analog photonic neural networks. By following the same layer structure design present in PyTorch, the AnalogVNN framework allows users to convert most digital neural network models to their analog counterparts with just a few lines of code, taking full advantage of the open-source optimization, deep learning, and GPU acceleration libraries available through PyTorch.
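The abstract's "~1.7 million parameters" scale can be sanity-checked with simple arithmetic: each fully connected layer `Linear(in, out)` contributes `in*out` weights plus `out` biases. The layer widths below are hypothetical, chosen only to show how such counts add up to roughly that order of magnitude; they are not the architectures used in the paper.

```python
# Parameter count for a stack of fully connected layers.
# Each Linear(n_in, n_out) contributes n_in*n_out weights + n_out biases.
# These layer widths are hypothetical, not taken from the paper.
def linear_params(n_in: int, n_out: int) -> int:
    return n_in * n_out + n_out

widths = [784, 1024, 768, 256, 10]  # hypothetical MLP: 28x28 input -> 10 classes
total = sum(linear_params(a, b) for a, b in zip(widths, widths[1:]))
print(total)  # 1790474, i.e. ~1.79 million parameters
```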

AnalogVNN Paper: https://doi.org/10.1063/5.0134156

Citing AnalogVNN

We would appreciate it if you cited the following paper in any publication for which you used AnalogVNN:

```bibtex
@article{shah2023analogvnn,
  title={AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks},
  author={Shah, Vivswan and Youngblood, Nathan},
  journal={APL Machine Learning},
  volume={1},
  number={2},
  year={2023},
  publisher={AIP Publishing}
}
```

Or in textual form:

```text
Vivswan Shah and Nathan Youngblood. "AnalogVNN: A fully modular framework for
modeling and optimizing photonic neural networks." APL Machine Learning 1.2 (2023).
DOI: 10.1063/5.0134156
```

Owner

  • Name: Vivswan Shah
  • Login: Vivswan
  • Kind: user
  • Company: University of Pittsburgh

PhD Student @ Upitt in Machine Learning and Quantum Computing

Citation (CITATION.cff)

cff-version: 1.2.0
title: 'AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks'
message: 'If you use this software, please cite it as below.'
preferred-citation:
  type: article
  authors:
    - given-names: Vivswan
      family-names: Shah
      email: vivswanshah@pitt.edu
      affiliation: University of Pittsburgh
    - family-names: Youngblood
      given-names: Nathan
      affiliation: University of Pittsburgh
  doi: "10.1063/5.0134156"
  journal: "APL Machine Learning"
  title: 'AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks'
  year: 2023
authors:
  - given-names: Vivswan
    family-names: Shah
    email: vivswanshah@pitt.edu
    affiliation: University of Pittsburgh
  - family-names: Youngblood
    given-names: Nathan
    affiliation: University of Pittsburgh
identifiers:
  - type: doi
    value: 10.1063/5.0134156
    description: >-
      The DOI of the AnalogVNN paper in APL Machine
      Learning.
repository-code: 'https://github.com/Vivswan/AnalogVNN'
url: 'https://analogvnn.readthedocs.io/'
abstract: >-
  AnalogVNN, a simulation framework built on PyTorch
  which can simulate the effects of optoelectronic
  noise, limited precision, and signal normalization
  present in photonic neural network accelerators. We
  use this framework to train and optimize linear and
  convolutional neural networks with up to 9 layers
  and ~1.7 million parameters, while gaining insights
  into how normalization, activation function,
  reduced precision, and noise influence accuracy in
  analog photonic neural networks. By following the
  same layer structure design present in PyTorch, the
  AnalogVNN framework allows users to convert most
  digital neural network models to their analog
  counterparts with just a few lines of code, taking
  full advantage of the open-source optimization,
  deep learning, and GPU acceleration libraries
  available through PyTorch.
keywords:
  - photonics
  - neural-networks
  - analog-computing
  - deep-learning
license: MPL-2.0

GitHub Events

Total
  • Watch event: 5
  • Fork event: 2
Last Year
  • Watch event: 5
  • Fork event: 2

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 283
  • Total Committers: 5
  • Avg Commits per committer: 56.6
  • Development Distribution Score (DDS): 0.3
Top Committers
Name Email Commits
Vivswan Shah 5****n@u****m 198
Vivswan Shah s****n@g****m 71
Vivswan Shah 5****n@u****m 10
Vivswan Shah v****h@p****u 3
Tianyi Zheng t****2@g****m 1

Issues and Pull Requests

Last synced: 5 months ago

All Time
  • Total issues: 3
  • Total pull requests: 83
  • Average time to close issues: 4 days
  • Average time to close pull requests: 3 days
  • Total issue authors: 2
  • Total pull request authors: 2
  • Average comments per issue: 2.33
  • Average comments per pull request: 0.05
  • Merged pull requests: 74
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 21
  • Average time to close issues: 12 days
  • Average time to close pull requests: 6 days
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 5.0
  • Average comments per pull request: 0.14
  • Merged pull requests: 13
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • PierrickPochelu (2)
  • hatsuka20 (1)
Pull Request Authors
  • Vivswan (111)
  • tianyizheng02 (1)
Top Labels
Issue Labels
  • question (2)
Pull Request Labels
  • (none)
Packages

  • Total packages: 1
  • Total downloads:
    • pypi 113 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 16
  • Total maintainers: 1
pypi.org: analogvnn

A fully modular framework for modeling and optimizing analog/photonic neural networks

  • Versions: 16
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 113 Last month
Rankings
Dependent packages count: 6.6%
Downloads: 16.3%
Forks count: 17.3%
Average: 18.5%
Stargazers count: 21.8%
Dependent repos count: 30.6%
Maintainers (1)
Last synced: 5 months ago

Dependencies

requirements.txt pypi
  • build *
  • furo *
  • graphviz *
  • importlib_metadata *
  • johnnydep *
  • matplotlib *
  • myst_parser *
  • natsort *
  • networkx *
  • numpy >=1.16.5
  • pillow *
  • rst-to-myst *
  • scipy *
  • seaborn *
  • setuptools ==65.6.3
  • sphinx >=4.2.0
  • sphinx-autoapi *
  • sphinx-autobuild *
  • sphinx-copybutton *
  • sphinx-inline-tabs *
  • sphinx-notfound-page *
  • sphinx-rtd-theme *
  • sphinxcontrib-katex *
  • sphinxext-opengraph *
  • tabulate *
  • tensorboard *
  • tensorflow >=2.0.0
  • torch *
  • torchaudio *
  • torchvision *
  • torchviz *
pyproject.toml pypi
  • dataclasses *
  • importlib-metadata <5.0.0,>=2.0.0; python_version < '3.8'
  • networkx *
  • numpy >=1.16.5
  • scipy *
requirements/requirements-dev.txt pypi
  • build * development
  • johnnydep * development
  • setuptools >=61.0.0 development
requirements/requirements-docs.txt pypi
  • furo *
  • myst_parser *
  • rst-to-myst *
  • sphinx >=4.2.0
  • sphinx-autoapi *
  • sphinx-autobuild *
  • sphinx-copybutton *
  • sphinx-inline-tabs *
  • sphinx-notfound-page *
  • sphinx-rtd-theme *
  • sphinxcontrib-katex *
  • sphinxext-opengraph *
requirements/requirements-test.txt pypi
  • flake8 * test
  • flake8-bugbear * test
  • flake8-coding * test
  • flake8-comprehensions * test
  • flake8-deprecated * test
  • flake8-docstrings * test
  • flake8-executable * test
  • flake8-quotes * test
  • flake8-return * test
requirements-all.txt pypi