analogvnn
A fully modular framework for modeling and optimizing analog neural networks
Science Score: 77.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 8 DOI reference(s) in the README
- ✓ Academic publication links: links to arxiv.org
- ✓ Committers with academic emails: 1 of 5 committers (20.0%) from academic institutions
- ○ Institutional organization owner: not detected
- ○ JOSS paper metadata: not detected
- ○ Scientific vocabulary similarity: low similarity (11.7%) to scientific vocabulary
Repository
A fully modular framework for modeling and optimizing analog neural networks
Basic Info
- Host: GitHub
- Owner: Vivswan
- License: other
- Language: Python
- Default Branch: master
- Homepage: https://analogvnn.readthedocs.io
- Size: 3.24 MB
Statistics
- Stars: 20
- Watchers: 2
- Forks: 5
- Open Issues: 0
- Releases: 9
Metadata Files
README.md
AnalogVNN
Documentation: https://analogvnn.readthedocs.io/
Installation:
```bash
# Current stable release for CPU and GPU
pip install analogvnn

# For additional optional features
pip install analogvnn[full]
```
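To confirm the package installed correctly, one option is a quick standard-library check (this snippet is not from the README):

```python
# Report the installed analogvnn version, or a message if it is missing.
from importlib.metadata import PackageNotFoundError, version

try:
    print("analogvnn", version("analogvnn"))
except PackageNotFoundError:
    print("analogvnn is not installed in this environment")
```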
Usage:
- Sample code with AnalogVNN: sample_code.py
- Sample code without AnalogVNN: sample_code_non_analog.py
- Sample code with AnalogVNN and logs: sample_code_with_logs.py
- Jupyter Notebook: AnalogVNN_Demo.ipynb
Abstract

AnalogVNN is a simulation framework built on PyTorch which can simulate the effects of optoelectronic noise, limited precision, and signal normalization present in photonic neural network accelerators. We use this framework to train and optimize linear and convolutional neural networks with up to 9 layers and ~1.7 million parameters, while gaining insights into how normalization, activation function, reduced precision, and noise influence accuracy in analog photonic neural networks. By following the same layer structure design present in PyTorch, the AnalogVNN framework allows users to convert most digital neural network models to their analog counterparts with just a few lines of code, taking full advantage of the open-source optimization, deep learning, and GPU acceleration libraries available through PyTorch.
AnalogVNN Paper: https://doi.org/10.1063/5.0134156
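To make the "few lines of code" claim concrete, below is a minimal plain-PyTorch sketch of the three effects the abstract names. It deliberately does not use AnalogVNN's actual API; the layer names here are hypothetical stand-ins, and sample_code.py and the documentation show the real framework layers:

```python
# Conceptual sketch (plain PyTorch, not the AnalogVNN API) of the analog
# effects the framework simulates: limited precision, optoelectronic noise,
# and signal normalization, written as drop-in nn.Module layers.
import torch
import torch.nn as nn


class ReducePrecision(nn.Module):
    """Quantize activations to a fixed number of levels (hypothetical name)."""

    def __init__(self, levels: int = 16):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        return torch.round(x * self.levels) / self.levels


class GaussianNoise(nn.Module):
    """Additive Gaussian noise, standing in for optoelectronic noise."""

    def __init__(self, std: float = 0.05):
        super().__init__()
        self.std = std

    def forward(self, x):
        return x + torch.randn_like(x) * self.std


class Clamp(nn.Module):
    """Signal normalization: clip values to the representable analog range."""

    def forward(self, x):
        return torch.clamp(x, -1.0, 1.0)


# A digital layer becomes an "analog" one by surrounding it with the effects.
analog_linear = nn.Sequential(
    Clamp(), ReducePrecision(), GaussianNoise(),
    nn.Linear(64, 32),
    Clamp(), ReducePrecision(), GaussianNoise(),
)
print(analog_linear(torch.randn(8, 64)).shape)  # torch.Size([8, 32])
```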
Citing AnalogVNN
We would appreciate it if you cite the following paper in publications for which you used AnalogVNN:
```bibtex
@article{shah2023analogvnn,
  title={AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks},
  author={Shah, Vivswan and Youngblood, Nathan},
  journal={APL Machine Learning},
  volume={1},
  number={2},
  year={2023},
  publisher={AIP Publishing}
}
```
Or in textual form:
```text
Vivswan Shah and Nathan Youngblood. "AnalogVNN: A fully modular framework for modeling
and optimizing photonic neural networks." APL Machine Learning 1.2 (2023).
DOI: 10.1063/5.0134156
```
Owner
- Name: Vivswan Shah
- Login: Vivswan
- Kind: user
- Company: University of Pittsburgh
- Website: vivswan.github.io
- Repositories: 5
- Profile: https://github.com/Vivswan
PhD Student @ UPitt in Machine Learning and Quantum Computing
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
title: 'AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks'
message: 'If you use this software, please cite it as below.'
preferred-citation:
  type: article
  authors:
    - given-names: Vivswan
      family-names: Shah
      email: vivswanshah@pitt.edu
      affiliation: University of Pittsburgh
    - family-names: Youngblood
      given-names: Nathan
      affiliation: University of Pittsburgh
  doi: "10.1063/5.0134156"
  journal: "APL Machine Learning"
  title: 'AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks'
  year: 2023
authors:
  - given-names: Vivswan
    family-names: Shah
    email: vivswanshah@pitt.edu
    affiliation: University of Pittsburgh
  - family-names: Youngblood
    given-names: Nathan
    affiliation: University of Pittsburgh
identifiers:
  - type: doi
    value: 10.1063/5.0134156
    description: >-
      The concept DOI for the collection containing
      all versions of the Citation File Format.
repository-code: 'https://github.com/Vivswan/AnalogVNN'
url: 'https://analogvnn.readthedocs.io/'
abstract: >-
  AnalogVNN, a simulation framework built on PyTorch
  which can simulate the effects of optoelectronic
  noise, limited precision, and signal normalization
  present in photonic neural network accelerators. We
  use this framework to train and optimize linear and
  convolutional neural networks with up to 9 layers
  and ~1.7 million parameters, while gaining insights
  into how normalization, activation function,
  reduced precision, and noise influence accuracy in
  analog photonic neural networks. By following the
  same layer structure design present in PyTorch, the
  AnalogVNN framework allows users to convert most
  digital neural network models to their analog
  counterparts with just a few lines of code, taking
  full advantage of the open-source optimization,
  deep learning, and GPU acceleration libraries
  available through PyTorch.
keywords:
  - photonics
  - neural-networks
  - analog-computing
  - deep-learning
license: MPL-2.0
```
GitHub Events
Total
- Watch event: 5
- Fork event: 2
Last Year
- Watch event: 5
- Fork event: 2
Committers
Last synced: almost 3 years ago
All Time
- Total Commits: 283
- Total Committers: 5
- Avg Commits per committer: 56.6
- Development Distribution Score (DDS): 0.3 (see the worked check after the committer table below)
Top Committers
| Name | Email | Commits |
|---|---|---|
| Vivswan Shah | 5****n@u****m | 198 |
| Vivswan Shah | s****n@g****m | 71 |
| Vivswan Shah | 5****n@u****m | 10 |
| Vivswan Shah | v****h@p****u | 3 |
| Tianyi Zheng | t****2@g****m | 1 |
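The reported DDS is consistent with the common definition, one minus the top committer's share of all commits. A worked check against the table above (assuming that definition, which this report does not state explicitly):

```python
# Development Distribution Score: 1 - (top committer's commits / total commits).
# Commit counts are taken from the committer table above.
commits = [198, 71, 10, 3, 1]
dds = 1 - max(commits) / sum(commits)
print(f"DDS = {dds:.2f}")  # DDS = 0.30, matching the reported 0.3
```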
Issues and Pull Requests
Last synced: 5 months ago
All Time
- Total issues: 3
- Total pull requests: 83
- Average time to close issues: 4 days
- Average time to close pull requests: 3 days
- Total issue authors: 2
- Total pull request authors: 2
- Average comments per issue: 2.33
- Average comments per pull request: 0.05
- Merged pull requests: 74
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 21
- Average time to close issues: 12 days
- Average time to close pull requests: 6 days
- Issue authors: 1
- Pull request authors: 1
- Average comments per issue: 5.0
- Average comments per pull request: 0.14
- Merged pull requests: 13
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- PierrickPochelu (2)
- hatsuka20 (1)
Pull Request Authors
- Vivswan (111)
- tianyizheng02 (1)
Packages
- Total packages: 1
- Total downloads: 113 last month (PyPI)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 16
- Total maintainers: 1
pypi.org: analogvnn
A fully modular framework for modeling and optimizing analog/photonic neural networks
- Homepage: https://github.com/Vivswan/AnalogVNN
- Documentation: https://analogvnn.readthedocs.io/en/latest/
- License: Mozilla Public License 2.0 (MPL 2.0)
- Latest release: 1.0.8 (published over 1 year ago)
Dependencies
- build *
- furo *
- graphviz *
- importlib_metadata *
- johnnydep *
- matplotlib *
- myst_parser *
- natsort *
- networkx *
- numpy >=1.16.5
- pillow *
- rst-to-myst *
- scipy *
- seaborn *
- setuptools ==65.6.3
- sphinx >=4.2.0
- sphinx-autoapi *
- sphinx-autobuild *
- sphinx-copybutton *
- sphinx-inline-tabs *
- sphinx-notfound-page *
- sphinx-rtd-theme *
- sphinxcontrib-katex *
- sphinxext-opengraph *
- tabulate *
- tensorboard *
- tensorflow >=2.0.0
- torch *
- torchaudio *
- torchvision *
- torchviz *
- dataclasses *
- importlib-metadata <5.0.0,>=2.0.0; python_version < '3.8'
- networkx *
- numpy >=1.16.5
- scipy *
- build * development
- johnnydep * development
- setuptools >=61.0.0 development
- furo *
- myst_parser *
- rst-to-myst *
- sphinx >=4.2.0
- sphinx-autoapi *
- sphinx-autobuild *
- sphinx-copybutton *
- sphinx-inline-tabs *
- sphinx-notfound-page *
- sphinx-rtd-theme *
- sphinxcontrib-katex *
- sphinxext-opengraph *
- flake8 * test
- flake8-bugbear * test
- flake8-coding * test
- flake8-comprehensions * test
- flake8-deprecated * test
- flake8-docstrings * test
- flake8-executable * test
- flake8-quotes * test
- flake8-return * test