GeLUReLUInterpolation
Leveraging Continuously Differentiable Activation for Learning in Analog and Quantized Noisy Environments
Science Score: 67.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ✓ DOI references: Found 1 DOI reference(s) in README
- ✓ Academic publication links: Links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (7.8%) to scientific vocabulary
Repository
Leveraging Continuously Differentiable Activation for Learning in Analog and Quantized Noisy Environments
Basic Info
- Host: GitHub
- Owner: Vivswan
- License: MIT
- Language: Python
- Default Branch: main
- Homepage: https://arxiv.org/abs/2402.02593
- Size: 87.9 KB
Statistics
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
GeLUReLUInterpolation
This is the official repository for the paper:
Leveraging Continuously Differentiable Activation for Learning in Analog and Quantized Noisy Environments
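The paper studies replacing the non-differentiable kink of ReLU with a continuously differentiable activation in noisy, quantized settings. As an illustrative sketch only (the exact parameterization used in the paper and in the repository scripts may differ), a simple way to interpolate between ReLU and GeLU is a convex combination controlled by a factor `t`:

```python
import math

def gelu(x: float) -> float:
    """Exact GeLU: x * Phi(x), where Phi is the standard normal CDF."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def relu(x: float) -> float:
    return max(0.0, x)

def interpolated_activation(x: float, t: float) -> float:
    """Blend ReLU (t=0) and GeLU (t=1).

    Hypothetical parameterization for illustration; see the paper
    (arXiv:2402.02593) for the actual interpolation scheme.
    """
    return (1.0 - t) * relu(x) + t * gelu(x)
```

At `t=0` the function is exactly ReLU; any `t > 0` contributes the smooth GeLU term, so the blend is continuously differentiable everywhere.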
Requirements
The following packages are required to run the simulation:
- Python 3.6+
- PyTorch 2.0.0+
- Tensorboard
- Other required Python packages are listed in the requirements.txt file.
Run the simulation
Datasets
- CIFAR-10 and CIFAR-100 datasets are used in the experiments. The datasets are automatically downloaded by the PyTorch library.
Models
- ConvNet: Model with 6 convolutional layers and 3 fully connected layers. Run this model using the `src/run_conv.py` script.
- ResNet: Run this model using the `src/run_resnet.py` script.
- VGG: Run this model using the `src/run_vgg.py` script.
- ViT: Run this model using the `src/run_vit.py` script.
Cite
We would appreciate it if you cited the following paper in your publications if you find this code useful:
```bibtex
@article{shah2024leveraging,
  title={Leveraging Continuously Differentiable Activation Functions for Learning in Quantized Noisy Environments},
  author={Shah, Vivswan and Youngblood, Nathan},
  journal={arXiv preprint arXiv:2402.02593},
  url={http://arxiv.org/abs/2402.02593},
  doi={10.48550/arXiv.2402.02593},
  year={2024}
}
```
Or in textual form:
```text
Shah, Vivswan, and Nathan Youngblood. "Leveraging Continuously Differentiable Activation
Functions for Learning in Quantized Noisy Environments." arXiv preprint arXiv:2402.02593 (2024).
```
Owner
- Name: Vivswan Shah
- Login: Vivswan
- Kind: user
- Company: University of Pittsburgh
- Website: vivswan.github.io
- Repositories: 5
- Profile: https://github.com/Vivswan
PhD student @ University of Pittsburgh, working in machine learning and quantum computing
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: If you use this software, please cite both the article from preferred-citation and the software itself.
authors:
  - family-names: Shah
    given-names: Vivswan
  - family-names: Youngblood
    given-names: Nathan
title: Leveraging Continuously Differentiable Activation Functions for Learning in Quantized Noisy Environments
version: 1.0.0
url: http://arxiv.org/abs/2402.02593
doi: 10.48550/arXiv.2402.02593
date-released: '2024-12-09'
preferred-citation:
  authors:
    - family-names: Shah
      given-names: Vivswan
    - family-names: Youngblood
      given-names: Nathan
  title: Leveraging Continuously Differentiable Activation Functions for Learning in Quantized Noisy Environments
  doi: 10.48550/arXiv.2402.02593
  url: http://arxiv.org/abs/2402.02593
  type: article-journal
  year: '2024'
  conference: {}
  publisher: {}
```
GitHub Events
Total
- Watch event: 1
- Public event: 1
- Push event: 2
Last Year
- Watch event: 1
- Public event: 1
- Push event: 2
Dependencies
- analogvnn *
- einops *
- graphviz *
- matplotlib *
- natsort *
- numpy *
- portalocker *
- scipy *
- seaborn *
- tabulate *
- torchdata *
- torchinfo *
- torchviz *
- tqdm *