biological-hsic

A biologically plausible implementation of the HSIC objective for training SNNs

https://github.com/darsnack/biological-hsic

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 6 DOI reference(s) in README
  • Academic publication links
    Links to: frontiersin.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.9%) to scientific vocabulary
Last synced: 6 months ago

Repository

A biologically plausible implementation of the HSIC objective for training SNNs

Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 5 years ago · Last pushed over 1 year ago
Metadata Files
  • README
  • License
  • Citation

README.md

Information Bottleneck-Based Hebbian Learning Rule Naturally Ties Working Memory and Synaptic Updates

A biologically plausible implementation of the HSIC objective. Please cite our work if you use this codebase for research purposes:

```bibtex
@article{10.3389/fncom.2024.1240348,
	author = {Daruwalla, Kyle and Lipasti, Mikko},
	doi = {10.3389/fncom.2024.1240348},
	issn = {1662-5188},
	journal = {Frontiers in Computational Neuroscience},
	title = {Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates},
	url = {https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2024.1240348},
	volume = {18},
	year = {2024},
	bdsk-url-1 = {https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2024.1240348},
	bdsk-url-2 = {https://doi.org/10.3389/fncom.2024.1240348}
}
```
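
For context, the HSIC objective referred to above is the Hilbert-Schmidt Independence Criterion. As a rough illustration only (not code from this repository; the function names, kernel choice, and bandwidth are placeholder assumptions), the standard biased empirical HSIC estimator with Gaussian kernels can be sketched as:

```python
import numpy as np

def gaussian_kernel(x, sigma=1.0):
    # Pairwise Gaussian kernel matrix over the rows of x.
    sq_norms = np.sum(x**2, axis=1)
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * x @ x.T
    return np.exp(-sq_dists / (2 * sigma**2))

def hsic(x, y, sigma=1.0):
    # Biased empirical estimate: HSIC(X, Y) = tr(K H L H) / (n - 1)^2,
    # where H = I - (1/n) 1 1^T centers the kernel matrices.
    n = x.shape[0]
    k = gaussian_kernel(x, sigma)
    l = gaussian_kernel(y, sigma)
    h = np.eye(n) - np.ones((n, n)) / n
    return np.trace(k @ h @ l @ h) / (n - 1) ** 2

# HSIC is near zero for independent samples and larger for dependent ones.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 5))
print(hsic(x, rng.normal(size=(200, 5))))  # independent -> small
print(hsic(x, x**2))                        # dependent  -> larger
```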

Setup instructions

To install the codebase, install Micromamba (or Conda), then run:

```shell
micromamba create -f environment.yaml
poetry install --no-root
```
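
The experiment commands below assume the project environment is active. With Micromamba, activation would look roughly like the following (the environment name here is a placeholder; use whatever name environment.yaml declares):

```shell
micromamba activate biological-hsic  # placeholder; substitute the name from environment.yaml
```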

Running the experiments

The code for running the experiments is located in the root directory, with auxiliary code in projectlib. Configuration files use Hydra and are found in configs.
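
Because configuration is handled by Hydra, options from the configs directory can be overridden directly on the command line, and Hydra's standard flags apply. A small illustrative example, assuming the config group names used in the commands below (such as data):

```shell
# Single run with a config-group override
python train-lif-biohsic.py data=linear

# Sweep several values using Hydra's standard multirun flag
python train-lif-biohsic.py --multirun data=linear,xor
```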

Small-scale experiments

To run the reservoir experiment, activate the project environment and run:

```shell
python train-reservoir.py
```

The results will be stored under outputs/train-reservoir.

To run the linear synthetic dataset experiment, activate the project environment and run:

```shell
python train-lif-biohsic.py data=linear
```

The results will be stored under output/train-lif-biohsic.

To run the XOR synthetic dataset experiment, activate the project environment and run:

```shell
python train-lif-biohsic.py data=xor
```

The results will be stored under output/train-lif-biohsic.

Large-scale experiments

To run the MNIST experiment, activate the project environment and run:

```shell
python train-biohsic.py data=mnist
```

The results will be stored under output/train-biohsic.

To run the CIFAR-10 experiment, activate the project environment and run:

```shell
python train-biohsic.py data=cifar10 model=cnn
```

The results will be stored under output/train-biohsic.

The back-propagation baselines can be run using the same commands but with the train-bp.py script.
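
Following the description above, the baseline runs would presumably look like this (the same overrides, with train-bp.py swapped in):

```shell
python train-bp.py data=mnist
python train-bp.py data=cifar10 model=cnn
```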

Plot the results

The plotting.ipynb notebook can recreate the plots from the results stored in the output directories above. You may need to adjust the paths in the notebook to point to your results.
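
For instance, if the notebook reads results through a path variable, redirecting it to your own run might look like the following (the variable name is hypothetical; the actual cell in plotting.ipynb may differ):

```python
from pathlib import Path

# Hypothetical: point the notebook at your own results directory
results_dir = Path("output/train-biohsic")
```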

Owner

  • Name: Kyle Daruwalla
  • Login: darsnack
  • Kind: user
  • Location: Cold Spring Harbor Lab, NY

NeuroAI scholar at CSHL

Citation (CITATION.bib)

@article{10.3389/fncom.2024.1240348,
	abstract = {Deep neural feedforward networks are effective models for a wide array of problems, but training and deploying such networks presents a significant energy cost. Spiking neural networks (SNNs), which are modeled after biologically realistic neurons, offer a potential solution when deployed correctly on neuromorphic computing hardware. Still, many applications train SNNs \emph{offline}, and running network training directly on neuromorphic hardware is an ongoing research problem. The primary hurdle is that back-propagation, which makes training such artificial deep networks possible, is biologically implausible. Neuroscientists are uncertain about how the brain would propagate a precise error signal backward through a network of neurons. Recent progress addresses part of this question, e.g., the weight transport problem, but a complete solution remains intangible. In contrast, novel learning rules based on the information bottleneck (IB) train each layer of a network independently, circumventing the need to propagate errors across layers. Instead, propagation is implicit due to the layers' feedforward connectivity. These rules take the form of a three-factor Hebbian update where a global error signal modulates local synaptic updates within each layer. Unfortunately, the global signal for a given layer requires processing multiple samples concurrently, and the brain only sees a single sample at a time. We propose a new three-factor update rule where the global signal correctly captures information across samples via an auxiliary memory network. The auxiliary network can be trained \emph{a priori} independently of the dataset being used with the primary network. We demonstrate comparable performance to baselines on image classification tasks. Interestingly, unlike back-propagation-like schemes where there is no link between learning and memory, our rule presents a direct connection between working memory and synaptic updates. To the best of our knowledge, this is the first rule to make this link explicit. We explore these implications in initial experiments examining the effect of memory capacity on learning performance. Moving forward, this work suggests an alternate view of learning where each layer balances memory-informed compression against task performance. This view naturally encompasses several key aspects of neural computation, including memory, efficiency, and locality.},
	author = {Daruwalla, Kyle and Lipasti, Mikko},
	doi = {10.3389/fncom.2024.1240348},
	issn = {1662-5188},
	journal = {Frontiers in Computational Neuroscience},
	title = {Information bottleneck-based Hebbian learning rule naturally ties working memory and synaptic updates},
	url = {https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2024.1240348},
	volume = {18},
	year = {2024},
	bdsk-url-1 = {https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2024.1240348},
	bdsk-url-2 = {https://doi.org/10.3389/fncom.2024.1240348}
}

GitHub Events

Total
  • Issue comment event: 1
  • Push event: 2
  • Pull request review event: 1
  • Pull request event: 4
  • Fork event: 2
Last Year
  • Issue comment event: 1
  • Push event: 2
  • Pull request review event: 1
  • Pull request event: 4
  • Fork event: 2

Issues and Pull Requests

Last synced: 12 months ago

All Time
  • Total issues: 0
  • Total pull requests: 3
  • Average time to close issues: N/A
  • Average time to close pull requests: 24 days
  • Total issue authors: 0
  • Total pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 2.33
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 3
  • Average time to close issues: N/A
  • Average time to close pull requests: 24 days
  • Issue authors: 0
  • Pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 2.33
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Pull Request Authors
  • h6197627 (6)