hebbRNN

hebbRNN: A Reward-Modulated Hebbian Learning Rule for Recurrent Neural Networks - Published in JOSS (2016)

https://github.com/jonathanamichaels/hebbrnn

Science Score: 93.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references (11 found in README and JOSS metadata)
  • Academic publication links (biorxiv.org, joss.theoj.org)
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata (published in the Journal of Open Source Software)
Last synced: 6 months ago

Repository

A Reward-Modulated Hebbian Learning Rule for Recurrent Neural Networks

Basic Info
  • Host: GitHub
  • Owner: JonathanAMichaels
  • License: gpl-3.0
  • Language: MATLAB
  • Default Branch: master
  • Size: 43.9 KB
Statistics
  • Stars: 34
  • Watchers: 3
  • Forks: 8
  • Open Issues: 0
  • Releases: 3
Created almost 10 years ago · Last pushed over 4 years ago
Metadata Files
Readme License

README.md

hebbRNN: A Reward-Modulated Hebbian Learning Rule for Recurrent Neural Networks

Authors: Jonathan A. Michaels & Hansjörg Scherberger

Version: 1.3

Date: 23.09.2016

What is hebbRNN?

How does our brain learn to produce the large, impressive, and flexible array of motor behaviors we possess? In recent years, there has been renewed interest in modeling complex human behaviors such as memory and motor skills using neural networks. However, training these networks to produce meaningful behavior has proven difficult. Furthermore, the most common methods are generally not biologically plausible: they rely on information that is not local to the synapses of individual neurons, as well as on instantaneous reward signals.

The current package is a MATLAB implementation of a biologically plausible training rule for recurrent neural networks using a delayed and sparse reward signal. On individual trials, input is perturbed randomly at the synapses of individual neurons, and these potential weight changes are accumulated in an eligibility trace in a Hebbian manner (by multiplying pre- and post-synaptic activity). At the end of each trial, a reward is determined based on the overall performance of the network in achieving the desired goal, and this reward is compared to the expected reward. The difference between the observed and expected reward is combined with the eligibility trace to strengthen or weaken the corresponding synapses, leading to proper network performance over time.
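
The package itself is MATLAB, but the trial loop described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in, not the package's actual code: the network size, learning rate, noise level, and the toy "match a fixed target activity" reward are all illustrative assumptions.

```python
import math
import random

random.seed(1)

N = 16          # number of recurrent units (hypothetical)
ETA = 0.02      # learning rate (hypothetical)
ALPHA = 0.75    # running-average factor for the expected reward

# Toy goal: drive the network's final activity toward a fixed random target.
W = [[random.gauss(0.0, 1.0 / math.sqrt(N)) for _ in range(N)] for _ in range(N)]
target = [random.uniform(-0.5, 0.5) for _ in range(N)]
R_expected = 0.0

def run_trial(W, noise=0.1, steps=25):
    """Run one trial with random exploratory perturbations; return the
    presynaptic activity history, the perturbation history, and the
    reward, which only becomes available at the end of the trial."""
    x = [0.0] * N
    pre_hist, perturb_hist = [], []
    for _ in range(steps):
        perturb = [random.gauss(0.0, noise) for _ in range(N)]
        pre_hist.append(list(x))
        perturb_hist.append(perturb)
        x = [math.tanh(sum(W[i][j] * x[j] for j in range(N)) + perturb[i])
             for i in range(N)]
    # delayed, sparse reward: negative error at the end of the trial
    return pre_hist, perturb_hist, -sum((x[i] - target[i]) ** 2 for i in range(N))

for trial in range(100):
    pre_hist, perturb_hist, R = run_trial(W)
    # eligibility trace: Hebbian accumulation of (postsynaptic perturbation
    # x presynaptic activity) over the whole trial
    elig = [[sum(p[i] * a[j] for p, a in zip(perturb_hist, pre_hist))
             for j in range(N)] for i in range(N)]
    dR = R - R_expected          # observed vs. expected reward
    for i in range(N):
        for j in range(N):
            W[i][j] += ETA * dR * elig[i][j]
    # update the running estimate of the expected reward
    R_expected = ALPHA * R_expected + (1 - ALPHA) * R
```

The key property is that the update uses only quantities available locally at each synapse plus a single scalar reward signal delivered after the trial ends.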

Documentation & Examples

All functions are documented throughout, and two examples illustrating the intended use of the package are provided with the release.

Example: a delayed nonmatch-to-sample task

In the delayed nonmatch-to-sample task, the network receives two temporally separated inputs, each lasting 200 ms with a 200 ms gap between them. The goal is to respond with one value if the inputs were identical and a different value if they were not. The response must be independent of the order of the inputs, so the network has to remember the first input!

related file: hebbRNNExampleDNMS.m
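
The trial structure can be sketched as follows. The timing matches the description above (two 200 ms stimuli separated by a 200 ms gap); the single-channel ±1 stimulus coding, the 10 ms time step, and the function name are hypothetical choices, not what hebbRNNExampleDNMS.m necessarily uses.

```python
def make_dnms_trial(first, second, dt_ms=10):
    """Build the input and target time series for one delayed
    nonmatch-to-sample trial. Stimuli are coded as +1 or -1 on a
    single input channel (an assumed coding for illustration)."""
    bins = 200 // dt_ms              # 20 time bins per 200 ms epoch
    inp = ([float(first)] * bins     # first stimulus (200 ms)
           + [0.0] * bins            # 200 ms gap
           + [float(second)] * bins  # second stimulus (200 ms)
           + [0.0] * bins)           # response epoch
    # nonmatch -> respond +1, match -> respond -1, regardless of order
    out = 1.0 if first != second else -1.0
    tgt = [0.0] * (3 * bins) + [out] * bins
    return inp, tgt
```

Note that `make_dnms_trial(1, -1)` and `make_dnms_trial(-1, 1)` demand the same response, which is exactly what forces the network to hold the first input in memory across the gap.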

Example: a center-out reaching task

In the center-out reaching task, the network must produce the joint angle velocities of a two-segment arm to reach toward a number of peripheral targets spaced along a circle in the 2D plane, based on the desired target specified by the input.

related file: hebbRNNExampleCO.m
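
How joint angle velocities map to an endpoint trajectory can be illustrated with standard planar two-segment forward kinematics. The segment lengths, time step, starting posture, and function names below are illustrative assumptions, not values taken from hebbRNNExampleCO.m.

```python
import math

def endpoint(shoulder, elbow, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-segment arm: endpoint position
    from the shoulder angle and the (relative) elbow angle."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

def reach(velocities, dt=0.01, start=(0.0, 0.0)):
    """Integrate a sequence of (shoulder, elbow) joint angle velocities,
    i.e. the network's output, into an endpoint trajectory."""
    s, e = start
    path = [endpoint(s, e)]
    for vs, ve in velocities:
        s += vs * dt
        e += ve * dt
        path.append(endpoint(s, e))
    return path
```

For example, a constant shoulder velocity of π/2 rad/s for one second, `reach([(math.pi / 2, 0.0)] * 100)`, sweeps the fully extended arm a quarter turn, moving the endpoint from (2, 0) to (0, 2).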

Installation Instructions

The code package runs in MATLAB and should be compatible with any version. To install the package, add all folders and subfolders to the MATLAB path, either through the Set Path dialog or with `addpath(genpath('hebbRNN'))` from the directory containing the package.

Dependencies

The hebbRNN repository has no dependencies beyond built-in MATLAB functions.

Citation

If used in published work, please cite the work as:

Jonathan A. Michaels, Hansjörg Scherberger (2016). hebbRNN: A Reward-Modulated Hebbian Learning Rule for Recurrent Neural Networks. The Journal of Open Source Software, 1(5), 60. doi:10.21105/joss.00060

In addition, please cite the most recent version of the paper acknowledged below.

Acknowledgements

The network training method used in hebbRNN is based on "Flexible decision-making in recurrent neural networks trained with a biologically plausible rule" by Thomas Miconi.

Owner

  • Name: Jonathan A Michaels
  • Login: JonathanAMichaels
  • Kind: user
  • Location: London, Ontario, Canada

JOSS Publication

hebbRNN: A Reward-Modulated Hebbian Learning Rule for Recurrent Neural Networks
Published
September 23, 2016
Volume 1, Issue 5, Page 60
Authors
Jonathan A. Michaels (German Primate Center, Göttingen, Germany)
Hansjörg Scherberger (German Primate Center, Göttingen, Germany; Biology Department, University of Göttingen, Germany)
Editor
Arfon Smith
Tags
learning, plasticity, neural network, Hebbian, RNN

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 24
  • Total Committers: 5
  • Avg Commits per committer: 4.8
  • Development Distribution Score (DDS): 0.458
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • JonathanAMichaels (j****s@J****l): 13 commits
  • JonathanAMichaels (j****s@J****l): 5 commits
  • JonathanAMichaels (J****s@g****m): 3 commits
  • JonathanAMichaels (J****n@J****l): 2 commits
  • Arfon Smith (a****n): 1 commit

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1
  • Total pull requests: 2
  • Average time to close issues: 11 days
  • Average time to close pull requests: 42 minutes
  • Total issue authors: 1
  • Total pull request authors: 2
  • Average comments per issue: 2.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • brylie (1)
Pull Request Authors
  • JonathanAMichaels (1)
  • arfon (1)