rnnreactivation

Code for "Sufficient conditions for offline reactivation in recurrent neural networks" (ICLR 2024)

https://github.com/nandahkrishna/rnnreactivation

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.3%) to scientific vocabulary

Keywords

attractor-learning attractor-network computational-neuroscience diffusion head-direction iclr iclr2024 langevin-dynamics langevin-mc learning path-integration pytorch reactivation recurrent-networks recurrent-neural-nets recurrent-neural-network recurrent-neural-networks replay rnn spatial-navigation
Last synced: 6 months ago

Repository

Code for "Sufficient conditions for offline reactivation in recurrent neural networks" (ICLR 2024)

Basic Info
Statistics
  • Stars: 2
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
attractor-learning attractor-network computational-neuroscience diffusion head-direction iclr iclr2024 langevin-dynamics langevin-mc learning path-integration pytorch reactivation recurrent-networks recurrent-neural-nets recurrent-neural-network recurrent-neural-networks replay rnn spatial-navigation
Created almost 2 years ago · Last pushed 9 months ago
Metadata Files
Readme · License · Citation

README.md

Modeling Offline Reactivation or Replay in RNNs

:page_facing_up: PDF · :globe_with_meridians: arXiv · :ballot_box_with_check: OpenReview · :bar_chart: Poster · :movie_camera: Presentation

This repository contains code to reproduce experiments and figures from the paper "Sufficient conditions for offline reactivation in recurrent neural networks" published at ICLR 2024.

Abstract

During periods of quiescence, such as sleep, neural activity in many brain circuits resembles that observed during periods of task engagement. However, the precise conditions under which task-optimized networks can autonomously reactivate the same network states responsible for online behavior are poorly understood. In this study, we develop a mathematical framework that outlines sufficient conditions for the emergence of neural reactivation in circuits that encode features of smoothly varying stimuli. We demonstrate mathematically that noisy recurrent networks optimized to track environmental state variables using change-based sensory information naturally develop denoising dynamics, which, in the absence of input, cause the network to revisit state configurations observed during periods of online activity. We validate our findings using numerical experiments on two canonical neuroscience tasks: spatial position estimation based on self-motion cues, and head direction estimation based on angular velocity cues. Overall, our work provides theoretical support for modeling offline reactivation as an emergent consequence of task optimization in noisy neural circuits.

Keywords: computational neuroscience, offline reactivation, replay, recurrent neural networks, path integration, noise
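
The model class the abstract describes can be made concrete with a short sketch. The following is a minimal, purely illustrative PyTorch example of a noisy vanilla RNN that integrates velocity cues into a position estimate and can then be run without input to probe offline dynamics; the noise parameter name mirrors the rnn.sigma2_rec config key used in the Experiments section below, but the architecture, sizes, and nonlinearity are assumptions rather than the repository's actual model.

```python
import torch

class NoisyRNN(torch.nn.Module):
    """Illustrative noisy vanilla RNN (not the repository's actual model)."""

    def __init__(self, n_in=2, n_hidden=128, n_out=2, sigma2_rec=3e-4):
        super().__init__()
        self.w_in = torch.nn.Linear(n_in, n_hidden)
        self.w_rec = torch.nn.Linear(n_hidden, n_hidden)
        self.w_out = torch.nn.Linear(n_hidden, n_out)
        self.sigma_rec = sigma2_rec ** 0.5  # std of injected recurrent noise

    def forward(self, v):
        # v: (batch, time, n_in) velocity cues; returns position estimates.
        h = torch.zeros(v.shape[0], self.w_rec.in_features, device=v.device)
        outputs = []
        for t in range(v.shape[1]):
            noise = self.sigma_rec * torch.randn_like(h)  # state noise
            h = torch.tanh(self.w_rec(h) + self.w_in(v[:, t]) + noise)
            outputs.append(self.w_out(h))
        return torch.stack(outputs, dim=1)

# Offline "reactivation" probe: drive the network with zero input and
# observe the autonomous (noise-driven) state trajectory.
model = NoisyRNN()
with torch.no_grad():
    offline_outputs = model(torch.zeros(1, 100, 2))
```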

Setup

Create a Python 3.10 virtual environment, and run the following command:

```zsh
pip install -r requirements.txt
```

Experiments

To train a model for a specific configuration, run train.py with the appropriate options:

```zsh
python train.py config=CONFIG_NAME seed=SEED [...]
```

For example, to train a noisy vanilla RNN on the unbiased spatial navigation task, you may run:

```zsh
python train.py config=spatial_navigation/noisy_unbiased seed=0
```

You may change the seed and any other hyperparameters or configuration variables either by editing the .yml files in configs/train or by passing them as command-line arguments. For example:

```zsh
python train.py config=spatial_navigation/noisy_unbiased seed=2 rnn.sigma2_rec=0.0003 trainer.n_epochs=1000 task.place_cells_num=256
```
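
requirements.txt pins omegaconf, and the key=value override syntax above matches OmegaConf's dotlist-style CLI parsing. Below is a hedged sketch of how a script like train.py might merge YAML defaults with such overrides; the configs/train layout comes from the text above, while everything else is an assumption about the implementation.

```python
from omegaconf import OmegaConf

# Parse key=value pairs from sys.argv (e.g. config=... seed=2 rnn.sigma2_rec=0.0003).
cli = OmegaConf.from_cli()

# Load YAML defaults for the requested configuration, then let CLI values win.
base = OmegaConf.load(f"configs/train/{cli.config}.yml")
cfg = OmegaConf.merge(base, cli)

print(OmegaConf.to_yaml(cfg))  # resolved configuration used for training
```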

After training models with different configurations and seeds, you may run the analysis scripts. Default arguments for these scripts are specified in the .yml files in configs/analysis. You may edit these configuration files or override default values using command-line arguments. For example, for models trained with seed 0 you may run:

```zsh
python output_kl.py seed=0
python output_variance.py seed=0
python output_distance.py seed=0
```
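
The README does not spell out what output_kl.py computes internally. As a generic illustration of the comparison its name suggests, a KL divergence between online and offline output statistics, here is a sketch using numpy and scipy (both pinned in requirements.txt); the histogram-based estimator and every name in it are assumptions, not the script's actual method.

```python
import numpy as np
from scipy.stats import entropy

def histogram_kl(online, offline, bins=50, eps=1e-12):
    """Estimate KL(online || offline) from 1D samples via shared-bin histograms."""
    lo = min(online.min(), offline.min())
    hi = max(online.max(), offline.max())
    p, edges = np.histogram(online, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(offline, bins=edges, density=True)
    return entropy(p + eps, q + eps)  # scipy: entropy(p, q) = sum(p * log(p / q))

rng = np.random.default_rng(0)
online = rng.normal(0.0, 1.0, 10_000)   # stand-in for online output samples
offline = rng.normal(0.1, 1.1, 10_000)  # stand-in for offline output samples
print(histogram_kl(online, offline))
```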

You may then use the Jupyter notebooks in notebooks/ to visualize the results.

Reproducing Results

> [!NOTE]
> While it is possible to reproduce the paper's results overall, in practice the numbers and plots may not match exactly due to differences in hardware, CUDA versions, etc.

To reproduce experiments from the paper, run the following commands:

```zsh
for seed in {0..4}; do
    # Train vanilla RNNs on spatial navigation task
    python train.py config=spatial_navigation/noisy_unbiased seed=$seed
    python train.py config=spatial_navigation/noisy_biased seed=$seed
    python train.py config=spatial_navigation/no-noise_unbiased seed=$seed
    python train.py config=spatial_navigation/no-noise_biased seed=$seed

    # Analyze vanilla RNNs trained on spatial navigation task
    python output_kl.py seed=$seed
    python output_variance.py seed=$seed
    python output_distance.py seed=$seed

    # Train GRUs on spatial navigation task
    python train.py config=spatial_navigation/noisy_unbiased_gru seed=$seed
    python train.py config=spatial_navigation/noisy_biased_gru seed=$seed

    # Train vanilla RNNs on head direction task
    python train.py config=head_direction/noisy_unbiased seed=$seed
    python train.py config=head_direction/noisy_biased seed=$seed
done
```

Plots may then be generated using notebooks/SpatialNavigation.ipynb and notebooks/HeadDirection.ipynb.

License

This codebase is licensed under the BSD 3-Clause License (SPDX: BSD-3-Clause). Refer to LICENSE.md for details.

Citation

If this code was useful to you, please consider citing our work:

```bibtex
@inproceedings{krishna2024sufficient,
    title={Sufficient conditions for offline reactivation in recurrent neural networks},
    author={Nanda H Krishna and Colin Bredenberg and Daniel Levenstein and Blake Aaron Richards and Guillaume Lajoie},
    booktitle={The Twelfth International Conference on Learning Representations},
    year={2024},
    url={https://openreview.net/forum?id=RVrINT6MT7}
}
```

Owner

  • Name: Nanda H Krishna
  • Login: nandahkrishna
  • Kind: user
  • Location: Chennai & Montréal
  • Company: Mila – Quebec AI Institute (@mila-iqia) & Université de Montréal


Citation (CITATION.cff)

```yaml
cff-version: 1.2.0
title: >-
  Sufficient conditions for offline reactivation in
  recurrent neural networks
message: >-
  If this code was useful to you, please consider citing our
  work.
type: software
authors:
  - family-names: Krishna
    given-names: Nanda H
    orcid: "https://orcid.org/0000-0001-8036-2789"
  - family-names: Bredenberg
    given-names: Colin
    orcid: "https://orcid.org/0000-0002-9749-9228"
  - family-names: Levenstein
    given-names: Daniel
    orcid: "https://orcid.org/0000-0002-5507-9145"
  - family-names: Richards
    given-names: Blake Aaron
    orcid: "https://orcid.org/0000-0001-9662-2151"
  - family-names: Lajoie
    given-names: Guillaume
    orcid: "https://orcid.org/0000-0003-2730-7291"
identifiers:
  - type: url
    value: "https://openreview.net/forum?id=RVrINT6MT7"
    description: Paper
repository-code: "https://github.com/nandahkrishna/RNNReactivation"
abstract: >-
  During periods of quiescence, such as sleep, neural
  activity in many brain circuits resembles that observed
  during periods of task engagement. However, the precise
  conditions under which task-optimized networks can
  autonomously reactivate the same network states
  responsible for online behavior are poorly understood. In
  this study, we develop a mathematical framework that
  outlines sufficient conditions for the emergence of neural
  reactivation in circuits that encode features of smoothly
  varying stimuli. We demonstrate mathematically that noisy
  recurrent networks optimized to track environmental state
  variables using change-based sensory information naturally
  develop denoising dynamics, which, in the absence of
  input, cause the network to revisit state configurations
  observed during periods of online activity. We validate
  our findings using numerical experiments on two canonical
  neuroscience tasks: spatial position estimation based on
  self-motion cues, and head direction estimation based on
  angular velocity cues. Overall, our work provides
  theoretical support for modeling offline reactivation as
  an emergent consequence of task optimization in noisy
  neural circuits.
keywords:
  - computational neuroscience
  - offline reactivation
  - replay
  - recurrent neural networks
  - path integration
  - noise
license: BSD-3-Clause
preferred-citation:
  type: conference-paper
  authors:
    - family-names: Krishna
      given-names: Nanda H
      orcid: "https://orcid.org/0000-0001-8036-2789"
    - family-names: Bredenberg
      given-names: Colin
      orcid: "https://orcid.org/0000-0002-9749-9228"
    - family-names: Levenstein
      given-names: Daniel
      orcid: "https://orcid.org/0000-0002-5507-9145"
    - family-names: Richards
      given-names: Blake Aaron
      orcid: "https://orcid.org/0000-0001-9662-2151"
    - family-names: Lajoie
      given-names: Guillaume
      orcid: "https://orcid.org/0000-0003-2730-7291"
  title: >-
    Sufficient conditions for offline reactivation in
    recurrent neural networks
  collection-title: >-
    The Twelfth International Conference on Learning
    Representations
  year: 2024
  url: "https://openreview.net/forum?id=RVrINT6MT7"

GitHub Events

Total
  • Issues event: 2
  • Watch event: 2
  • Issue comment event: 1
  • Push event: 11
Last Year
  • Issues event: 2
  • Watch event: 2
  • Issue comment event: 1
  • Push event: 11

Committers

Last synced: 11 months ago

All Time
  • Total Commits: 6
  • Total Committers: 1
  • Avg Commits per committer: 6.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 6
  • Committers: 1
  • Avg Commits per committer: 6.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • Nanda H Krishna (me@n****m): 6 commits
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1
  • Total pull requests: 0
  • Average time to close issues: 7 days
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 2.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: 7 days
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 2.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • doujiarui (1)
Pull Request Authors
Top Labels
Issue Labels
question (1)
Pull Request Labels

Dependencies

requirements.txt (PyPI)
  • matplotlib ==3.7.2
  • numpy ==1.23.5
  • omegaconf ==2.3.0
  • scikit-learn ==1.2.2
  • scipy ==1.10.1
  • torch ==1.13.0