FixedPointFinder
FixedPointFinder: A Tensorflow toolbox for identifying and characterizing fixed points in recurrent neural networks - Published in JOSS (2018)
Science Score: 95.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found codemeta.json file)
- ✓ .zenodo.json file (found .zenodo.json file)
- ✓ DOI references (found 6 DOI reference(s) in README and JOSS metadata)
- ✓ Academic publication links (links to: joss.theoj.org)
- ✓ Committers with academic emails (1 of 4 committers (25.0%) from academic institutions)
- ○ Institutional organization owner
- ✓ JOSS paper metadata (published in Journal of Open Source Software)
Scientific Fields
Repository
FixedPointFinder: A Tensorflow toolbox for identifying and characterizing fixed points in recurrent neural networks
Basic Info
Statistics
- Stars: 100
- Watchers: 5
- Forks: 34
- Open Issues: 7
- Releases: 9
Metadata Files
README.md
FixedPointFinder - A PyTorch / TensorFlow toolbox for finding fixed points and linearized dynamics in recurrent neural networks
Finds and analyzes the fixed points of recurrent neural networks that have been built using PyTorch or TensorFlow.
If you are using FixedPointFinder in research to be published, please cite our accompanying paper in your publication:
Golub and Sussillo (2018), "FixedPointFinder: A TensorFlow toolbox for identifying and characterizing fixed points in recurrent neural networks," Journal of Open Source Software, 3(31), 1003, https://doi.org/10.21105/joss.01003 .
Recommended Installation
- Clone or download this repository.
- Create a virtual environment for the required dependencies:
To create a new virtual environment, enter at the command line:

```bash
$ python3 -m venv --system-site-packages your-virtual-env-name
```

where `your-virtual-env-name` is a path to the virtual environment you would like to create (e.g., `/home/fpf`). Then activate your new virtual environment:

```bash
$ source your-virtual-env-name/bin/activate
```

When you are finished working in your virtual environment (not now), enter:

```bash
$ deactivate
```

- Automatically assemble all dependencies using `pip` and the `requirements*.txt` files. For PyTorch, use:

```bash
$ pip install -r requirements-torch.txt
```

For TensorFlow, use:

```bash
$ pip install -r requirements-tf.txt
```
Advanced Installation
Advanced Python users and those wishing to develop contributions may prefer a custom install. Such installs should adhere to the following general template:
Clone or download this repository.
Install compatible versions of the following prerequisites.
- NumPy, SciPy, Matplotlib (installing the SciPy stack provides all of them).
- Scikit-learn.
- TensorFlow (recommended version: 2.8; requires at least version 1.14; versions beyond 2.8 are not currently supported).
- RecurrentWhisperer.
Add the directories for `FixedPointFinder` and `RecurrentWhisperer` to your Python path:

```bash
$ export PYTHONPATH=$PYTHONPATH:/path/to/your/directory/fixed-point-finder/
$ export PYTHONPATH=$PYTHONPATH:/path/to/your/directory/recurrent-whisperer/
```

where "/path/to/your/directory" is replaced with the path to the corresponding repository. This step must be performed each time you launch a new terminal to work with `FixedPointFinder`, and thus you may want to add the lines above to a startup script (e.g., the .bashrc / .bash_profile script in your home folder or an activate script in your virtual environment).
Example
FixedPointFinder includes an end-to-end example, implemented separately in PyTorch and TensorFlow, that trains an RNN to solve a task and then identifies and visualizes the fixed points of the trained RNN. To run the example, descend into the example directory: fixed-point-finder/examples/ and execute:
For PyTorch:
```bash
python run_FlipFlop_torch.py
```
For TensorFlow:
```bash
python run_FlipFlop_tf.py
```
The task is the "flip-flop" task previously described in Sussillo and Barak (2013). Briefly, the task is to implement a 3-bit binary memory, in which each of 3 input channels delivers signed transient pulses (-1 or +1) to a corresponding bit of the memory, and an input pulse flips the state of that memory bit (also -1 or +1) whenever a pulse's sign is opposite of the current state of the bit. The example trains a 16-unit LSTM RNN to solve this task (Fig. 1). Once the RNN is trained, the example uses FixedPointFinder to identify and characterize the trained RNN's fixed points. Finally, the example produces a visualization of these results (Fig. 2). In addition to demonstrating a working use of FixedPointFinder, this example provides a testbed for experimenting with different RNN architectures (e.g., numbers of recurrent units, LSTMs vs. GRUs vs. vanilla RNNs) and characterizing how these lower-level model design choices manifest in the higher-level dynamical implementation used to solve a task.
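For readers who want to experiment with the task itself, the 3-bit flip-flop described above can be sketched as a small data generator. This is a hypothetical stand-alone version written in numpy, not the FlipFlop class shipped with the repository; the time length, pulse probability, and function name are illustrative choices:

```python
import numpy as np

def generate_flipflop_trial(n_time=64, n_bits=3, p_pulse=0.1, rng=None):
    """Generate one trial of the 3-bit flip-flop task.

    Inputs are sparse signed pulses (+1 or -1) on each of n_bits channels;
    targets hold the sign of the most recent pulse on each channel.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Random pulse locations and signs.
    pulses = rng.random((n_time, n_bits)) < p_pulse
    signs = rng.choice([-1.0, 1.0], size=(n_time, n_bits))
    inputs = np.where(pulses, signs, 0.0)

    # Targets: each bit remembers the sign of its most recent pulse.
    targets = np.zeros((n_time, n_bits))
    state = np.ones(n_bits)  # arbitrary initial memory state
    for t in range(n_time):
        state = np.where(inputs[t] != 0.0, inputs[t], state)
        targets[t] = state
    return inputs, targets
```

Training an RNN to map `inputs` to `targets` from such trials reproduces the setting of the shipped examples.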

Figure 1. Inputs (gray), target outputs (cyan), and outputs of a trained LSTM RNN (purple) from an example trial of the flip-flop task. Signed input pulses (gray) flip the corresponding bit's state (green) whenever an input pulse has the opposite sign of the current bit state (e.g., if gray goes high when green is low). The RNN has been trained to nearly perfectly reproduce the target memory state (purple closely overlaps cyan).

Figure 2. Fixed-point structure of an LSTM RNN trained to solve the flip-flop task. FixedPointFinder identified 8 stable fixed points (black points), each of which corresponds to a unique state of the 3-bit memory. FixedPointFinder also identified a number of unstable fixed points (red points) along with their unstable modes (red lines), which mediate the set of state transitions trained into the RNN's dynamics. Here, each unstable fixed point is a "saddle" in the RNN's dynamical flow field, and the corresponding unstable modes indicate the directions that nearby states are repelled from the fixed point. State trajectories from example trials (blue) traverse about these fixed points. All quantities are visualized in the 3-dimensional space determined by the top 3 principal components computed across 128 example trials.
General Usage (PyTorch & TensorFlow)
- Start by building, and if desired, training an RNN. `FixedPointFinder` works with PyTorch RNN objects (e.g., `torch.nn.RNN`, `torch.nn.GRU`) and TensorFlow `RNNCell` objects.

  Advanced: more generally, `FixedPointFinder` will work on any PyTorch or TensorFlow function `f` that satisfies the following:
  - `f` must be auto-differentiable.
  - `f` must map inputs and previous states to updated states.
  - `f` must match the argument specification `_, h_next = f(input, h_prev)`, where:
    - `input`: a tensor with shape `(n, n_inputs)` containing `n` inputs of dimension `n_inputs`.
    - `h_prev`: a tensor with shape `(n, n_states)` containing `n` previous states of dimension `n_states`.
    - `h_next`: a tensor with shape `(n, n_states)` containing the `n` updated states.

  Internally, `f` should map `input[i]` and `h_prev[i]` to `h_next[i]`.
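As a concrete illustration of this calling convention, here is a hypothetical single-timestep function for a vanilla RNN. It is written in numpy purely to keep the sketch dependency-free; an actual `f` passed to FixedPointFinder would be written in PyTorch or TensorFlow so that it is auto-differentiable. The weight names and sizes are made up for the example:

```python
import numpy as np

# Hypothetical weights for a vanilla RNN step: h_next = tanh(x W_in + h W_rec + b).
n_inputs, n_states = 3, 16
rng = np.random.default_rng(0)
W_in = 0.1 * rng.standard_normal((n_inputs, n_states))
W_rec = 0.1 * rng.standard_normal((n_states, n_states))
b = np.zeros(n_states)

def f(input, h_prev):
    """Single-timestep transition matching the expected signature
    `_, h_next = f(input, h_prev)`.

    input:  (n, n_inputs) batch of inputs
    h_prev: (n, n_states) batch of previous states
    Returns (output, h_next); for a vanilla RNN the output is the state.
    """
    h_next = np.tanh(input @ W_in + h_prev @ W_rec + b)
    return h_next, h_next
```

The `(output, h_next)` return pair mirrors the `_, h_next = f(input, h_prev)` specification; only the second element is used by the fixed-point search.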
- Build a `FixedPointFinder` object:
- PyTorch: `fpf = FixedPointFinder(your_rnn, **kwargs)`
  - TensorFlow: `fpf = FixedPointFinder(your_rnn_cell, tf_session, **kwargs)`

    Here, `your_rnn_cell` is the `RNNCell` that specifies the single-timestep transitions in your RNN, and `tf_session` is the TensorFlow session in which your model has been instantiated.
- Specify the `initial_states` from which you'd like to initialize the local optimizations implemented by `FixedPointFinder`. These data should conform to the shape and type expected by `your_rnn_cell`. For TensorFlow's `BasicRNNCell`, this would mean an `(n, n_states)` numpy array, where `n` is the number of initializations and `n_states` is the dimensionality of the RNN state (i.e., the number of hidden units). For TensorFlow's `LSTMCell`, `initial_states` should be an `LSTMStateTuple` containing one `(n, n_states)` numpy array specifying the initializations of the hidden states and another `(n, n_states)` numpy array specifying the cell states.
- Specify the `inputs` under which you'd like to study your RNN. To study the RNN given a single set of static inputs, `inputs` should be a numpy array with shape `(1, n_inputs)`, where `n_inputs` is an int specifying the depth of the inputs expected by `your_rnn_cell`. Alternatively, you can search for fixed points under different inputs by specifying a potentially different input for each initial state: make `inputs` an `(n, n_inputs)` numpy array.
- Run the local optimizations that find the fixed points:

```python
fps = fpf.find_fixed_points(initial_states, inputs)
```

  The identified fixed points, the Jacobian of your RNN state transition function at those points, and some metadata corresponding to the optimizations are returned in the `FixedPoints` object `fps` (see FixedPoints.py for more detail).
- Finally, visualize the identified fixed points:

```python
fps.plot()
```

  You can also visualize these fixed points amongst state trajectories from your RNN (see `plot` in FixedPoints.py and the examples in run_FlipFlop_torch.py and run_FlipFlop_tf.py).
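For intuition about what the fixed-point search is doing under the hood, here is a minimal numpy sketch of the core idea on a toy tanh RNN: minimize q(h) = ½‖F(h, x) − h‖² by gradient descent, then linearize the dynamics at the solution by computing the Jacobian dF/dh and examining its eigenvalues. This is an illustrative stand-in, not FixedPointFinder's actual implementation (which uses the framework's auto-differentiation and optimizers); the network, weights, and learning rate are all toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_inputs = 8, 2

# A toy tanh RNN with small recurrent weights (contractive, so it has a
# single stable fixed point -- convenient for checking the sketch).
W_rec = 0.3 * rng.standard_normal((n_states, n_states)) / np.sqrt(n_states)
W_in = rng.standard_normal((n_states, n_inputs))
b = 0.3 * rng.standard_normal(n_states)

def F(h, x):
    """One step of the toy RNN: h_next = tanh(W_rec h + W_in x + b)."""
    return np.tanh(W_rec @ h + W_in @ x + b)

def jacobian(h, x):
    """dF/dh evaluated at (h, x): row i is scaled by (1 - F_i^2)."""
    return (1.0 - F(h, x) ** 2)[:, None] * W_rec

# Minimize q(h) = 0.5 * ||F(h, x) - h||^2 by gradient descent from a
# sampled initial state, under a static input x.
x = np.zeros(n_inputs)
h = 0.5 * rng.standard_normal(n_states)
lr = 0.2
for _ in range(2000):
    r = F(h, x) - h                                   # residual; zero at a fixed point
    grad = (jacobian(h, x) - np.eye(n_states)).T @ r  # dq/dh
    h = h - lr * grad

# Characterize the fixed point via its linearized dynamics.
residual = np.linalg.norm(F(h, x) - h)
eigvals = np.linalg.eigvals(jacobian(h, x))
is_stable = np.max(np.abs(eigvals)) < 1.0  # all |eig| < 1 => locally stable
```

The eigenvalue check mirrors the stability analysis shown in Figure 2: fixed points whose Jacobian has all eigenvalue magnitudes below 1 are stable memory states, while eigenvalues above 1 mark unstable modes.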
Testing the Package
Tests are not currently functional due to package upgrades in 2022-2023. That said, the rest of the codebase should be fully usable, including the 3-bit flip flop examples. Stay tuned.
Earlier versions of FixedPointFinder included a test suite for confirming successful installation, and for ensuring that contributions have not introduced bugs into the main control flow. The tests run FixedPointFinder over a set of RNNs where ground truth fixed points have been previously identified, numerically confirmed, and saved for comparison.
To run the tests, descend into the test directory: fixed-point-finder/test/ and execute:
```bash
python run_test.py
```
Contribution Guidelines
Contributions are welcome. Please see the contribution guidelines.
Owner
- Name: Matt Golub
- Login: mattgolub
- Kind: user
- Location: Seattle
- Company: University of Washington
- Website: homes.cs.washington.edu/~mgolub/
- Twitter: MattGolub_Neuro
- Repositories: 1
- Profile: https://github.com/mattgolub
Assistant Professor, Computer Science and Engineering
JOSS Publication
FixedPointFinder: A Tensorflow toolbox for identifying and characterizing fixed points in recurrent neural networks
Authors
Tags
recurrent neural networks, fixed point optimization, nonlinear dynamical systems, Tensorflow
GitHub Events
Total
- Issues event: 2
- Watch event: 14
- Pull request event: 1
- Fork event: 5
Last Year
- Issues event: 2
- Watch event: 14
- Pull request event: 1
- Fork event: 5
Committers
Last synced: 5 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Matt-Linux-Box | m****b@s****u | 234 |
| Alexander Ladd | x****r@A****n | 10 |
| Arfon Smith | a****n | 1 |
| Work | w****k@M****l | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 8
- Total pull requests: 36
- Average time to close issues: 11 months
- Average time to close pull requests: 4 months
- Total issue authors: 6
- Total pull request authors: 6
- Average comments per issue: 0.5
- Average comments per pull request: 0.83
- Merged pull requests: 3
- Bot issues: 0
- Bot pull requests: 30
Past Year
- Issues: 2
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: 5 minutes
- Issue authors: 1
- Pull request authors: 1
- Average comments per issue: 0.0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- derek1909 (2)
- malmaud (2)
- eveJiang (1)
- generush (1)
- cversteeg (1)
- IanQS (1)
Pull Request Authors
- dependabot[bot] (31)
- shauryagoyall (2)
- IanQS (2)
- xanderladd (2)
- arfon (1)
- mattgolub (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- matplotlib ==3.7.1
- numpy ==1.24.3
- scikit-learn ==1.2.2
- tensorflow ==2.8.0
