bayesian-recurrence
A Bayesian Interpretation of Recurrence in Neural Networks
Science Score: 54.0%
This score indicates how likely this project is to be science-related, based on various indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 7.5%, to scientific vocabulary)
Repository
A Bayesian Interpretation of Recurrence in Neural Networks
Basic Info
- Host: GitHub
- Owner: idiap
- Language: Python
- Default Branch: main
- Size: 13.7 KB
Statistics
- Stars: 6
- Watchers: 3
- Forks: 0
- Open Issues: 0
- Releases: 1
Metadata Files
README.md
A Bayesian Interpretation of Recurrence in Neural Networks
This repository contains the Bayesian recurrent units (BRUs), implemented in PyTorch, that were defined in the following papers by A. Bittar and P. Garner:
- A Bayesian Interpretation of the Light Gated Recurrent Unit, ICASSP 2021
- Bayesian Recurrent Units and the Forward-Backward Algorithm, INTERSPEECH 2022
Contact: abittar@idiap.ch
Installation
git clone https://github.com/idiap/bayesian-recurrence.git
cd bayesian-recurrence
pip install -r requirements.txt
python setup.py install
Usage
After installation, the defined recurrent units are available as Python modules. One can then build networks of the desired Bayesian units and use them like any other PyTorch module.
import torch
import torch.nn as nn
from bayesian_recurrence.libru import liBRU
# Build input
batch_size = 4
nb_steps = 100
nb_inputs = 20
x = torch.Tensor(batch_size, nb_steps, nb_inputs)
nn.init.uniform_(x)
# Define network
net = liBRU(
    nb_inputs,
    layer_sizes=[128, 128, 10],
    bidirectional=True,
    hidden_type='probs',
    normalization='batchnorm',
    use_bias=False,
    dropout=0.,
)
# Pass input tensor through network
y = net(x)
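For readers without the package installed, the recurrence that the liBRU reinterprets, the light-GRU update studied in the ICASSP 2021 paper, can be sketched in plain NumPy. This is a minimal illustrative sketch only: the weight shapes, the initialization scale, and the function names are assumptions for the example, not the repository's actual parametrization.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def light_gru_step(x_t, h_prev, Wz, Uz, Wh, Uh):
    # Light-GRU update: a sigmoid update gate and a ReLU candidate state,
    # blended with the previous hidden state (no reset gate).
    z = sigmoid(x_t @ Wz + h_prev @ Uz)                # update gate in (0, 1)
    h_cand = np.maximum(0.0, x_t @ Wh + h_prev @ Uh)   # ReLU candidate
    return z * h_prev + (1.0 - z) * h_cand

rng = np.random.default_rng(0)
nb_inputs, nb_hidden, nb_steps = 20, 128, 100  # dims chosen to mirror the usage example
scale = 0.01                                   # small init keeps the recurrence stable

Wz = rng.normal(size=(nb_inputs, nb_hidden)) * scale
Uz = rng.normal(size=(nb_hidden, nb_hidden)) * scale
Wh = rng.normal(size=(nb_inputs, nb_hidden)) * scale
Uh = rng.normal(size=(nb_hidden, nb_hidden)) * scale

x = rng.uniform(size=(nb_steps, nb_inputs))    # one random input sequence
h = np.zeros(nb_hidden)
for t in range(nb_steps):
    h = light_gru_step(x[t], h, Wz, Uz, Wh, Uh)
# h has shape (128,): the hidden state after the full sequence
```

The repository's units add the Bayesian interpretation (e.g. `hidden_type='probs'`) on top of this kind of gated recurrence; see the two papers for the actual derivations.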
Owner
- Name: Idiap Research Institute
- Login: idiap
- Kind: organization
- Location: Centre du Parc, Martigny, Switzerland
- Website: http://www.idiap.ch
- Repositories: 73
- Profile: https://github.com/idiap
Citation (CITATION.cff)
cff-version: 1.1.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Bittar
    given-names: Alexandre
  - family-names: Garner
    given-names: Philip
title: A Bayesian Interpretation of Recurrence in Neural Networks
doi: 10.5281/zenodo.10016897
version: v1.0.0
date-released: 2023-10-16
GitHub Events
Total
- Watch event: 1
- Member event: 1
Last Year
- Watch event: 1
- Member event: 1
Issues and Pull Requests
Last synced: about 1 year ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Dependencies
- numpy ==1.23.0
- torch ==1.11.0
- typing_extensions ==4.2.0