torch_blue: A Flexible Python Package for Bayesian Neural Networks in PyTorch

Published in JOSS (2026)

https://github.com/RAI-SCC/torch_blue

Science Score: 87.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in JOSS metadata
  • Academic publication links
    Links to: joss.theoj.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software
Last synced: about 1 month ago

Repository

Basic Info
  • Host: GitHub
  • Owner: RAI-SCC
  • License: bsd-3-clause
  • Language: Python
  • Default Branch: main
  • Size: 3.14 MB
Statistics
  • Stars: 12
  • Watchers: 1
  • Forks: 1
  • Open Issues: 13
  • Releases: 1
Created over 1 year ago · Last pushed about 1 month ago
Metadata Files
Readme Contributing License

README.md

torch_blue - A PyTorch-like library for Bayesian learning and uncertainty estimation


torch_blue provides a simple way for non-expert users to implement and train Bayesian Neural Networks (BNNs). Currently, it only supports Variational Inference (VI), but it will hopefully grow and expand in the future. To make the user experience as easy as possible, most components mirror their PyTorch counterparts.

Installation

We strongly recommend installing torch_blue in a dedicated Python 3.9+ virtual environment. You can install torch_blue from PyPI:

```console
$ pip install torch-blue
```

Alternatively, you can install torch_blue locally. To achieve this, there are two steps you need to follow:

  1. Clone the repository

```console
$ git clone https://github.com/RAI-SCC/torch_blue
```

  2. Install the code locally

```console
$ pip install -e .
```

To get the development dependencies, run:

```console
$ pip install -e .[dev]
```

To install the additional dependencies needed to run the scripts in the scripts directory, run:

```console
$ pip install -e .[scripts]
```

Documentation

Documentation is available online on Read the Docs.

Quickstart

This Quickstart guide assumes basic familiarity with PyTorch and knowledge of how to implement the intended model in it. For a (potentially familiar) example, see scripts/mnist_tutorial (available as a Jupyter notebook with comments or as a pure Python script), which contains a copy of the PyTorch Quickstart tutorial modified to train a BNN with variational inference.

Four levels are introduced in this guide:

  • Level 1: Simple sequential layer stacks
  • Level 2: Customizing Bayesian assumptions and VI kwargs
  • Level 3: Non-sequential models and log probabilities
  • Level 4: Custom modules with weights

Level 1

Many parts of a neural network remain completely unchanged when turning it into a BNN. Indeed, only Modules containing nn.Parameters need to be changed. Therefore, if a PyTorch model fulfills two requirements, it can be transferred almost unchanged:

  1. All PyTorch Modules containing parameters have equivalents in this package (table below).
  2. The model can be expressed purely as a sequential application of a list of layers, i.e. with nn.Sequential.

| PyTorch        | vi replacement |
|----------------|----------------|
| nn.Linear      | VILinear       |
| nn.Conv1d      | VIConv1d       |
| nn.Conv2d      | VIConv2d       |
| nn.Conv3d      | VIConv3d       |
| nn.Transformer | VITransformer  |

Given these two conditions, inherit the module from vi.VIModule instead of nn.Module and use vi.VISequential instead of nn.Sequential. Then replace all layers containing parameters as shown in the table above. For basic usage, initialize these modules with the same arguments as their PyTorch equivalents. For advanced usage, see Quickstart: Level 2. Many other layers can be included as-is, in particular activation functions, pooling, and padding (even dropout, though it should not be necessary since the prior acts as a regularizer). Recurrent and transposed convolution layers are currently not supported. Normalization layers may have parameters depending on their settings, but can likely be left non-Bayesian.

Additionally, the loss must be replaced. To start out, use vi.KullbackLeiblerLoss, which requires a Distribution with self.is_predictive_distribution=True and the size of the training dataset (this is important for balancing assumptions against data). Choose your Distribution from the table below based on the loss you would use in PyTorch.

[!IMPORTANT] KullbackLeiblerLoss requires the length of the dataset, not that of the dataloader, which is just the number of batches.
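The reason the dataset length matters is this balancing: in the usual variational objective, the KL term is amortized over the number of training examples, not the number of batches. A stdlib-only sketch of the bookkeeping (this is the standard ELBO convention; torch_blue's internal weighting may differ in detail):

```python
def elbo_loss(nll, kl, dataset_size):
    """Per-batch variational objective: data term plus amortized KL term.

    dataset_size must be len(dataset), NOT len(dataloader) - the
    latter is only the number of batches.
    """
    return nll + kl / dataset_size

# With 60_000 training examples and a total KL of 1.2e5, the per-example
# KL contribution is 2.0, so the prior does not swamp the data term.
loss = elbo_loss(nll=0.5, kl=1.2e5, dataset_size=60_000)
```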

| PyTorch             | vi replacement from vi.distributions |
|---------------------|--------------------------------------|
| nn.MSELoss          | MeanFieldNormal                      |
| nn.CrossEntropyLoss | Categorical                          |

[!NOTE] Reasons for the requirement to use VISequential (and how to overcome it) are described in Quickstart: Level 3. However, adding residual connections from the start to the end of a block of layers can also be achieved using VIResidualConnection, which acts the same as VISequential, but adds the input to the output.

Level 2

While the interface of VIModules is kept intentionally similar to PyTorch, all provided layers accept additional arguments that customize the Bayesian assumptions; custom modules should generally accept these and pass them on to submodules:

  • variational_distribution (Distribution): defines the weight distribution and variational parameters. The default MeanFieldNormal assumes normally distributed, uncorrelated weights described by a mean and a standard deviation. While there are currently no alternatives, the initial value of the standard deviation can be customized here.
  • prior (Distribution): defines the assumptions on the weight distribution and acts as a regularizer. The default MeanFieldNormal assumes normally distributed, uncorrelated weights with mean 0 and standard deviation 1 (also known as a standard normal prior). Mean and standard deviation can be adapted here. In particular, reducing the standard deviation may help convergence at the risk of an overconfident model. Other available priors:
      • BasicQuietPrior: an experimental prior that correlates mean and standard deviation to disincentivize noisy weights.
  • rescale_prior (bool): Experimental. Scales the prior similar to Kaiming initialization. May help with convergence, but may lead to overconfidence. Current research.
  • prior_initialization (bool): Experimental. Initializes parameters from the prior instead of according to standard non-Bayesian methods. May lead to much faster convergence, but can cause the issues Kaiming initialization counteracts unless rescale_prior is also set to True. Current research.
  • return_log_probs (bool): This is the topic of Quickstart: Level 3.

Level 3

For more advanced models, one feature of Variational Inference (VI) needs to be taken into account. Generally, a loss for VI requires the log probability, under both the variational and the prior distribution, of the weights actually used (which are sampled on each forward pass). Since it is quite inefficient to save the samples, these log probabilities are evaluated during the forward pass and returned by the model. Since this is only necessary for training, it can be controlled with the argument return_log_probs. Once the model is initialized, this flag can be changed by setting VIModule.return_log_probs, which either enables (True) or disables (False) the returning of the log probabilities for all submodules.

While torch_blue calculates and aggregates log probs internally, this is handled by the outermost VIModule. When returning log probs, this module will not have the expected output signature, but will instead return a VIReturn object. This class is a PyTorch Tensor that also carries log prob information in its additional log_probs attribute. This is the format torch_blue losses expect, so if you feed the output directly into a loss there should be no issues. While all PyTorch tensor operations can be performed on VIReturns, many will delete the log prob information and transform the object back into a Tensor. This needs to be considered when performing further operations on the model output. The simplest way to avoid issues is to wrap all operations - except the loss - in a VIModule, since log prob aggregation is only performed by the outermost module. For deployment, return_log_probs should be set to False. If multiple Tensors are returned by the model, each will carry all log probs.

[!NOTE] Always make sure your outermost module is a VIModule and keep in mind that its output will be a VIReturn object, which behaves like a Tensor but also carries weight log probabilities if return_log_probs == True. Losses in torch_blue expect this format.

[!NOTE] Due to Autosampling, all output Tensors, i.e. each VIReturn in the model output and the Tensor containing the log probs, have an additional dimension at the beginning representing the multiple samples necessary to properly evaluate the stochastic forward pass. This is only relevant for VIModules that are not contained within other VIModules. Loss functions are designed to expect and handle this output format, i.e. you can simply feed the model output into the loss and everything will work.

Level 4

Creating VIModules with Bayesian weights - which are typically called random variables in documentation and code - is arguably simpler than in PyTorch. Since a different number of weight matrices needs to be created depending on the variational distribution, the process is completely automated. For VIModules without weights, super().__init__ is called without arguments. Modules with random variables expect VIkwargs (which you should be familiar with from Level 2), but defaults are used if none are passed. More importantly, VIModules with weights call super().__init__ with the argument variable_shapes. The keys of this dictionary are the names of the random variables, and the values are the shapes of the weight matrices as tuples or lists. A value may also be set to None, in which case None will always be returned for that variable.

The insertion order of this dictionary matters, as it becomes the order of the names in the module attribute random_variables. random_variables, the shapes, and a similar attribute of the variational distribution called distribution_parameters are used to dynamically create the weight matrices. The weight matrices can be accessed as attributes of the module, which will cause a sample to be drawn and its log prob to be stored if needed.

Should you need to access the weight tensors directly you can use getattr and derive the name using the method variational_parameter_name.

[!IMPORTANT] Every access of the weights will draw a new sample and store a log probability. Aggregation of multiple log probs is handled internally, but unnecessary calls will distort the result.

Owner

  • Name: RAI-SCC
  • Login: RAI-SCC
  • Kind: organization

JOSS Publication

torch_blue: A Flexible Python Package for Bayesian Neural Networks in PyTorch
Published
January 23, 2026
Volume 11, Issue 117, Page 9415
Authors
Arvid Weyrauch ORCID
Karlsruhe Institute of Technology, Germany
Lars H. Heyen ORCID
Karlsruhe Institute of Technology, Germany
Juan Pedro Gutiérrez Hermosillo Muriedas ORCID
Karlsruhe Institute of Technology, Germany
Pei-Hsuan Hsia ORCID
Karlsruhe Institute of Technology, Germany
Asena Karolin Özdemir ORCID
Karlsruhe Institute of Technology, Germany
Achim Streit ORCID
Karlsruhe Institute of Technology, Germany
Markus Götz ORCID
Karlsruhe Institute of Technology, Germany, Helmholtz AI
Charlotte Debus ORCID
Karlsruhe Institute of Technology, Germany
Editor
Tristan Miller ORCID
Tags
Bayesian Neural Networks · Variational Inference

GitHub Events

Total
  • Delete event: 3
  • Pull request event: 3
  • Issues event: 3
  • Watch event: 3
  • Issue comment event: 1
  • Push event: 16
  • Pull request review event: 5
  • Pull request review comment event: 3
  • Create event: 8
Last Year
  • Delete event: 3
  • Pull request event: 3
  • Issues event: 3
  • Watch event: 3
  • Issue comment event: 1
  • Push event: 16
  • Pull request review event: 5
  • Pull request review comment event: 3
  • Create event: 8

Issues and Pull Requests

Last synced: about 1 month ago

All Time
  • Total issues: 1
  • Total pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: 3 months
  • Total issue authors: 1
  • Total pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: 3 months
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • ArvidWeyrauch (1)
Pull Request Authors
  • ArvidWeyrauch (1)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

pyproject.toml pypi
  • torch *
.github/workflows/release.yaml actions
  • actions/checkout 11bd71901bbe5b1630ceea73d27597364c9af683 composite
  • actions/setup-python 7f4fc3e22c37d6ff65e88745f38bd3157c663f7c composite
  • pypa/gh-action-pypi-publish 76f52bc884231f62b9a034ebfe128415bbaabdfc composite
  • step-security/harden-runner 002fdce3c6a235733a90a27c80493a3241e56863 composite
.github/workflows/run_tests.yaml actions
  • actions/checkout 11bd71901bbe5b1630ceea73d27597364c9af683 composite
  • actions/setup-python a26af69be951a213d495a4c3e4e4022e16d87065 composite
  • step-security/harden-runner 002fdce3c6a235733a90a27c80493a3241e56863 composite