Additive Bayesian Networks

Additive Bayesian Networks - Published in JOSS (2024)

https://github.com/furrer-lab/abn

Science Score: 98.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 42 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: arxiv.org, springer.com, wiley.com, joss.theoj.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software

Keywords

bayesian-network binomial categorical-data gaussian grouped-datasets mixed-effects multinomial multivariate poisson structure-learning

Keywords from Contributors

standardization

Scientific Fields

Engineering / Computer Science - 45% confidence
Physics / Physical Sciences - 40% confidence
Last synced: 4 months ago

Repository

Bayesian network analysis in R

Basic Info
Statistics
  • Stars: 6
  • Watchers: 2
  • Forks: 0
  • Open Issues: 37
  • Releases: 6
Topics
bayesian-network binomial categorical-data gaussian grouped-datasets mixed-effects multinomial multivariate poisson structure-learning
Created about 2 years ago · Last pushed 4 months ago
Metadata Files
Readme Changelog Contributing License Code of conduct Citation

README.md

abn: Additive Bayesian Networks


The R package abn is a tool for Bayesian network analysis, a form of probabilistic graphical model. It derives a directed acyclic graph (DAG) from empirical data that describes the dependency structure between random variables. The package provides routines for structure learning and parameter estimation of additive Bayesian network models.

Installation

Ubuntu Install Fedora Install MacOS Install Windows Install

abn and its installation process rely on various pieces of software that may or may not already be present on your system.

Prior to installing

In order for abn to work correctly on your system, some dependencies need to be installed. If you are on a Linux-based system, most of these dependencies are installed automatically when you follow the pak-based installation procedure described in the Installing from GitHub section.

For MacOS and Windows systems, some additional preparatory steps are required.

The following paragraphs provide, for the most common operating systems, detailed instructions on the steps to carry out before installing abn.

**Ubuntu**

You presumably have R installed already; if not, open a terminal and type:

```bash
apt-get install r-base
```

_**Note:** You might need to prepend `sudo` to this command._

All you need for the installation is to have the R package [pak](https://pak.r-lib.org/) installed. `pak` is installed like any other R package; however, it relies on `curl` being present on your system, so we make sure it is there:

```bash
apt-get install libcurl4-openssl-dev
```

Now, to install `pak`, start an R session and run:

```R
install.packages('pak', repos=c(CRAN="https://cran.r-project.org"))
```

With that you should be ready to [install `abn` from GitHub](#installing-from-github-recommended).
**Fedora**

You presumably have R installed already; if not, open a terminal and type:

```bash
dnf install R
```

_**Note:** You might need to prepend `sudo` to this command._

For the installation you need the R package [pak](https://pak.r-lib.org/) installed. `pak` is installed like any other R package; however, it relies on `curl` being installed on your system, so we make sure it is there:

```bash
dnf install libcurl-devel
```

Now, to install `pak`, start an R session and run:

```R
install.packages('pak', repos=c(CRAN="https://cran.r-project.org"))
```

There is one more thing we need to do before we can install `abn`:

**Install JAGS from source**

[JAGS](https://mcmc-jags.sourceforge.io/), _Just Another Gibbs Sampler_, is a program for analyzing Bayesian hierarchical models using Markov Chain Monte Carlo (MCMC) simulation. [rjags](https://cran.r-project.org/package=rjags) is R's interface to the `JAGS` library. `JAGS` is required for some simulations `abn` can perform. The steps needed to install `JAGS 4.3.2` are:

```bash
wget -O /tmp/jags.tar.gz https://sourceforge.net/projects/mcmc-jags/files/JAGS/4.x/Source/JAGS-4.3.2.tar.gz/download
cd /tmp
tar -xf jags.tar.gz
cd /tmp/JAGS-4.3.2
./configure --libdir=/usr/local/lib64
make
sudo make install
```

_**Note:** If you are on a 64-bit system (you likely are), mind the `--libdir=/usr/local/lib64` argument when launching `./configure`. Omitting this argument will lead to `rjags` "not seeing" `jags`._

On Fedora, `rjags` might need some special configuration for it to link properly to the `JAGS` library. It might also be necessary to add the path to the `JAGS` library to the linker path (see the [rjags INSTALL file](https://github.com/cran/rjags/blob/master/INSTALL) for further details). To add the `JAGS` library to the linker path, run the following commands:

```bash
echo "/usr/local/lib64" | sudo tee /etc/ld.so.conf.d/jags.conf
sudo /sbin/ldconfig
```

_**Note:** These commands might not be needed; you might first try to install the R package `rjags` and only run them if you encounter a `configure: error: Runtime link error`._

With that you should be ready to [install `abn` from GitHub](#installing-from-github-recommended).
**MacOS**

Most likely you have R installed already, but if not run:

```bash
brew install R
```

For the installation you need the R package [pak](https://pak.r-lib.org/) installed. `pak` is installed like any other R package; start an R session and run:

```R
install.packages('pak', repos=c(CRAN="https://cran.r-project.org"))
```

We will install the system dependencies with [Homebrew](https://brew.sh/). Head over to their site to see the installation process, or simply open a terminal and run:

```bash
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```

To correctly link to installed libraries and to build them, we need `pkg-config` and `automake`:

```bash
brew install pkg-config
brew install automake  # needed to run autoconf
```

We will use `wget` to download `JAGS` later, as well as the `openssl` development headers:

```bash
brew install wget
brew install openssl@1.1
```

**Dependencies**

On MacOS we need to install some system dependencies separately:

- **GSL**

  [GSL](https://www.gnu.org/software/gsl/), the _GNU Scientific Library_, is a numerical library for C/C++. It is required to compile `abn`'s C/C++ code. With Homebrew you can install the `GSL` binaries directly:

  ```bash
  brew install gsl
  ```

- **JAGS & rjags**

  [JAGS](https://mcmc-jags.sourceforge.io/), _Just Another Gibbs Sampler_, is a program for analyzing Bayesian hierarchical models using Markov Chain Monte Carlo (MCMC) simulation. [rjags](https://cran.r-project.org/package=rjags) is R's interface to the `JAGS` library. `JAGS` is required for some simulations `abn` can perform. With Homebrew you can install the `JAGS` binaries directly:

  ```bash
  brew install jags
  ```

  And now, to install `rjags`, open an R session and type:

  ```R
  install.packages("rjags", type="source", repos=c(CRAN="https://cran.r-project.org"))
  library("rjags")
  ```

- **INLA**

  [INLA](https://www.r-inla.org/) is an R package that is not hosted on CRAN and thus needs to be installed separately. `abn` uses `INLA` to fit some models. `INLA` relies on various other R packages and C/C++ libraries, so it needs some additional installation steps:

  ```bash
  brew install udunits
  brew install gdal  # installs also geos as dependency
  brew install proj
  ```

  Now, to install `INLA` itself, simply start an R session and run:

  ```R
  install.packages("INLA", repos = c(getOption("repos"), INLA = "https://inla.r-inla-download.org/R/stable"), dep = TRUE)
  ```

  If you run into trouble, please see also [INLA's installation instructions](https://www.r-inla.org/download-install) for further details.
**Windows**

For the installation you need the R package [pak](https://pak.r-lib.org/) installed. `pak` is installed like any other R package; start an R session and run:

```R
install.packages('pak', repos=c(CRAN="https://cran.r-project.org"))
```

**Dependencies**

On Windows we need to install some system dependencies separately:

- **GSL**

  [GSL](https://www.gnu.org/software/gsl/), the _GNU Scientific Library_, is a numerical library for C/C++. It is required to compile `abn`'s C/C++ code. On Windows, `GSL` is available, among others, through [cygwin](https://cygwin.com/index.html), which has a straightforward installation process. Either head over to the website, download and install the `setup-x86_64.exe` file, or use PowerShell:

  ```powershell
  Import-Module bitstransfer
  New-Item -ItemType Directory -Force -Path "C:\Program Files\cygwin"
  start-bitstransfer -source https://cygwin.com/setup-x86_64.exe "C:\Program Files\cygwin\setup-x86_64.exe"
  Start-Process -Wait -FilePath "C:\Program Files\cygwin\setup-x86_64.exe" -ArgumentList "/S" -PassThru
  ```

- **JAGS & rjags**

  [JAGS](https://mcmc-jags.sourceforge.io/), _Just Another Gibbs Sampler_, is a program for analyzing Bayesian hierarchical models using Markov Chain Monte Carlo (MCMC) simulation. [rjags](https://cran.r-project.org/package=rjags) is R's interface to the `JAGS` library. `JAGS` is required for some simulations `abn` can perform. You can either head over to the [JAGS download page](https://sourceforge.net/projects/mcmc-jags/files/JAGS/4.x/Windows/), download and execute the installer, or use PowerShell. The following instructions download and install `JAGS 4.3.1` via PowerShell:

  ```powershell
  Import-Module bitstransfer
  New-Item -ItemType Directory -Force -Path "C:\Program Files\JAGS\JAGS-4.3.1"
  start-bitstransfer -source https://sourceforge.net/projects/mcmc-jags/files/JAGS/4.x/Windows/JAGS-4.3.1.exe/download "C:\Program Files\JAGS\JAGS-4.3.1\JAGS-4.3.1.exe"
  Start-Process -Wait -FilePath "C:\Program Files\JAGS\JAGS-4.3.1\JAGS-4.3.1.exe" -ArgumentList "/S" -PassThru
  ```

  In order to make sure `rjags` finds `JAGS`, we set the environment variable `JAGS_HOME` before installing `rjags`. To do so, open your R session and type:

  ```R
  Sys.setenv(JAGS_HOME="C:/Program Files/JAGS/JAGS-4.3.1")
  install.packages("rjags", repos=c(CRAN="https://cran.r-project.org"))
  library("rjags")
  ```

- **INLA**

  [INLA](https://www.r-inla.org/) is an R package that is not hosted on CRAN and thus needs to be installed separately. `abn` uses `INLA` to fit some models. The installation is straightforward; simply start an R session and run:

  ```R
  install.packages("INLA", repos = c(getOption("repos"), INLA = "https://inla.r-inla-download.org/R/stable"), dep = TRUE)
  ```

  If you run into trouble, please see also [INLA's installation instructions](https://www.r-inla.org/download-install) for further details.


R version support

R versions >= 4.4 are officially supported.
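
You can check the version of your current R session before installing:

```R
getRversion()           # prints the running R version
getRversion() >= "4.4"  # TRUE if the officially supported minimum is met
```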

Installing from GitHub (recommended)

From GitHub you can install any version and/or state of the abn repository. We recommend not installing main directly, but rather a specific version. Head over to our version list to see which version is the latest; here we assume it is 3.1.2.

We use pak for the installation process. If you followed the Prior to installing section, pak should already be installed.

If not, install it first. Open an R session and type:

```R
install.packages('pak', repos=c(CRAN="https://cran.r-project.org"))
```

To install abn, run the following in your R session:

```R
pak::repo_add(INLA = "https://inla.r-inla-download.org/R/stable/")
pak::pkg_install("furrer-lab/abn@3.1.2", dependencies=TRUE)
```

_**Note:** The first command can be skipped on MacOS or Windows._

Installing from CRAN

> [!NOTE]
> When installing from CRAN you might not get the latest version of abn. If you want the latest version, follow the instructions from Installing from GitHub.

In order to install the abn version on CRAN, open an R session and type:

```R
pak::repo_add(INLA = "https://inla.r-inla-download.org/R/stable/")
pak::pkg_install("abn", dependencies=TRUE)
```

_**Note:** The first command can be skipped on MacOS or Windows._

abn has several dependencies that are not available on CRAN. This is why we rely on pak for the installation, and why the Prior to installing section should be followed before installing abn from CRAN.[^1]

[^1]: The abn package includes certain features, such as multiprocessing and integration with the INLA package, which are limited or available only on specific CRAN flavors. While it is possible to relax the testing process by, e.g., excluding tests of these functionalities, we believe that rigorous testing is important for reliable software development, especially for a package like abn that includes complex functionalities. We have implemented a rigorous testing framework similar to CRAN's to validate these functionalities in our development process. Our aim is to maximize the reliability of the abn package under various conditions, and we are dedicated to providing a robust and reliable package. We appreciate your understanding as we work towards making abn available on CRAN soon.

Installing from source

It is also possible to clone this repository and install abn from source.

> [!NOTE]
> In this case, too, you first need to prepare your system by following the Prior to installing section.

Installing from source is done with the following steps:

  1. Clone the repository and go to the root directory of the repo:

```bash
git clone https://github.com/furrer-lab/abn
cd abn
```

  2. Deactivate abn's development environment (a renv virtual environment):

```R
renv::deactivate()
```

  3. Build and install the local content with dependencies:

```R
pak::repo_add(INLA = "https://inla.r-inla-download.org/R/stable/")
pak::local_install(dependencies=TRUE)
```

_**Note:** The first command can be skipped on MacOS or Windows._

Quickstart

Explore the basics of data analysis using additive Bayesian networks with the abn package through the simple examples below. The datasets required for these examples are included in the abn package.

For a deeper understanding, refer to the manual pages on the abn homepage, which include numerous examples. Key pages to visit are fitAbn(), buildScoreCache(), mostProbable(), and searchHillClimber(). Also, see the examples below for a quick overview of the package's capabilities.

Features

The R package abn provides routines for determining optimal additive Bayesian network models for a given data set. The core functionality is concerned with model selection - determining the most likely model of data from interdependent variables. The model selection process can incorporate expert knowledge by specifying structural constraints, such as which arcs are banned or retained.

The general workflow with abn follows a three-step process:

  1. Determine the model search space: The function buildScoreCache() builds a cache of pre-computed scores for each node's possible parent combinations. For this, you need to specify the data types of the variables in the data set and the structural constraints of the model (e.g. which arcs are banned or retained, and the maximum number of parents per node).

  2. Structure learning: abn offers different structure learning algorithms:

    • The exact structure learning algorithm from Koivisto and Sood (2004) is implemented in C and can be called with the function mostProbable(), which finds the most probable DAG for a given data set. The function searchHeuristic() provides a set of heuristic search algorithms, including hill-climbing, tabu search, and simulated annealing, implemented in R. searchHillClimber() searches for high-scoring DAGs using a random-restart greedy hill-climber heuristic and is implemented in C; it deviates slightly from the method originally presented by Heckerman et al. (1995) (for details, consult the help page ?abn::searchHillClimber).
  3. Parameter estimation: The function fitAbn() estimates the model's parameters based on the DAG from the previous step.

abn allows for two different model formulations, specified with the argument method:

  • method = "mle" fits a model under the frequentist paradigm using information-theoretic criteria to select the best model.

  • method = "bayes" estimates the posterior distribution of the model parameters based on two Laplace approximation methods, that is, a method for Bayesian inference and an alternative to Markov Chain Monte Carlo (MCMC): A standard Laplace approximation is implemented in the abn source code but switches in specific cases (see help page ?fitAbn) to the Integrated Nested Laplace Approximation from the INLA package requiring the installation thereof.

To generate new observations from a fitted ABN model, the function simulateAbn() simulates data based on the DAG and the estimated parameters from the previous step. simulateAbn() is available for both method = "mle" and method = "bayes" and requires JAGS and the rjags package to be installed.
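
A rough sketch of this step (assuming a fitted model `myfit` as in Example 1 below and that JAGS/rjags are installed; check ?simulateAbn for the available arguments and their defaults):

``` r
# Simulate new observations from the fitted ABN model `myfit` (see Example 1).
# MCMC settings are left at their defaults; see ?simulateAbn for details.
simdat <- simulateAbn(object = myfit)
str(simdat)  # simulated data with the same variables as the original data set
```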

Supported Data types

The abn package supports the following distributions for the variables in the network:

  • Gaussian distribution for continuous variables.

  • Binomial distribution for binary variables.

  • Poisson distribution for variables with count data.

  • Multinomial distribution for categorical variables (only available with method = "mle").

Unlike other packages, abn does not restrict the combination of parent-child distributions.
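
For illustration, a distribution list mixing all four supported types might look like this (the variable names are hypothetical and must match the column names of your data frame):

``` r
# Hypothetical distribution list combining all four supported node types
my_dists <- list(weight   = "gaussian",    # continuous
                 infected = "binomial",    # binary
                 lesions  = "poisson",     # count
                 breed    = "multinomial") # categorical, method = "mle" only
```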

Multilevel Models for Grouped Data Structures

The analysis of "hierarchical" or "grouped" data, in which observations are nested within higher-level units, requires statistical models with parameters that vary across groups (e.g. mixed-effect models).

abn allows the user to control for one-layer clustering, where observations are grouped into a single layer of clusters that are themselves assumed to be independent, while observations within a cluster may be correlated (e.g. students nested within schools, repeated measurements over time for each patient, etc.). The argument group.var specifies the discrete variable that defines the group structure; the grouping is then accounted for with group-specific (random) effects in each node's model.

For example, when studying student test scores across different schools, a varying-intercept model allows for the possibility that average test scores (the intercept) are higher in one school than another due to factors specific to each school. This can be modeled in abn by setting the argument group.var to the variable containing the school names. The model is then fitted as a varying-intercept model, where the intercept is allowed to vary across schools, but the slopes are assumed to be the same for all schools.
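
A minimal sketch of this school scenario (the data frame `scores` and its column names are hypothetical):

``` r
# `scores` is a hypothetical data frame with pupil-level variables and a factor `school`
dists_school <- list(test_score = "gaussian", hours_studied = "gaussian", passed = "binomial")

cache_grp <- buildScoreCache(data.df = scores, data.dists = dists_school,
                             group.var = "school",  # grouping variable, not listed in the dists
                             method = "mle", max.parents = 2)
dag_grp <- mostProbable(score.cache = cache_grp)
fit_grp <- fitAbn(object = dag_grp, method = "mle")  # varying (random) intercept per school
```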

Under the frequentist paradigm (method = "mle"), abn relies on the lme4 package to fit generalized linear mixed models (GLMMs) for Binomial, Poisson, and Gaussian distributed variables. For multinomial distributed variables, abn fits a multinomial baseline category logit model with random effects using the mclogit package. Currently, only one-layer clustering is supported (e.g., for method = "mle", this corresponds to a random intercept model).

With a Bayesian approach (method = "bayes"), abn relies on its own implementation of the Laplace approximation and the package INLA to fit a single-level hierarchical model for Binomial, Poisson, and Gaussian distributed variables. Multinomial distributed variables in general (see Section Supported Data Types) are not yet implemented with method = "bayes".

Basic Background

Bayesian network modeling is a data analysis technique ideally suited to messy, highly correlated and complex datasets. This methodology is rather distinct from other forms of statistical modeling in that its focus is on structure discovery—determining an optimal graphical model that describes the interrelationships in the underlying processes that generated the data. It is a multivariate technique and can be used for one or many dependent variables. This is a data-driven approach, as opposed to relying only on subjective expert opinion to determine how variables of interest are interrelated (for example, structural equation modeling).

Below and on the package's website, we provide some cookbook-type examples of how to perform Bayesian network structure discovery analyses with observational data. The particular type of Bayesian network models considered here are additive Bayesian networks. These are rather different, mathematically speaking, from the standard form of Bayesian network models (for binary or categorical data) presented in the academic literature, which typically use an analytically elegant but arguably interpretation-wise opaque contingency table parametrization. An additive Bayesian network model is simply a multidimensional regression model, e.g., directly analogous to generalized linear modeling but with all variables potentially dependent.

An example can be found in the American Journal of Epidemiology, where this approach was used to investigate risk factors for child diarrhea. A special issue of Preventive Veterinary Medicine on graphical modeling features several articles that use abn to fit epidemiological data (e.g., Ludwig et al., 2013). Introductions to this methodology can be found in Emerging Themes in Epidemiology and in Computers in Biology and Medicine where it is compared to other approaches.

What is an additive Bayesian network?

Additive Bayesian network (ABN) models are statistical models that use the principles of Bayesian statistics and graph theory. They provide a framework for representing data with multiple variables, known as multivariate data.

ABN models are a graphical representation of (Bayesian) multivariate regression. This form of statistical analysis enables the prediction of multiple outcomes from a given set of predictors while simultaneously accounting for the relationships between these outcomes.

In other words, additive Bayesian network models extend the concept of generalized linear models (GLMs), which are typically used to predict a single outcome, to scenarios with multiple dependent variables. This makes them a powerful tool for understanding complex, multivariate datasets.
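
Schematically, an ABN factorizes the joint distribution of the variables into one GLM-like regression per node; as a rough sketch (the link function $g_i$ and the exact form of the linear predictor depend on each node's distribution):

$$
p(x_1,\dots,x_k) = \prod_{i=1}^{k} p\big(x_i \mid \mathrm{pa}(x_i)\big),
\qquad
g_i\big(\mathrm{E}[x_i \mid \mathrm{pa}(x_i)]\big) = \alpha_i + \sum_{j \in \mathrm{pa}(i)} \beta_{ij}\, x_j .
$$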

The term Bayesian network is interpreted differently across various fields.

Bayesian network models often involve binary nodes, arguably the most frequently used type of Bayesian network. These models typically use a contingency table instead of an additive parameter formulation. This approach allows for mathematical elegance and enables key metrics like model goodness of fit and marginal posterior parameters to be estimated analytically (i.e., from a formula) rather than numerically (an approximation). However, this parametrization may not be parsimonious, and the interpretation of the model parameters is less straightforward than the usual Generalized Linear Model (GLM) type models, which are prevalent across all scientific disciplines.

While this is a crucial practical distinction, it’s a relatively low-level technical one, as the primary aspect of BN modeling is that it’s a form of graphical modeling – a model of the data’s joint probability distribution. This joint – multidimensional – aspect makes this methodology highly attractive for complex data analysis and sets it apart from more standard regression techniques, such as GLMs, GLMMs, etc., which are only one-dimensional as they assume all covariates are independent. While this assumption is entirely reasonable in a classical experimental design scenario, it’s unrealistic for many observational studies in fields like medicine, veterinary science, ecology, and biology.

Examples

Example 1: Basic Usage

This basic example shows the typical workflow:

``` r
library(abn)

# Built-in toy dataset with two Gaussian variables G1 and G2,
# two Binomial variables B1 and B2, and one multinomial variable C
str(g2b2c_data)

# Define the distributions of the variables
dists <- list(G1 = "gaussian", B1 = "binomial", B2 = "binomial", C = "multinomial", G2 = "gaussian")

# Build the score cache
cacheMLE <- buildScoreCache(data.df = g2b2c_data,
                            data.dists = dists,
                            method = "mle",
                            max.parents = 2)

# Find the most probable DAG
dagMP <- mostProbable(score.cache = cacheMLE)

# Print the most probable DAG
print(dagMP)

# Plot the most probable DAG
plot(dagMP)

# Fit the most probable DAG
myfit <- fitAbn(object = dagMP, method = "mle")

# Print the fitted DAG
print(myfit)
```

Example 2: Restrict Model Search Space

Based on example 1, we may know that the arc G1->G2 is not possible and that the arc from C -> G2 must be present. This "expert knowledge" can be included in the model by banning the arc from G1 to G2 and retaining the arc from C to G2.

The ban and retain matrices are specified as adjacency matrices with 0 and 1 entries, where a 1 indicates that the corresponding arc is banned or retained, respectively. Row and column names must match the variable names in the data set. Each row represents a child and each column a potential parent: a 1 in a given row and column marks the column variable as a (banned or retained) parent of the row variable. For example, a 1 in the row for G2 and the column for G1 bans G1 as a parent of G2, i.e., the arc G1 -> G2.

Further, we can restrict the maximum number of parents per node to 2.

```r
# Ban the arc G1 -> G2 (row = child G2, column = parent G1)
banmat <- matrix(0, nrow = 5, ncol = 5, dimnames = list(names(dists), names(dists)))
banmat["G2", "G1"] <- 1

# Always retain the arc C -> G2 (row = child G2, column = parent C)
retainmat <- matrix(0, nrow = 5, ncol = 5, dimnames = list(names(dists), names(dists)))
retainmat["G2", "C"] <- 1

# Limit the maximum number of parents to 2
max.par <- 2

# Build the score cache
cacheMLE_small <- buildScoreCache(data.df = g2b2c_data,
                                  data.dists = dists,
                                  method = "mle",
                                  dag.banned = banmat,
                                  dag.retained = retainmat,
                                  max.parents = max.par)

print(paste("Without restrictions from example 1: ", nrow(cacheMLE$node.defn)))
print(paste("With restrictions as in example 2: ", nrow(cacheMLE_small$node.defn)))
```

Example 3: Grouped Data Structures

Depending on the data structure, we may want to control for one-layer clustering, where observations are grouped into a single layer of clusters that are themselves assumed to be independent, but observations within the clusters may be correlated (e.g., students nested within schools, measurements over time for each patient, etc.).

Currently, abn supports only one-layer clustering.

```r
# Built-in toy data set
str(g2pbcgrp)

# Define the distributions of the variables
# (the grouping variable "group" is not among the list of variable distributions)
dists <- list(G1 = "gaussian", P = "poisson", B = "binomial", C = "multinomial", G2 = "gaussian")

# Ban arcs such that C has only B and P as parents
ban.mat <- matrix(0, nrow = 5, ncol = 5, dimnames = list(names(dists), names(dists)))
ban.mat[4, 1] <- 1
ban.mat[4, 4] <- 1
ban.mat[4, 5] <- 1

# Build the score cache
cache <- buildScoreCache(data.df = g2pbcgrp,
                         data.dists = dists,
                         group.var = "group",
                         dag.banned = ban.mat,
                         method = "mle",
                         max.parents = 2)

# Find the most probable DAG
dag <- mostProbable(score.cache = cache)

# Plot the most probable DAG
plot(dag)

# Fit the most probable DAG
fit <- fitAbn(object = dag, method = "mle")

# Plot the fitted DAG
plot(fit)

# Print the fitted DAG
print(fit)
```

Example 4: Using INLA vs internal Laplace approximation

Under a Bayesian approach, abn automatically switches to the Integrated Nested Laplace Approximation from the INLA package if the internal Laplace approximation fails to converge. However, we can also force the use of INLA by setting the argument control=list(max.mode.error=100).

The following example shows that the results are very similar. It also shows how to constrain arcs as formula objects and how to specify different parent limits for each node separately.

``` r
library(abn)

# Subset of the built-in dataset, see ?ex0.dag.data
mydat <- ex0.dag.data[, c("b1","b2","g1","g2","b3","g3")]  # take a subset of cols

# Set up the distribution list for each node
mydists <- list(b1="binomial", b2="binomial", g1="gaussian",
                g2="gaussian", b3="binomial", g3="gaussian")

# Structural constraints:
# - ban the arc from b2 to b1
# - always retain the arc from g2 to g1

# Parent limits - can be specified for each node separately
max.par <- list("b1"=2, "b2"=2, "g1"=2, "g2"=2, "b3"=2, "g3"=2)

# Now build the cache of pre-computed scores according to the structural constraints
res.c <- buildScoreCache(data.df=mydat, data.dists=mydists,
                         dag.banned= ~b1|b2,    # ban arc from b2 to b1
                         dag.retained= ~g1|g2,  # always retain arc from g2 to g1
                         max.parents=max.par)

# Repeat but using R-INLA. The mlik values should be virtually identical.
if(requireNamespace("INLA", quietly = TRUE)){
  res.inla <- buildScoreCache(data.df=mydat, data.dists=mydists,
                              dag.banned= ~b1|b2,    # ban arc from b2 to b1
                              dag.retained= ~g1|g2,  # always retain arc from g2 to g1
                              max.parents=max.par,
                              control=list(max.mode.error=100))  # force the use of INLA

  ## Comparison - very similar
  difference <- res.c$mlik - res.inla$mlik
  summary(difference)
}
```

Contributing

We greatly appreciate contributions from the community and are excited to welcome you to the development process of the abn package. Here are some guidelines to help you get started:

  1. Seeking Support: If you need help with using the abn package, you can seek support by creating a new issue on our GitHub repository. Please describe your problem in detail and include a minimal reproducible example if possible.

  2. Reporting Issues or Problems: If you encounter any issues or problems with the software, please report them by creating a new issue on our GitHub repository. When reporting an issue, try to include as much detail as possible, including steps to reproduce the issue, your operating system and R version, and any error messages you received.

  3. Software Contributions: We encourage contributions directly via pull requests on our GitHub repository. Before starting your work, please first create an issue describing the contribution you wish to make. This allows us to discuss and agree on the best way to integrate your contribution into the package.

By participating in this project, you agree to abide by our code of conduct. We are committed to making participation in this project a respectful and harassment-free experience for everyone.

Citation

If you use abn in your research, please cite it as follows:

``` r
citation("abn")
```

To cite the software implementation of the R package 'abn' use:

Delucchi M, Furrer R, Kratzer G, Lewis F, Liechti J, Pittavino M, Cherneva K (2024). abn: Modelling Multivariate Data with Additive Bayesian Networks. R package version 3.1.3, https://CRAN.R-project.org/package=abn.

To cite the methodology of the R package 'abn' use:

Kratzer G, Lewis F, Comin A, Pittavino M, Furrer R (2023). “Additive Bayesian Network Modeling with the R Package abn.” Journal of Statistical Software, 105(8), 1-41. doi:10.18637/jss.v105.i08 https://doi.org/10.18637/jss.v105.i08.

To cite the application of mixed-effects ABN use:

Delucchi M, Liechti J, Spinner G, Furrer R (2024). “abn: Additive Bayesian Networks.” Journal of Open Source Software, 9(101), 6822. R package version 3.1.3, https://doi.org/10.21105/joss.06822.

To cite an example of a typical ABN analysis use:

Kratzer, G., Lewis, F.I., Willi, B., Meli, M.L., Boretti, F.S., Hofmann-Lehmann, R., Torgerson, P., Furrer, R. and Hartnack, S. (2020). Bayesian Network Modeling Applied to Feline Calicivirus Infection Among Cats in Switzerland. Frontiers in Veterinary Science, 7, 73.

License

The abn package is licensed under the GNU General Public License v3.0.

Code of Conduct

Please note that the abn project is released with a Contributor Code of Conduct. By contributing to this project, you agree to abide by its terms.

Applications

The abn website provides a comprehensive set of documented case studies, numerical accuracy/quality assurance exercises, and additional documentation.

Technical articles

Application articles

Workshops

Causality:

  • 4 December 2018, talk by Beate Sick & Gilles Kratzer at the 1st Causality workshop: Bayesian Networks meet Observational Data. (UZH, Switzerland)

ABN modeling

Presentations

  • 4 October 2018, talk at Nutricia (Danone): Multivariable analysis: variable and model selection in system epidemiology. (Utrecht, Netherlands)

  • 30 May 2018, Brown Bag Seminar at ZHAW. Presentation: Bayesian Networks Learning in a Nutshell. (Winterthur, Switzerland)

Owner

  • Name: Applied Statistics
  • Login: furrer-lab
  • Kind: organization
  • Location: Switzerland

Software developed in the research group of Applied Statistics, led by Prof. Dr. Reinhard Furrer

JOSS Publication

Additive Bayesian Networks
Published
September 30, 2024
Volume 9, Issue 101, Page 6822
Authors
Matteo Delucchi ORCID
Department of Mathematical Modeling and Machine Learning, University of Zurich, Zürich, Switzerland, Centre for Computational Health, Institute of Computational Life Sciences, Zurich University of Applied Sciences (ZHAW), Wädenswil, Switzerland
Jonas I. Liechti ORCID
www.T4D.ch, T4D GmbH, Zurich, Switzerland
Georg R. Spinner ORCID
Centre for Computational Health, Institute of Computational Life Sciences, Zurich University of Applied Sciences (ZHAW), Wädenswil, Switzerland
Reinhard Furrer ORCID
Department of Mathematical Modeling and Machine Learning, University of Zurich, Zürich, Switzerland
Editor
Chris Vernon ORCID
Tags
data science mixed-effects models Bayesian networks graphical models

Citation (CITATION.cff)

cff-version: "1.2.0"
authors:
- family-names: Delucchi
  given-names: Matteo
  orcid: "https://orcid.org/0000-0002-9327-1496"
- family-names: Liechti
  given-names: Jonas I.
  orcid: "https://orcid.org/0000-0003-3447-3060"
- family-names: Spinner
  given-names: Georg R.
  orcid: "https://orcid.org/0000-0001-9640-8155"
- family-names: Furrer
  given-names: Reinhard
  orcid: "https://orcid.org/0000-0002-6319-2332"
contact:
- family-names: Furrer
  given-names: Reinhard
  orcid: "https://orcid.org/0000-0002-6319-2332"
doi: 10.5281/zenodo.13788885
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Delucchi
    given-names: Matteo
    orcid: "https://orcid.org/0000-0002-9327-1496"
  - family-names: Liechti
    given-names: Jonas I.
    orcid: "https://orcid.org/0000-0003-3447-3060"
  - family-names: Spinner
    given-names: Georg R.
    orcid: "https://orcid.org/0000-0001-9640-8155"
  - family-names: Furrer
    given-names: Reinhard
    orcid: "https://orcid.org/0000-0002-6319-2332"
  date-published: 2024-09-30
  doi: 10.21105/joss.06822
  issn: 2475-9066
  issue: 101
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 6822
  title: Additive Bayesian Networks
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.06822"
  volume: 9
title: Additive Bayesian Networks

Papers & Mentions

Total mentions: 6

Attitudes of Austrian veterinarians towards euthanasia in small animal practice: impacts of age and gender on views on euthanasia
Additive Bayesian networks for antimicrobial resistance and potential risk factors in non-typhoidal Salmonella isolates from layer hens in Uganda
Multivariate Analysis of the Determinants of the End-Product Quality of Manure-Based Composts and Vermicomposts Using Bayesian Network Modelling
Improving epidemiologic data analyses through multivariate regression modelling
Nutrient-cycling mechanisms other than the direct absorption from soil may control forest structure and dynamics in poor Amazonian soils
Self-Perceived Health, Objective Health, and Quality of Life among People Aged 50 and Over: Interrelationship among Health Indicators in Italy, Spain, and Greece

GitHub Events

Total
  • Issues event: 34
  • Watch event: 1
  • Delete event: 4
  • Issue comment event: 37
  • Push event: 49
  • Pull request review comment event: 22
  • Pull request review event: 13
  • Pull request event: 37
  • Fork event: 2
  • Create event: 26
Last Year
  • Create event: 26
  • Issues event: 35
  • Watch event: 2
  • Delete event: 4
  • Member event: 1
  • Issue comment event: 37
  • Push event: 50
  • Pull request review comment event: 22
  • Pull request review event: 13
  • Pull request event: 38
  • Fork event: 2

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 311
  • Total Committers: 4
  • Avg Commits per committer: 77.75
  • Development Distribution Score (DDS): 0.408
Past Year
  • Commits: 36
  • Committers: 3
  • Avg Commits per committer: 12.0
  • Development Distribution Score (DDS): 0.528
Top Committers
Name Email Commits
Jonas I. Liechti j****l@t****h 184
Matteo Delucchi m****l@m****m 116
github-actions[bot] 4****] 8
reinhardfurrer r****r@m****h 3
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 131
  • Total pull requests: 133
  • Average time to close issues: 24 days
  • Average time to close pull requests: 4 days
  • Total issue authors: 9
  • Total pull request authors: 5
  • Average comments per issue: 0.78
  • Average comments per pull request: 1.47
  • Merged pull requests: 100
  • Bot issues: 0
  • Bot pull requests: 34
Past Year
  • Issues: 38
  • Pull requests: 57
  • Average time to close issues: 5 days
  • Average time to close pull requests: 3 days
  • Issue authors: 5
  • Pull request authors: 5
  • Average comments per issue: 0.47
  • Average comments per pull request: 0.79
  • Merged pull requests: 43
  • Bot issues: 0
  • Bot pull requests: 13
Top Authors
Issue Authors
  • matteodelucchi (81)
  • j-i-l (32)
  • magalichampion (8)
  • dhvalden (3)
  • brunoholiva (2)
  • abhishektiwari (2)
  • joshua-zh (1)
  • learbuehrer (1)
  • alexrai93 (1)
Pull Request Authors
  • matteodelucchi (50)
  • j-i-l (38)
  • github-actions[bot] (34)
  • magalichampion (10)
  • learbuehrer (1)
Top Labels
Issue Labels
bug (33) enhancement (23) good first issue (14) documentation (9) help wanted (7) code cleaning (4) question (3) wontfix (3) pkgdown::check (2)
Pull Request Labels
CRAN::passed (40) URL::passed (36) version-release (34) linting::failed (27) bug (23) pkgdown::passed (18) CompVignettes::passed (16) enhancement (15) memory::failed (12) documentation (10) CRAN::check (8) memory::check (8) memory::passed (8) release::published (5) CRAN::failed (4) CompVignettes::build (3) linting::check (2) linting::passed (2) pkgdown::failed (2) help wanted (2) good first issue (2) pkgdown::check (2) URL::check (1) macosInstall::passed (1) windowsInstall::passed (1) fedoraInstall::passed (1) ubuntuInstall::passed (1) macos::failed (1) windows::failed (1)

Packages

  • Total packages: 1
  • Total downloads:
    • cran 752 last-month
  • Total docker downloads: 21,777
  • Total dependent packages: 1
  • Total dependent repositories: 2
  • Total versions: 34
  • Total maintainers: 1
cran.r-project.org: abn

Modelling Multivariate Data with Additive Bayesian Networks

  • Versions: 34
  • Dependent Packages: 1
  • Dependent Repositories: 2
  • Downloads: 752 Last month
  • Docker Downloads: 21,777
Rankings
Downloads: 10.9%
Docker downloads count: 12.5%
Average: 15.2%
Dependent packages count: 18.1%
Dependent repos count: 19.2%
Maintainers (1)
Last synced: 4 months ago

Dependencies

DESCRIPTION cran
  • R >= 4.0.0 depends
  • Rcpp * imports
  • Rgraphviz * imports
  • doParallel * imports
  • foreach * imports
  • graph * imports
  • lme4 * imports
  • mclogit * imports
  • methods * imports
  • nnet * imports
  • rjags * imports
  • stringi * imports
  • INLA * suggests
  • Matrix >= 1.6.3 suggests
  • MatrixModels >= 0.5.3 suggests
  • R.rsp * suggests
  • RhpcBLASctl * suggests
  • boot * suggests
  • brglm * suggests
  • entropy * suggests
  • knitr * suggests
  • moments * suggests
  • testthat >= 3.0.0 suggests
.github/workflows/CRAN_checks.yml actions
  • actions/checkout v4 composite
  • actions/upload-artifact v4 composite
  • r-lib/actions/setup-r v2 composite
  • r-lib/actions/setup-r-dependencies v2 composite
.github/workflows/publish_release.yml actions
  • actions/checkout v4 composite
  • ncipollo/release-action v1 composite
.github/workflows/quick-testthat.yml actions
  • actions/checkout v4 composite