adaptr

adaptr: an R package for simulating and comparing adaptive clinical trials - Published in JOSS (2022)

https://github.com/inceptdk/adaptr

Science Score: 93.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 11 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: joss.theoj.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software
Last synced: 6 months ago

Repository

Adaptive Trial Simulator

Basic Info
Statistics
  • Stars: 14
  • Watchers: 3
  • Forks: 2
  • Open Issues: 2
  • Releases: 8
Created about 4 years ago · Last pushed almost 2 years ago
Metadata Files
Readme Changelog License

README.Rmd

---
output: github_document
---



```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
```

# adaptr 


[![CRAN status](https://www.r-pkg.org/badges/version/adaptr)](https://cran.r-project.org/package=adaptr)
[![R-CMD-check](https://github.com/INCEPTdk/adaptr/workflows/R-CMD-check/badge.svg)](https://github.com/INCEPTdk/adaptr/actions/)
[![status](https://joss.theoj.org/papers/10.21105/joss.04284/status.svg)](https://joss.theoj.org/papers/10.21105/joss.04284)
[![codecov](https://codecov.io/gh/INCEPTdk/adaptr/branch/main/graph/badge.svg)](https://app.codecov.io/gh/INCEPTdk/adaptr/)
![total downloads from RStudio mirror](https://cranlogs.r-pkg.org/badges/grand-total/adaptr)


The `adaptr` package simulates adaptive (multi-arm, multi-stage) clinical trials
using adaptive stopping, adaptive arm dropping and/or response-adaptive
randomisation.

The package has been developed as part of the
[INCEPT (Intensive Care Platform Trial) project](https://incept.dk/), primarily
supported by a grant from [Sygeforsikringen "danmark"](https://www.sygeforsikring.dk/).

## Resources

* [Website](https://inceptdk.github.io/adaptr/) - stand-alone website with full
package documentation
* [adaptr: an R package for simulating and comparing adaptive clinical trials](https://doi.org/10.21105/joss.04284) -
article in the Journal of Open Source Software describing the package
* [An overview of methodological considerations regarding adaptive stopping, arm dropping and randomisation in clinical trials](https://doi.org/10.1016/j.jclinepi.2022.11.002) - article in the Journal of
Clinical Epidemiology describing key methodological considerations in adaptive
trials with description of the workflow and a simulation-based example using the
package

**Examples:**

* [Effects of duration of follow-up and lag in data collection on the performance of adaptive clinical trials](https://doi.org/10.1002/pst.2342) - article in Pharmaceutical
Statistics describing a simulation study (with code) using `adaptr` to assess
the performance of adaptive clinical trials according to different
follow-up/data collection lags.
* [Effects of sceptical priors on the performance of adaptive clinical trials with binary outcomes](https://doi.org/10.1002/pst.2387) -
article in Pharmaceutical Statistics describing a simulation study (with code)
using `adaptr` to assess the performance of adaptive clinical trials according to different
sceptical priors.

## Installation

The easiest way is to install the package directly from CRAN:

```{r eval = FALSE}
install.packages("adaptr")
```

Alternatively, you can install the **development version** from GitHub - this
requires the *remotes* package to be installed. The development version may
contain additional features not yet available in the CRAN version, but may not
be stable or fully documented:

```{r eval = FALSE}
# install.packages("remotes") 
remotes::install_github("INCEPTdk/adaptr@dev")
```

## Usage and workflow overview

The central functionality of `adaptr` and the typical workflow are illustrated
here.

### Setup

First, the package is loaded and a cluster of parallel workers is initiated by
the `setup_cluster()` function to facilitate parallel computing:

```{r}
library(adaptr)

setup_cluster(2)
```

### Specify trial design

Set up a trial specification (defining the trial design and scenario) using
the general `setup_trial()` function, or one of the special case variants using
default priors `setup_trial_binom()` (for binary, binomially distributed
outcomes; used in this example) or `setup_trial_norm()` (for continuous,
normally distributed outcomes).

```{r}
# Setup a trial using a binary, binomially distributed, undesirable outcome
binom_trial <- setup_trial_binom(
  arms = c("Arm A", "Arm B", "Arm C"),
  # Scenario with identical outcomes in all arms
  true_ys = c(0.25, 0.25, 0.25),
  # Response-adaptive randomisation with minimum 20% allocation in all arms
  min_probs = rep(0.20, 3),
  # Number of patients with data available at each analysis
  data_looks = seq(from = 300, to = 2000, by = 100),
  # Number of patients randomised at each analysis (higher than the numbers
  # with data, except at last look, due to follow-up/data collection lag)
  randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000),
  # Stopping rules for inferiority/superiority not explicitly defined
  # Stop for equivalence at > 90% probability of differences < 5 %-points
  equivalence_prob = 0.9,
  equivalence_diff = 0.05
)

# Print trial specification
print(binom_trial, prob_digits = 3)
```

### Calibration

In the example trial specification, there are no true between-arm differences,
and stopping rules for inferiority and superiority are not explicitly defined.
This is intentional, as these stopping rules will be calibrated to obtain a
desired probability of stopping for superiority in the scenario with no
between-arm differences (corresponding to the Bayesian type 1 error rate). Trial
specifications do not necessarily have to be calibrated, and simulations can be
run directly using the `run_trials()` function covered below (or `run_trial()`
for a single simulation).

Calibration of a trial specification is done using the `calibrate_trial()`
function, which by default calibrates constant, symmetrical stopping rules
for inferiority and superiority (expecting a trial specification with
identical outcomes in each arm), but can be used to calibrate any parameter in a
trial specification towards any performance metric.

```{r}
# Calibrate the trial specification
calibrated_binom_trial <- calibrate_trial(
  trial_spec = binom_trial,
  n_rep = 1000, # 1000 simulations for each step (generally, more are recommended)
  base_seed = 4131, # Base random seed (for reproducible results)
  target = 0.05, # Target value for calibrated metric (default value)
  search_range = c(0.9, 1), # Search range for superiority stopping threshold
  tol = 0.01, # Tolerance range
  dir = -1 # Tolerance range only applies below target
)

# Print result (to check if calibration is successful)
calibrated_binom_trial
```

The calibration is successful - the calibrated, constant stopping threshold for
superiority is printed with the results (`r calibrated_binom_trial$best_x`) and
can be extracted using `calibrated_binom_trial$best_x`. Using the default
calibration functionality, the calibrated, constant stopping threshold for
inferiority is symmetrical, i.e., `1 - stopping threshold for superiority`
(`r 1 - calibrated_binom_trial$best_x`). The calibrated trial specification
may be extracted using `calibrated_binom_trial$best_trial_spec` and, if printed,
will also include the calibrated stopping thresholds.

Calibration results may be saved (and reloaded) by using the `path` argument, to
avoid unnecessary repeated simulations.
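
As a minimal sketch (the file name here is hypothetical), passing `path` makes
`calibrate_trial()` save its results to that file and reload them on later runs
with an unchanged specification:

```{r eval = FALSE}
# Hypothetical file name; results are written to the file on the first run and
# reloaded (instead of re-simulated) when re-run with an unchanged specification
calibrated_binom_trial <- calibrate_trial(
  trial_spec = binom_trial,
  n_rep = 1000,
  base_seed = 4131,
  path = "binom_trial_calibration.rds"
)
```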

### Summarising results

The results of the simulations using the calibrated trial specification
conducted during the calibration procedure may be extracted using
`calibrated_binom_trial$best_sims`. These results can be summarised with several
functions. Most of these functions support different 'selection strategies' for
simulations not ending with superiority, i.e., performance metrics can be
calculated assuming different arms would be used in clinical practice if no arm
is ultimately superior.

The `check_performance()` function summarises performance metrics in a tidy
`data.frame`, with uncertainty measures (bootstrapped confidence intervals) if
requested. Here, performance metrics are calculated considering the 'best' arm
(i.e., the one with the highest probability of being overall best) selected in
simulations not ending with superiority:

```{r}
# Calculate performance metrics with uncertainty measures
binom_trial_performance <- check_performance(
  calibrated_binom_trial$best_sims,
  select_strategy = "best",
  uncertainty = TRUE, # Calculate uncertainty measures
  n_boot = 1000, # 1000 bootstrap samples (typically, more are recommended)
  ci_width = 0.95, # 95% confidence intervals (default)
  boot_seed = "base" # Use same random seed for bootstrapping as for simulations
)

# Print results 
print(binom_trial_performance, digits = 2)
```
Similar results in `list` format (without uncertainty measures) can be obtained
using the `summary()` method, which comes with a `print()` method providing
formatted results:

```{r}
binom_trial_summary <- summary(
  calibrated_binom_trial$best_sims,
  select_strategy = "best"
)

print(binom_trial_summary)
```

Individual simulation results may be extracted in a tidy `data.frame` using
`extract_results()`.

Finally, the probabilities of different remaining arms and
their statuses (with uncertainty) at the last adaptive analysis can be
summarised using the `check_remaining_arms()` function.
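
As a brief sketch, both functions take the simulation object used above (the
selection strategy mirrors the earlier examples):

```{r eval = FALSE}
# Tidy data.frame with one row per simulation
binom_trial_results <- extract_results(
  calibrated_binom_trial$best_sims,
  select_strategy = "best"
)
head(binom_trial_results)

# Probabilities and statuses of the remaining arms at the last adaptive
# analysis, with uncertainty measures
check_remaining_arms(
  calibrated_binom_trial$best_sims,
  ci_width = 0.95
)
```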

### Visualising results

Several visualisation functions are included (all are optional, and all require
the `ggplot2` package to be installed).

Convergence and stability of one or more performance metrics may be visually
assessed using the `plot_convergence()` function:

```{r}
plot_convergence(
  calibrated_binom_trial$best_sims,
  metrics = c("size mean", "prob_superior", "prob_equivalence"),
  # select_strategy can be specified, but does not affect the chosen metrics
)
```

The empirical cumulative distribution functions for continuous performance
metrics may also be visualised:

```{r}
plot_metrics_ecdf(
  calibrated_binom_trial$best_sims, 
  metrics = "size"
)
```

The status probabilities for the overall trial (or for specific arms) according
to trial progress can be visualised using the `plot_status()` function:

```{r}
# Overall trial status probabilities
plot_status(
  calibrated_binom_trial$best_sims,
  x_value = "total n" # Total number of randomised patients at X-axis
)
```

Finally, various metrics may be summarised over the progress of one or multiple
trial simulations using the `plot_history()` function, which requires non-sparse
results (the `sparse` argument must be `FALSE` in `calibrate_trial()`,
`run_trials()`, or `run_trial()`, leading to additional results being saved).
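
A minimal sketch, re-running the simulations from the calibrated trial
specification with `sparse = FALSE` before plotting (argument values mirror
those used above):

```{r eval = FALSE}
# Re-run simulations with non-sparse results (larger objects/files)
calibrated_binom_trial_full <- run_trials(
  calibrated_binom_trial$best_trial_spec,
  n_rep = 1000,
  base_seed = 4131,
  sparse = FALSE
)

# Allocation probabilities in each arm over the progress of the simulated trials
plot_history(
  calibrated_binom_trial_full,
  x_value = "total n",
  y_value = "prob"
)
```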

### Use calibrated stopping thresholds in another scenario

The calibrated stopping thresholds (calibrated in a scenario with no between-arm
differences) may be used to run simulations with the same overall trial
specification, but according to a different scenario (i.e., with between-arm
differences present) to assess performance metrics (including the Bayesian
analogue of power).

First, a new trial specification is set up using the same settings as before,
except for between-arm differences and the calibrated stopping thresholds:

```{r}
binom_trial_calib_diff <- setup_trial_binom(
  arms = c("Arm A", "Arm B", "Arm C"),
  true_ys = c(0.25, 0.20, 0.30), # Different outcomes in the arms
  min_probs = rep(0.20, 3),
  data_looks = seq(from = 300, to = 2000, by = 100),
  randomised_at_looks = c(seq(from = 400, to = 2000, by = 100), 2000),
  # Stopping rules for inferiority/superiority explicitly defined
  # using the calibration results
  inferiority = 1 - calibrated_binom_trial$best_x,
  superiority = calibrated_binom_trial$best_x,
  equivalence_prob = 0.9,
  equivalence_diff = 0.05
)
```

Simulations using the trial specification with calibrated stopping thresholds
and differences present can then be conducted using the `run_trials()` function
and performance metrics calculated as above:

```{r}
binom_trial_diff_sims <- run_trials(
  binom_trial_calib_diff,
  n_rep = 1000, # 1000 simulations (generally, more are recommended)
  base_seed = 1234 # Reproducible results
)

check_performance(
  binom_trial_diff_sims,
  select_strategy = "best",
  uncertainty = TRUE,
  n_boot = 1000, # 1000 bootstrap samples (typically, more are recommended)
  ci_width = 0.95,
  boot_seed = "base"
)
```

Again, simulations may be saved and reloaded using the `path` argument.
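
As with the calibration results, a minimal sketch of this (the file name is
hypothetical):

```{r eval = FALSE}
# Hypothetical file name; simulations are saved on the first run and reloaded
# when re-run with an unchanged trial specification
binom_trial_diff_sims <- run_trials(
  binom_trial_calib_diff,
  n_rep = 1000,
  base_seed = 1234,
  path = "binom_trial_diff_sims.rds"
)
```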

Similarly, overall trial statuses for the scenario with differences can be
visualised:

```{r}
plot_status(binom_trial_diff_sims, x_value = "total n")
```

## Issues and enhancements

We use the [GitHub issue tracker](https://github.com/INCEPTdk/adaptr/issues) for
all bug/issue reports and proposals for enhancements. 

## Contributing

We welcome contributions directly to the code, both to improve performance and
to add new functionality. For the latter, please first explain and motivate the
proposal in an [issue](https://github.com/INCEPTdk/adaptr/issues).

Changes to the code base should follow these steps:

- [Fork](https://docs.github.com/en/get-started/quickstart/fork-a-repo) the repository
- [Make a branch](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-and-deleting-branches-within-your-repository) with an appropriate name in your fork
- Implement changes in your fork, make sure they pass `R CMD check` (with no
errors, warnings, or notes), and add a bullet at the top of `NEWS.md` with a short
description of the change, your GitHub handle, and the id of the pull request
implementing the change (check the `NEWS.md` file to see the formatting)
- Create a [pull request](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork) into the `dev` branch
of `adaptr`

## Citation

If you use the package, please consider citing it:

```{r}
citation(package = "adaptr")
```

Owner

  • Name: INCEPT
  • Login: INCEPTdk
  • Kind: organization
  • Email: contact@incept.dk
  • Location: Denmark

The Intensive Care Platform Trial research programme

JOSS Publication

adaptr: an R package for simulating and comparing adaptive clinical trials
Published: April 27, 2022
Volume 7, Issue 72, Page 4284
Authors
  • Anders Granholm (Department of Intensive Care, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark)
  • Aksel Karl Georg Jensen (Section of Biostatistics, Department of Public Health, University of Copenhagen, Copenhagen, Denmark)
  • Theis Lange (Section of Biostatistics, Department of Public Health, University of Copenhagen, Copenhagen, Denmark)
  • Benjamin Skov Kaas-Hansen (Department of Intensive Care, Copenhagen University Hospital - Rigshospitalet, Copenhagen, Denmark; Section of Biostatistics, Department of Public Health, University of Copenhagen, Copenhagen, Denmark)
Editor
  • Fabian Scheipl
Tags
R software adaptive trials clinical trials trial design comparison

GitHub Events

Total
  • Issues event: 1
  • Watch event: 5
  • Issue comment event: 1
Last Year
  • Issues event: 1
  • Watch event: 5
  • Issue comment event: 1

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 249
  • Total Committers: 2
  • Avg Commits per committer: 124.5
  • Development Distribution Score (DDS): 0.245
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • Anders Granholm (a****m@r****k): 188 commits
  • Benjamin Skov Kaas-Hansen (e****n@h****m): 61 commits

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 2
  • Total pull requests: 12
  • Average time to close issues: N/A
  • Average time to close pull requests: 1 day
  • Total issue authors: 2
  • Total pull request authors: 4
  • Average comments per issue: 0.5
  • Average comments per pull request: 0.17
  • Merged pull requests: 8
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 1.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • brockk (1)
  • agranholm (1)
Pull Request Authors
  • agranholm (7)
  • epiben (3)
  • hadley (1)
  • fabian-s (1)

Packages

  • Total packages: 1
  • Total downloads:
    • cran 330 last-month
  • Total docker downloads: 21,613
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 8
  • Total maintainers: 1
cran.r-project.org: adaptr

Adaptive Trial Simulator

  • Versions: 8
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 330 Last month
  • Docker Downloads: 21,613
Rankings
Forks count: 17.8%
Stargazers count: 24.2%
Average: 29.0%
Dependent packages count: 29.8%
Dependent repos count: 35.5%
Downloads: 37.6%
Maintainers (1)
Last synced: 6 months ago

Dependencies

DESCRIPTION cran
  • parallel * imports
  • stats * imports
  • utils * imports
  • covr * suggests
  • ggplot2 * suggests
  • knitr * suggests
  • rmarkdown * suggests
  • testthat * suggests
  • vdiffr * suggests
.github/workflows/R-CMD-check.yaml actions
  • actions/checkout v1 composite
  • r-lib/actions/check-r-package v2 composite
  • r-lib/actions/setup-pandoc v2 composite
  • r-lib/actions/setup-r v2 composite
  • r-lib/actions/setup-r-dependencies v2 composite
.github/workflows/downloads-history-plot.yaml actions
  • actions/checkout v2 composite
  • actions/upload-artifact v3 composite
  • r-lib/actions/setup-r v2 composite
.github/workflows/pkgdown.yaml actions
  • JamesIves/github-pages-deploy-action 4.1.4 composite
  • actions/checkout v2 composite
  • r-lib/actions/setup-pandoc v2 composite
  • r-lib/actions/setup-r v2 composite
  • r-lib/actions/setup-r-dependencies v2 composite
.github/workflows/test-coverage.yaml actions
  • actions/checkout v2 composite
  • r-lib/actions/setup-r v2 composite
  • r-lib/actions/setup-r-dependencies v2 composite