Science Score: 49.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ✓ DOI references (found 3 DOI reference(s) in README)
- ✓ Academic publication links (links to: zenodo.org)
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity (21.3%) to scientific vocabulary)
Keywords
archiving
database
dbi
peer-reviewed
r
r-package
rstats
Keywords from Contributors
data-store
git-lfs
rmarkdown
genome
reproducibility
crypto-currency-exchanges
pandoc
literate-programming
taxonomy
weather-data
Last synced: 6 months ago
Repository
Archive and unarchive databases as flat text files
Basic Info
- Host: GitHub
- Owner: ropensci
- License: other
- Language: R
- Default Branch: master
- Homepage: https://docs.ropensci.org/arkdb
- Size: 1.08 MB
Statistics
- Stars: 79
- Watchers: 9
- Forks: 7
- Open Issues: 0
- Releases: 8
Topics
archiving
database
dbi
peer-reviewed
r
r-package
rstats
Created over 7 years ago · Last pushed 10 months ago
Metadata Files
Readme
Changelog
License
Code of conduct
Codemeta
README.Rmd
---
output: github_document
---

# arkdb

[Actions](https://github.com/ropensci/arkdb/actions) · [Travis CI](https://app.travis-ci.com/ropensci/arkdb) · [Codecov](https://app.codecov.io/github/ropensci/arkdb?branch=master) · [CRAN](https://cran.r-project.org/package=arkdb) · [rOpenSci Software Review](https://github.com/ropensci/software-review/issues/224) · [Lifecycle](https://lifecycle.r-lib.org/articles/stages.html) · [CRAN](https://CRAN.R-project.org/package=arkdb) · [DOI](https://doi.org/10.5281/zenodo.1343943)

🗞️ ***PLEASE NOTE*** 🗞️

It is now easy to accomplish `arkdb`'s tasks of moving between larger-than-RAM csv/parquet/etc. flat files and SQL databases using `duckdb`. Even better, `duckdb` supports larger-than-RAM DBI operations directly on flat files without importing them at all, offers better compatibility with a wide range of SQL databases (see its extensions), and better performance. See the R package [`duckdbfs`](https://github.com/cboettig/duckdbfs) for an easy way to use `duckdb` from R.

---

```{r, echo = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "README-"
)
```

The goal of `arkdb` is to provide a convenient way to move data from large compressed text files (tsv, csv, etc.) into any DBI-compliant database connection (e.g. MySQL, Postgres, SQLite; see [DBI](https://solutions.rstudio.com/db/r-packages/DBI/)), and to move tables out of such databases into text files. The key feature of `arkdb` is that files are moved between databases and text files in chunks of a fixed size, allowing the package functions to work with tables that would be much too large to read into memory all at once. There is also functionality for filtering and applying transformations to data as it is extracted from the database. The `arkdb` package is easily extended with custom read and write methods, allowing you to dictate your own output formats.
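For instance, a custom format can be sketched as a hypothetical newline-delimited-JSON method. The `streamable_table()` constructor and the read/write signatures below follow the pattern used by the built-in methods, but this is an untested sketch: check `?streamable_table` and `R/streamable_table.R` for the exact interface expected by your version of `arkdb`.

```{r, eval = FALSE}
library(arkdb)
library(jsonlite)

# Hypothetical custom format: newline-delimited JSON (.ndjson).
# The read function turns a file into a data.frame; the write function
# appends one chunk to a file (omit_header only matters for header-based
# formats such as csv/tsv).
streamable_ndjson <- streamable_table(
  read = function(file, ...) {
    jsonlite::stream_in(base::file(file), verbose = FALSE)
  },
  write = function(x, path, omit_header = FALSE) {
    jsonlite::stream_out(x, base::file(path, open = "a"), verbose = FALSE)
  },
  extension = "ndjson"
)

# Use it like any built-in method:
# ark(db, dir, streamable_table = streamable_ndjson, lines = 50000)
```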
See `R/streamable_table.R` for examples that include:

- Base c/tsv
- Apache Arrow's parquet
- The `readr` package for c/tsv

## Links

- A more detailed introduction to package design and use can be found in the package [Vignette](https://docs.ropensci.org/arkdb/articles/arkdb.html)
- [Online versions of package documentation](https://docs.ropensci.org/arkdb/)

## Installation

You can install arkdb from GitHub with:

```{r gh-installation, eval = FALSE}
# install.packages("devtools")
devtools::install_github("cboettig/arkdb")
```

# Basic use

```{r message = FALSE}
library(arkdb)

# additional libraries just for this demo
library(dbplyr)
library(dplyr)
library(fs)
```

## Creating an archive of a database

Consider the `nycflights` database in SQLite:

```{r example}
tmp <- tempdir() # Or can be your working directory, "."
db <- dbplyr::nycflights13_sqlite(tmp)
```

Create an archive of the database:

```{r}
dir <- fs::dir_create(fs::path(tmp, "nycflights"))
ark(db, dir, lines = 50000)
```

## Unarchive

Import a list of compressed tabular files (i.e. `*.csv.bz2`) into a local SQLite database:

```{r}
files <- fs::dir_ls(dir)
new_db <- DBI::dbConnect(RSQLite::SQLite(), fs::path(tmp, "local.sqlite"))
unark(files, new_db, lines = 50000)
```

```{r include=FALSE}
disconnect <- function(db) {
  ## Cleanup
  if (inherits(db, "DBIConnection")) {
    DBI::dbDisconnect(db)
  } else {
    DBI::dbDisconnect(db$con)
  }
}
DBI::dbDisconnect(db)
DBI::dbDisconnect(new_db)
codemeta::write_codemeta()
```

## Using filters

This package can also be used to generate slices of data that are required for analytical or operational purposes. In the example below we archive to disk only the flight data that occurred in the month of December. It is recommended to use filters on a single table at a time.
```{r, eval=FALSE}
ark(db, dir,
    lines = 50000,
    tables = "flights",
    filter_statement = "WHERE month = 12")
```

## Using callbacks

It is possible to use a callback to perform just-in-time data transformations before `ark` writes your data object to disk in your preferred format. In the example below, we write a simple transformation to convert the flights data `arr_delay` field from minutes to hours. It is recommended to use callbacks on a single table at a time. A callback function can be anything you can imagine, so long as it returns a data.frame that can be written to disk.

```{r, eval=FALSE}
mins_to_hours <- function(data) {
  data$arr_delay <- data$arr_delay / 60
  data
}

ark(db, dir,
    lines = 50000,
    tables = "flights",
    callback = mins_to_hours)
```

## ark() in parallel

There are two basic strategies for using `ark` in parallel. One is to loop over the tables, running `ark` on each table in parallel. The other, introduced in 0.0.15, is the "window-parallel" method, which loops over chunks of a single table; this is particularly useful if your tables are very large and can speed up the process significantly. The two strategies can also be combined.
Note: `window-parallel` currently only works in conjunction with `streamable_parquet`.

```{r, eval = FALSE}
# Strategy 1: Parallel over tables
library(arkdb)
library(future.apply)
plan(multisession)

# Any streamable_table method is acceptable
future_lapply(vector_of_tables, function(x) ark(db, dir, lines, tables = x))

# Strategy 2: Parallel over chunks of a table
library(arkdb)
library(future.apply)
plan(multisession)

ark(
  db, dir,
  streamable_table = streamable_parquet(), # required for window-parallel
  lines = 50000,
  tables = "flights",
  method = "window-parallel"
)

# Strategy 3: Parallel over tables and chunks of tables
library(arkdb)
library(future.apply)

# 16-core machine, for example
plan(list(tweak(multisession, workers = 4), tweak(multisession, workers = 4)))

# 4 tables at a time, 4 workers per table
future_lapply(vector_of_tables, function(x) {
  ark(
    db, dir,
    streamable_table = streamable_parquet(), # required for window-parallel
    lines = 50000,
    tables = x,
    method = "window-parallel")
})
```

## ETLs with arkdb

Given its ability to filter and apply callbacks, the `arkdb` package can also be used to build ETL pipelines involving text archives or databases. In the example below, we leverage `duckdb` to read a fictional folder of files by US state, filter by `var_filtered`, apply a callback transformation `transform_fun` to `var_transformed`, save as parquet, and then load the folder of parquet files for analysis with Apache Arrow.
```{r, eval = FALSE}
library(arrow)
library(duckdb)

db <- dbConnect(duckdb::duckdb())

transform_fun <- function(data) {
  data$var_transformed <- sqrt(data$var_transformed)
  data
}

for (state in c("DC", state.abb)) {
  path <- paste0("path/to/archives/", state, ".gz")
  ark(
    db,
    dir = paste0("output/", state),
    streamable_table = streamable_parquet(), # parquet files of `lines` rows each
    lines = 100000,
    # See: https://duckdb.org/docs/data/csv
    tables = sprintf("read_csv_auto('%s')", path),
    compress = "none", # Compression is meaningless for parquet as it's already compressed
    overwrite = TRUE,
    filenames = state, # Overload the table name
    filter_statement = "WHERE var_filtered = 1",
    callback = transform_fun
  )
}

# The result is trivial to read in with arrow
ds <- open_dataset("output", partitioning = "state")
```

-----

Please note that this project is released with a [Contributor Code of Conduct](https://ropensci.org/code-of-conduct/). By participating in this project you agree to abide by its terms.
Owner
- Name: rOpenSci
- Login: ropensci
- Kind: organization
- Email: info@ropensci.org
- Location: Berkeley, CA
- Website: https://ropensci.org/
- Twitter: rOpenSci
- Repositories: 307
- Profile: https://github.com/ropensci
CodeMeta (codemeta.json)
{
"@context": "https://doi.org/10.5063/schema/codemeta-2.0",
"@type": "SoftwareSourceCode",
"identifier": "arkdb",
"description": "Flat text files provide a robust, compressible, and portable way to store tables from databases. This package provides convenient functions for exporting tables from relational database connections into compressed text files and streaming those text files back into a database without requiring the whole table to fit in working memory.",
"name": "arkdb: Archive and Unarchive Databases Using Flat Files",
"codeRepository": "https://github.com/ropensci/arkdb",
"issueTracker": "https://github.com/ropensci/arkdb/issues",
"license": "https://spdx.org/licenses/MIT",
"version": "0.0.15",
"programmingLanguage": {
"@type": "ComputerLanguage",
"name": "R",
"url": "https://r-project.org"
},
"runtimePlatform": "R version 4.2.1 (2022-06-23)",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"author": [
{
"@type": "Person",
"givenName": "Carl",
"familyName": "Boettiger",
"email": "cboettig@gmail.com",
"@id": "https://orcid.org/0000-0002-1642-628X"
}
],
"contributor": [
{
"@type": "Person",
"givenName": "Richard",
"familyName": "FitzJohn"
},
{
"@type": "Person",
"givenName": "Brandon",
"familyName": "Bertelsen",
"email": "brandon@bertelsen.ca"
}
],
"copyrightHolder": [
{
"@type": "Person",
"givenName": "Carl",
"familyName": "Boettiger",
"email": "cboettig@gmail.com",
"@id": "https://orcid.org/0000-0002-1642-628X"
}
],
"maintainer": [
{
"@type": "Person",
"givenName": "Carl",
"familyName": "Boettiger",
"email": "cboettig@gmail.com",
"@id": "https://orcid.org/0000-0002-1642-628X"
}
],
"softwareSuggestions": [
{
"@type": "SoftwareApplication",
"identifier": "arrow",
"name": "arrow",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=arrow"
},
{
"@type": "SoftwareApplication",
"identifier": "R.utils",
"name": "R.utils",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=R.utils"
},
{
"@type": "SoftwareApplication",
"identifier": "progress",
"name": "progress",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=progress"
},
{
"@type": "SoftwareApplication",
"identifier": "readr",
"name": "readr",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=readr"
},
{
"@type": "SoftwareApplication",
"identifier": "spelling",
"name": "spelling",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=spelling"
},
{
"@type": "SoftwareApplication",
"identifier": "dplyr",
"name": "dplyr",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=dplyr"
},
{
"@type": "SoftwareApplication",
"identifier": "dbplyr",
"name": "dbplyr",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=dbplyr"
},
{
"@type": "SoftwareApplication",
"identifier": "nycflights13",
"name": "nycflights13",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=nycflights13"
},
{
"@type": "SoftwareApplication",
"identifier": "testthat",
"name": "testthat",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=testthat"
},
{
"@type": "SoftwareApplication",
"identifier": "knitr",
"name": "knitr",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=knitr"
},
{
"@type": "SoftwareApplication",
"identifier": "covr",
"name": "covr",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=covr"
},
{
"@type": "SoftwareApplication",
"identifier": "fs",
"name": "fs",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=fs"
},
{
"@type": "SoftwareApplication",
"identifier": "rmarkdown",
"name": "rmarkdown",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=rmarkdown"
},
{
"@type": "SoftwareApplication",
"identifier": "RSQLite",
"name": "RSQLite",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=RSQLite"
},
{
"@type": "SoftwareApplication",
"identifier": "duckdb",
"name": "duckdb",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=duckdb"
},
{
"@type": "SoftwareApplication",
"identifier": "vroom",
"name": "vroom",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=vroom"
},
{
"@type": "SoftwareApplication",
"identifier": "utf8",
"name": "utf8",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=utf8"
},
{
"@type": "SoftwareApplication",
"identifier": "future.apply",
"name": "future.apply",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=future.apply"
}
],
"softwareRequirements": {
"1": {
"@type": "SoftwareApplication",
"identifier": "R",
"name": "R",
"version": ">= 4.0"
},
"2": {
"@type": "SoftwareApplication",
"identifier": "DBI",
"name": "DBI",
"provider": {
"@id": "https://cran.r-project.org",
"@type": "Organization",
"name": "Comprehensive R Archive Network (CRAN)",
"url": "https://cran.r-project.org"
},
"sameAs": "https://CRAN.R-project.org/package=DBI"
},
"3": {
"@type": "SoftwareApplication",
"identifier": "tools",
"name": "tools"
},
"4": {
"@type": "SoftwareApplication",
"identifier": "utils",
"name": "utils"
},
"SystemRequirements": null
},
"fileSize": "187.124KB"
}
GitHub Events
Total
- Issues event: 6
- Watch event: 2
- Issue comment event: 5
- Push event: 2
- Fork event: 1
Last Year
- Issues event: 6
- Watch event: 2
- Issue comment event: 5
- Push event: 2
- Fork event: 1
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Carl Boettiger | c****g@g****m | 166 |
| Brandon Bertelsen | b****n@b****a | 42 |
| Carl Boettiger | c****g@g****m | 5 |
| Noam Ross | r****s@e****g | 2 |
| olivroy | 5****y | 1 |
| Noam Ross | n****s@g****m | 1 |
| Jeroen Ooms | j****s@g****m | 1 |
| Hadley Wickham | h****m@g****m | 1 |
| ropenscibot | m****t@g****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 29
- Total pull requests: 29
- Average time to close issues: about 1 year
- Average time to close pull requests: 29 days
- Total issue authors: 11
- Total pull request authors: 6
- Average comments per issue: 3.0
- Average comments per pull request: 1.28
- Merged pull requests: 23
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 1
- Pull requests: 0
- Average time to close issues: about 19 hours
- Average time to close pull requests: N/A
- Issue authors: 1
- Pull request authors: 0
- Average comments per issue: 4.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- cboettig (10)
- 1beb (6)
- GitHunter0 (3)
- noamross (3)
- khondula (1)
- talegari (1)
- HenrikBengtsson (1)
- frank-stat (1)
- VladPerervenko (1)
- krlmlr (1)
- spkaluzny (1)
Pull Request Authors
- cboettig (17)
- 1beb (6)
- noamross (3)
- hadley (1)
- richfitz (1)
- olivroy (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: cran 453 last-month
- Total docker downloads: 41,971
- Total dependent packages: 2
- Total dependent repositories: 2
- Total versions: 17
- Total maintainers: 1
cran.r-project.org: arkdb
Archive and Unarchive Databases Using Flat Files
- Homepage: https://github.com/ropensci/arkdb
- Documentation: http://cran.r-project.org/web/packages/arkdb/arkdb.pdf
- License: MIT + file LICENSE
- Latest release: 0.0.18 (published about 2 years ago)
Rankings
Docker downloads count: 0.6%
Stargazers count: 4.7%
Forks count: 9.6%
Average: 11.5%
Dependent packages count: 13.7%
Dependent repos count: 19.2%
Downloads: 21.3%
Maintainers (1)
Last synced: 6 months ago
Dependencies
DESCRIPTION
cran
- R >= 4.0 depends
- DBI * imports
- tools * imports
- utils * imports
- R.utils * suggests
- RSQLite * suggests
- arrow * suggests
- covr * suggests
- dbplyr * suggests
- dplyr * suggests
- duckdb * suggests
- fs * suggests
- future.apply * suggests
- knitr * suggests
- nycflights13 * suggests
- progress * suggests
- readr * suggests
- rmarkdown * suggests
- spelling * suggests
- testthat * suggests
- utf8 * suggests
- vroom * suggests
.github/workflows/R-CMD-check.yaml
actions
- actions/cache v1 composite
- actions/checkout v2 composite
- actions/upload-artifact main composite
- r-lib/actions/setup-pandoc v2-branch composite
- r-lib/actions/setup-r v2-branch composite