Recent Releases of arkdb
arkdb 0.0.14
- Patch to the test suite for Solaris. The `arrow` package installs on Solaris, but its functions do not actually run correctly, since the C++ libraries have not been set up properly on that platform.
arkdb 0.0.13
- Added ability to name output files directly.
- Added a warning when users specify compression for parquet files.
- Added callback functionality to the `ark()` function, allowing users to perform transformations or recodes before chunked data.frames are saved to disk.
- Added the ability to filter databases by allowing users to specify a "WHERE" clause.
- Added parquet as a `streamable_table` format, allowing users to `ark()` to parquet instead of a text format.
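The callback and filtering features above might be combined as in the sketch below. The argument names `callback` and `filter_statement` are assumptions based on these release notes, not verified signatures, and the database file is hypothetical.

```r
library(arkdb)
library(DBI)

db <- DBI::dbConnect(RSQLite::SQLite(), "local.sqlite")

ark(db, dir = "archive", lines = 50000L,
    # recode each chunked data.frame before it is written to disk
    callback = function(d) transform(d, year = as.integer(year)),
    # export only the rows matching a WHERE clause
    filter_statement = "WHERE year >= 2010")

DBI::dbDisconnect(db)
```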
Published by cboettig over 4 years ago
arkdb: Archive and Unarchive Databases Using Flat Files
arkdb 0.0.11
- Made connection caching opt-out instead of applying only to read-only connections. This allows the cache to work on read-write connections by default. It also avoids the condition of a connection being garbage-collected when functions call `local_db()` internally.
arkdb 0.0.10
- Better handling of read-only vs read-write connections. Only read-only connections are cached.
- Includes optional support for MonetDBLite.
arkdb 0.0.8
- Another bugfix for the dplyr 2.0.0 release.
arkdb 0.0.7
- Bugfix for the upcoming dplyr 2.0.0 release.
arkdb 0.0.6
- Support `vroom` as an opt-in streamable table.
- Export `process_chunks()`.
- Add a mechanism to attempt a bulk importer, when available (#27).
- Bugfix for the case when text contains `#` characters in the base parser (#28).
- Lighten core dependencies. Fully recursive dependencies now include only 4 non-base packages, as `progress` is now optional.
- Use "magic numbers" instead of file extensions to guess compression type. (NOTE: requires that the file is local and not a URL.)
- Now that `duckdb` is on CRAN and `MonetDBLite` isn't, drop built-in support for `MonetDBLite` in favor of `duckdb` alone.
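The newly exported `process_chunks()` applies a function to each chunk of a (possibly compressed) text file without loading the whole file into memory. The sketch below assumes this signature from the release notes; the input file name is hypothetical.

```r
library(arkdb)

# stream a compressed tsv in 50,000-line chunks, reporting chunk sizes
process_chunks("flights.tsv.bz2",
               process_fn = function(chunk) message(nrow(chunk), " rows"),
               lines = 50000L)
```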
Published by cboettig almost 5 years ago
The goal of arkdb is to provide a convenient way to move data from large compressed text files (tsv, csv, etc.) into any DBI-compliant database connection (e.g. MySQL, Postgres, SQLite; see DBI), and move tables out of such databases into text files. The key feature of arkdb is that files are moved between databases and text files in chunks of a fixed size, allowing the package functions to work with tables that would be much too large to read into memory all at once.
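A minimal round trip under this description might look as follows, assuming a local SQLite database via RSQLite (any DBI backend should work the same way); paths and the `overwrite` setting are illustrative.

```r
library(arkdb)
library(DBI)

dir <- file.path(tempdir(), "archive")
dir.create(dir, showWarnings = FALSE)

db <- DBI::dbConnect(RSQLite::SQLite(), file.path(tempdir(), "arkdb.sqlite"))
DBI::dbWriteTable(db, "mtcars", mtcars)

# database -> compressed text files, streamed in 50,000-line chunks
ark(db, dir, lines = 50000L)

# compressed text files -> database tables, chunk by chunk
files <- list.files(dir, pattern = "tsv.bz2$", full.names = TRUE)
unark(files, db, lines = 50000L, overwrite = TRUE)

DBI::dbDisconnect(db)
```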
v0.0.5 Changes
- `ark()`'s default `keep-open` method would cut off header names for Postgres connections (due to variation in the behavior of SQL queries with `LIMIT 0`). The issue is now resolved by accessing the header in a more robust, general way.
Published by cboettig over 7 years ago
v0.0.4 Changes
- `unark()` will strip out non-compliant characters in table names by default.
- `unark()` gains the optional argument `tablenames`, allowing the user to specify the corresponding table names manually, rather than enforcing that they correspond with the incoming file names. #18
- `unark()` gains the argument `encoding`, allowing users to directly set the encoding of incoming files. Previously this could only be set via `options(encoding)`, which will still work as well. See the `FAO.R` example in `examples` for an illustration.
- `unark()` will now attempt to guess which streaming parser to use (e.g. `csv` or `tsv`) based on the file extension pattern, rather than defaulting to a `tsv` parser. (`ark()` still defaults to exporting in the more portable `tsv` format.)
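The `tablenames` and `encoding` options above might be used together as in this sketch; the argument names follow these notes, and the input file names are hypothetical.

```r
library(arkdb)
library(DBI)

db <- DBI::dbConnect(RSQLite::SQLite(), "local.sqlite")

unark(c("2010-obs.csv.gz", "2011-obs.csv.gz"), db,
      tablenames = c("obs_2010", "obs_2011"),  # set table names manually (#18)
      encoding = "latin1")                     # instead of options(encoding=)

DBI::dbDisconnect(db)
```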
Published by cboettig over 7 years ago
v0.0.3 Changes
- Remove dependency on `utils::askYesNo` for backward compatibility, #17.
Published by cboettig over 7 years ago
v0.0.2 Changes
- Initial CRAN release
- Ensure the suggested dependency MonetDBLite is available before running unit test using it.
Published by cboettig over 7 years ago