Rasusa
Rasusa: Randomly subsample sequencing reads to a specified coverage - Published in JOSS (2022)
Science Score: 93.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (Found codemeta.json file)
- ✓ .zenodo.json file (Found .zenodo.json file)
- ✓ DOI references (Found 18 DOI reference(s) in README and JOSS metadata)
- ✓ Academic publication links (Links to: joss.theoj.org)
- ○ Committers with academic emails
- ○ Institutional organization owner
- ✓ JOSS paper metadata (Published in Journal of Open Source Software)
Keywords
Keywords from Contributors
Repository
Randomly subsample sequencing reads or alignments
Basic Info
- Host: GitHub
- Owner: mbhall88
- License: mit
- Language: Rust
- Default Branch: main
- Homepage: https://doi.org/10.21105/joss.03941
- Size: 13.5 MB
Statistics
- Stars: 237
- Watchers: 5
- Forks: 19
- Open Issues: 2
- Releases: 16
Topics
Metadata Files
README.md

Randomly subsample sequencing reads or alignments.
Hall, M. B., (2022). Rasusa: Randomly subsample sequencing reads to a specified coverage. Journal of Open Source Software, 7(69), 3941, https://doi.org/10.21105/joss.03941
Table of Contents
- Table of Contents
- Motivation
- Install
- Usage
- Benchmark
- Contributing
- Citing
Motivation
I couldn't find a tool for subsampling fastq reads that met my requirements. All the
strategies I could find fell short: they either only accepted a number or percentage of
reads to subsample to or, if they did subsample to a coverage, they assumed all reads are
the same length (i.e., Illumina). As I mostly work with long-read data, this posed a problem
if I wanted to subsample a file to a certain coverage, as read length was never taken
into account. rasusa addresses this shortcoming.
A workaround I had been using for a while was filtlong. It was
simple enough: I just worked out the number of bases I needed to achieve a (theoretical)
coverage for my sample. Say I have a fastq from an E. coli sample with 5 million reads
and I want to subset it to 50x coverage. I just need to multiply the expected size of
the sample's genome, 4.6 million base pairs, by the coverage I want and I have my target
bases - 230 million base pairs. In filtlong, I can do the following
```sh
target=230000000
filtlong --target_bases "$target" reads.fq > reads.50x.fq
```
However, this is technically not the intended function of filtlong; it's a quality
filtering tool. What you get in the end is a subset of the "highest scoring"
reads at a (theoretical) coverage of 50x. Depending on your circumstances, this might be
what you want. However, you bias yourself towards the best/longest reads in the dataset
- not a fair representation of your dataset as a whole. There is also the possibility
of favouring regions of the genome that produce longer/higher quality reads. De Maio
et al. even found that by randomly subsampling nanopore reads you achieve
better genome assemblies than if you had filtered.
So, depending on your circumstances, an unbiased subsample of your reads might be what
you need. And if this is the case, rasusa has you covered.
Install
tl;dr: precompiled binary
```shell
curl -sSL rasusa.mbh.sh | sh
# or with wget
wget -nv -O - rasusa.mbh.sh | sh
```
You can also pass options to the script like so
```
$ curl -sSL rasusa.mbh.sh | sh -s -- --help
install.sh [option]

Fetch and install the latest version of rasusa, if rasusa is already installed
it will be updated to the latest version.

Options
    -V, --verbose
        Enable verbose output for the installer
    -f, -y, --force, --yes
        Skip the confirmation prompt during installation
    -p, --platform
        Override the platform identified by the installer [default: apple-darwin]
    -b, --bin-dir
        Override the bin installation directory [default: /usr/local/bin]
    -a, --arch
        Override the architecture identified by the installer [default: x86_64]
    -B, --base-url
        Override the base URL used for downloading releases [default: https://github.com/mbhall88/rasusa/releases]
    -h, --help
        Display this help message
```
cargo
Prerequisite: rust toolchain (min. v1.74.1)
```sh
cargo install rasusa
```
conda
Prerequisite: conda (and bioconda channel correctly set up)
```sh
conda install rasusa
```
Thank you to Devon Ryan (@dpryan79) for help debugging the bioconda recipe.
Container
Docker images are hosted at quay.io. For versions 0.3.0 and earlier, the images were hosted on Dockerhub.
singularity
Prerequisite: singularity
```sh
URI="docker://quay.io/mbhall88/rasusa"
singularity exec "$URI" rasusa --help
```
The above will use the latest version. If you want to specify a version then use a tag (or commit) like so.
```sh
VERSION="0.8.0"
URI="docker://quay.io/mbhall88/rasusa:${VERSION}"
```
docker
Prerequisite: docker
```sh
docker pull quay.io/mbhall88/rasusa
docker run quay.io/mbhall88/rasusa --help
```
You can find all the available tags on the quay.io repository. Note: versions prior to 0.4.0 were housed on Docker Hub.
homebrew
Prerequisite: homebrew
```sh
brew install rasusa
```
Build locally
Prerequisite: rust toolchain
```sh
git clone https://github.com/mbhall88/rasusa.git
cd rasusa
cargo build --release
target/release/rasusa --help
# if you want to check everything is working ok
cargo test --all
```
Usage
Basic usage - reads
Subsample fastq reads
```sh
rasusa reads --coverage 30 --genome-size 4.6mb in.fq
```
The above command will output the subsampled file to stdout.
Or, if you have paired Illumina
```sh
rasusa reads --coverage 30 --genome-size 4g -o out.r1.fq -o out.r2.fq r1.fq r2.fq
```
For more details on the above options, and additional options, see below.
Basic usage - alignments
Subsample alignments
```sh
rasusa aln --coverage 30 in.bam | samtools sort -o out.bam
```
This will subsample each position in the alignment to 30x coverage.
Required parameters
There are three required parameters when running rasusa reads.
Input
This positional argument specifies the file(s) containing the reads or alignments you would like to subsample. The
file(s) must be valid fasta or fastq format for the reads command and can be compressed (with a tool such as
gzip). For the aln command, the file must be a valid indexed SAM/BAM file.
If two files are passed to reads, rasusa will assume they are paired-end reads.
Bash wizard tip 🧙: Let globs do the work for you: `r*.fq`
Coverage
-c, --coverage
Not required if `--bases` is present for `reads`.
This option is used to determine the minimum coverage to subsample the reads to. For the reads command, it can
be specified as an integer (100), a decimal/float (100.7), or either of the previous
suffixed with an 'x' (100x). For the aln command, it is an integer only.
Due to the method for determining how many bases are required to achieve the
desired coverage in the reads command, the actual coverage could end up slightly higher
than requested - for example, if the last included read is very long. The log messages
should inform you of the actual coverage in the end.
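To illustrate (the file names and genome size here are just placeholders), the following invocations should all request the same 100x coverage:

```sh
# equivalent ways of asking for 100x coverage of a 4.6 Mbp genome
rasusa reads -c 100 -g 4.6mb in.fq > subsampled.fq
rasusa reads -c 100.0 -g 4.6mb in.fq > subsampled.fq
rasusa reads -c 100x -g 4.6mb in.fq > subsampled.fq
```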
For the aln command, the coverage is the minimum number of reads that should be present at each position in the
alignment. If a position has fewer than the requested number of reads, all reads at that position will be included. In
addition, there will be (small) regions with more than the requested number of reads - usually localised to where the
alignment of a read ends. This is
because when the alignment of a selected read ends, the next read is selected based on it spanning the end of the
previous alignment.
When selecting this next alignment, we give preference to alignments whose start is closest to the end of the previous
alignment, ensuring minimal overlap with it. See the below screenshot from IGV for a visual example.

Genome size
-g, --genome-size
Not valid for `aln`. Not required if `--bases` is present for `reads`.
The genome size of the input is also required. It is used to determine how many bases
are necessary to achieve the desired coverage. This can, of course, be as precise or
rough as you like.
Genome size can be passed in many ways. As a plain old integer (1600), or with a metric
suffix (1.6kb). All metric suffixes can have an optional 'b' suffix and be lower, upper,
or mixed case. So 'Kb', 'kb' and 'k' would all be inferred as 'kilo'. Valid metric
suffixes include:
- Base (b) - multiplies by 1
- Kilo (k) - multiplies by 1,000
- Mega (m) - multiplies by 1,000,000
- Giga (g) - multiplies by 1,000,000,000
- Tera (t) - multiplies by 1,000,000,000,000
Alternatively, a FASTA/Q index file can be given and the genome size will be set to the sum of all reference sequences in it.
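As a rough sketch (file names are placeholders), these should all be equivalent ways of specifying a 4,600,000 bp genome, with the last one taking the size from a fasta index:

```sh
# plain integer, metric suffix (any case, optional 'b'), or a fasta index
rasusa reads -c 30 -g 4600000 in.fq > out.fq
rasusa reads -c 30 -g 4.6mb in.fq > out.fq
rasusa reads -c 30 -g 4.6M in.fq > out.fq
rasusa reads -c 30 -g ref.fa.fai in.fq > out.fq
```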
> [!TIP]
> If you want to use `rasusa` in a scenario where you don't know what the genome size is, such as in an automated pipeline that can take in any kind of organism, you could estimate the genome size with something like `lrge` (#shamelessplug).
>
> ```
> $ gsize=$(lrge reads.fq)
> $ rasusa reads -g $gsize -c 10 reads.fq
> ```
>
> `lrge` is designed for long reads. If you want to estimate the genome size from short reads, you could use something like Mash or GenomeScope2. See the `lrge` docs for examples of how Mash/GenomeScope2 can be used for this task.
Optional parameters
Output
-o, --output
reads
NOTE: This parameter is required if passing paired Illumina data to reads.
By default, rasusa will output the subsampled file to stdout (if one file is given).
If you would prefer to specify an output file path, then use this option.
Output for Illumina paired files must be specified using --output twice - -o out.r1.fq -o out.r2.fq
The ordering of the output files is assumed to be the same as the input.
Note: The output will always be in the same format as the input. You cannot pass fastq
as input and ask for fasta as output.
rasusa reads will also attempt to automatically infer whether compression of the output
file(s) is required. It does this by detecting any of the supported extensions:
- `.gz`: will compress the output with `gzip`
- `.bz` or `.bz2`: will compress the output with `bzip2`
- `.lzma`: will compress the output with the `xz` LZMA algorithm
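For example (output names are placeholders), giving an output path ending in `.gz` should produce gzip-compressed output without any extra flags:

```sh
# single file: gzip compression inferred from the .gz extension
rasusa reads -c 30 -g 4.6mb -o out.fq.gz in.fq
# paired input: both outputs will be gzip-compressed
rasusa reads -c 30 -g 4.6mb -o out.r1.fq.gz -o out.r2.fq.gz r1.fq r2.fq
```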
aln
For the aln command, the output file format will be the same as the input if writing to stdout, otherwise it will be
inferred from the file extension.
Note: the output alignment will most likely not be sorted. You can use samtools sort to sort the output. e.g.,
```sh
rasusa aln -c 5 in.bam | samtools sort -o out.bam
```
Output compression/format
-O, --output-type
reads
Use this option to manually set the compression algorithm to use for the output file(s). It will override any format automatically detected from the output path.
Valid options are: `u`: uncompressed, `b`: Bzip2, `g`: Gzip, `l`: Lzma, `x`: Xz (LZMA), `z`: Zstd.
aln
Use this option to manually set the output file format. By default, the same format as the input will be used, or the
format will be guessed from the --output path extension if given. Valid options are:
- `b` or `bam`: BAM
- `c` or `cram`: CRAM
- `s` or `sam`: SAM
Note: all values to this option are case insensitive.
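As a sketch (file names are placeholders), you could force SAM output to stdout from a BAM input, or spell the format out in full:

```sh
# write SAM to stdout even though the input is BAM
rasusa aln -c 10 -O sam in.bam > out.sam
# values are case insensitive, so -O BAM works too
rasusa aln -c 10 -O BAM -o out.bam in.bam
```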
Compression level
-l, --compress-level
Compression level to use if compressing the output. By default, the default level for the chosen compression format is used.
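For instance (file names are placeholders), to ask for gzip level 9 on a compressed output:

```sh
rasusa reads -c 30 -g 4.6mb -l 9 -o out.fq.gz in.fq
```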
Target number of bases
-b, --bases
`reads` only
Explicitly set the number of bases required in the subsample. This option takes the number in the same format as genome size.
Note: if this option is given, genome size and coverage are not required, or ignored if they are provided.
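For example (the file name is a placeholder), to take roughly 230 megabases of reads without specifying a genome size or coverage:

```sh
rasusa reads --bases 230mb in.fq > out.fq
```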
Number of reads
-n, --num
`reads` only
Explicitly set the number of reads in the subsample. This option takes the number in the same format as genome size.
When providing paired reads as input, this option will sample this many total read
pairs. For example, when passing -n 20 r1.fq r2.fq, the two output files will have
20 reads each, and the read ids will be the same in both.
Note: if this option is given, genome size and coverage are not required.
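A quick sketch (file names are placeholders):

```sh
# keep 1000 reads from a single file
rasusa reads -n 1000 in.fq > out.fq
# keep 20 read pairs; the read ids will match across the two outputs
rasusa reads -n 20 -o out.r1.fq -o out.r2.fq r1.fq r2.fq
```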
Fraction of reads
-f, --frac
`reads` only
Explicitly set the fraction of total reads in the subsample. The value given to this
option can be a float or a percentage - i.e., -f 0.5 and -f 50 will both take half
of the reads.
Note: if this option is given, genome size and coverage are not required.
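For example (the file name is a placeholder), both of these should keep half of the reads:

```sh
rasusa reads -f 0.5 in.fq > half.fq
rasusa reads -f 50 in.fq > half.fq
```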
Random seed
-s, --seed
This option allows you to specify the random seed used by the random subsampler. By explicitly setting this parameter, you make the subsample for the input reproducible. You only need to pass this parameter if you are likely to want to subsample the same input file again in the future and want the same subset of reads. However, if you forget to use this option, the seed generated by the system will be printed to the log output, allowing you to use it in the future.
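As a sketch (file names and the seed value are arbitrary), running the same command twice with the same seed should yield identical subsamples:

```sh
rasusa reads -c 30 -g 4.6mb -s 88 in.fq > a.fq
rasusa reads -c 30 -g 4.6mb -s 88 in.fq > b.fq
# a.fq and b.fq should contain the same reads
```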
Verbosity
-v
Adding this optional flag will make the logging more verbose. By default, logging will produce messages considered "info" or above (see here for more details). If verbosity is switched on, you will additionally get "debug" level logging messages.
Full usage
```text
$ rasusa --help
Randomly subsample reads or alignments

Usage: rasusa [OPTIONS]

Commands:
  reads  Randomly subsample reads
  aln    Randomly subsample alignments to a specified depth of coverage
  cite   Get a bibtex formatted citation for this package
  help   Print this message or the help of the given subcommand(s)

Options:
  -v             Switch on verbosity
  -h, --help     Print help
  -V, --version  Print version
```
reads command
```text
$ rasusa reads --help
Randomly subsample reads

Usage: rasusa reads [OPTIONS]

Arguments:
          For paired Illumina, the order matters. i.e., R1 then R2.

Options:
  -o, --output
          For paired Illumina pass this flag twice `-o o1.fq -o o2.fq`

          NOTE: The order of the pairs is assumed to be the same as the input - e.g., R1 then R2. This option is required for paired input.

  -g, --genome-size
          Alternatively, a FASTA/Q index file can be provided and the genome size will be set to the sum of all reference sequences.

          If --bases is not provided, this option and --coverage are required

  -c, --coverage
          If --bases is not provided, this option and --genome-size are required

  -b, --bases
          If this option is given, --coverage and --genome-size are ignored

  -n, --num
          If paired-end reads are passed, this is the number of (matched) reads from EACH file. This option accepts the same format as genome size - e.g., 1k will take 1000 reads

  -f, --frac
          Values >1 and <=100 will be automatically converted - e.g., 25 => 0.25

  -s, --seed

  -v
          Switch on verbosity

  -O, --output-type
          u: uncompressed; b: Bzip2; g: Gzip; l: Lzma; x: Xz (Lzma); z: Zstd

          Rasusa will attempt to infer the output compression format automatically from the filename extension. This option is used to override that. If writing to stdout, the default is uncompressed

  -l, --compress-level <1-21>
          Compression level to use if compressing output. Uses the default level for the format if not specified

  -h, --help
          Print help (see a summary with '-h')

  -V, --version
          Print version
```
aln command
```text
$ rasusa aln --help
Randomly subsample alignments to a specified depth of coverage

Usage: rasusa aln [OPTIONS] --coverage

Arguments:

Options:
  -o, --output
          The output is not guaranteed to be sorted. We recommend piping the output to `samtools sort`

  -O, --output-type

  -c, --coverage

  -s, --seed

      --step-size <INT>
          When a region has less than the desired coverage, the step size to move along the chromosome to find more reads.

          The lowest of the step and the minimum end coordinate of the reads in the region will be used. This parameter can have a significant impact on the runtime of the subsampling process.

          [default: 100]

  -h, --help
          Print help (see a summary with '-h')

  -V, --version
          Print version
```
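As an illustrative sketch (the step size value and file names are arbitrary), changing `--step-size` can have a significant impact on runtime, so you may want to experiment with it:

```sh
rasusa aln -c 5 --step-size 1000 -s 1 in.bam | samtools sort -o out.bam
```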
Benchmark
“Time flies like an arrow; fruit flies like a banana.”
― Anthony G. Oettinger
The real question is: will rasusa just needlessly eat away at your precious time on
earth?
To do this benchmark, I am going to use hyperfine.
The data I used comes from
Note, these benchmarks are for reads only as there is no other tool that replicates the functionality of aln.
Single long read input
Download and rename the fastq
```shell
URL="ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR649/008/SRR6490088/SRR6490088_1.fastq.gz"
wget "$URL" -O - | gzip -d -c > tb.fq
```
The file size is 2.9G, and it has 379,547 reads.
We benchmark against filtlong using the same strategy outlined in
Motivation.
```shell
TB_GENOME_SIZE=4411532
COVG=50
TARGET_BASES=$(( TB_GENOME_SIZE * COVG ))
FILTLONG_CMD="filtlong --target_bases $TARGET_BASES tb.fq"
RASUSA_CMD="rasusa reads tb.fq -c $COVG -g $TB_GENOME_SIZE -s 1"
hyperfine --warmup 3 --runs 10 --export-markdown results-single.md \
    "$FILTLONG_CMD" "$RASUSA_CMD"
```
Results
| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---------------------------------------------|---------------:|--------:|--------:|-------------:|
| filtlong --target_bases 220576600 tb.fq | 21.685 ± 0.055 | 21.622 | 21.787 | 21.77 ± 0.29 |
| rasusa reads tb.fq -c 50 -g 4411532 -s 1 | 0.996 ± 0.013 | 0.983 | 1.023 | 1.00 |
Summary: rasusa ran 21.77 ± 0.29 times faster than filtlong.
Paired-end input
Download and then deinterleave the fastq with pyfastaq
```shell
URL="ftp://ftp.sra.ebi.ac.uk/vol1/fastq/SRR648/008/SRR6488968/SRR6488968.fastq.gz"
wget "$URL" -O - | gzip -d -c - | fastaq deinterleave - r1.fq r2.fq
```
Each file's size is 179M and has 283,590 reads.
For this benchmark, we will use seqtk. We will also test seqtk's 2-pass
mode as this is analogous to rasusa reads.
```shell
NUM_READS=140000
SEQTK_CMD_1="seqtk sample -s 1 r1.fq $NUM_READS > /tmp/r1.fq; seqtk sample -s 1 r2.fq $NUM_READS > /tmp/r2.fq;"
SEQTK_CMD_2="seqtk sample -2 -s 1 r1.fq $NUM_READS > /tmp/r1.fq; seqtk sample -2 -s 1 r2.fq $NUM_READS > /tmp/r2.fq;"
RASUSA_CMD="rasusa reads r1.fq r2.fq -n $NUM_READS -s 1 -o /tmp/r1.fq -o /tmp/r2.fq"
hyperfine --warmup 10 --runs 100 --export-markdown results-paired.md \
    "$SEQTK_CMD_1" "$SEQTK_CMD_2" "$RASUSA_CMD"
```
Results
| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:--------------------------------------------------------------------------------------------------|--------------:|---------:|---------:|------------:|
| seqtk sample -s 1 r1.fq 140000 > /tmp/r1.fq; seqtk sample -s 1 r2.fq 140000 > /tmp/r2.fq; | 907.7 ± 23.6 | 875.4 | 997.8 | 1.84 ± 0.62 |
| seqtk sample -2 -s 1 r1.fq 140000 > /tmp/r1.fq; seqtk sample -2 -s 1 r2.fq 140000 > /tmp/r2.fq; | 870.8 ± 54.9 | 818.2 | 1219.8 | 1.77 ± 0.61 |
| rasusa reads r1.fq r2.fq -n 140000 -s 1 -o /tmp/r1.fq -o /tmp/r2.fq | 492.2 ± 165.4 | 327.4 | 887.4 | 1.00 |
Summary: rasusa reads ran 1.84 times faster than seqtk (1-pass) and 1.77 times faster
than seqtk (2-pass).
So, rasusa reads is faster than seqtk but doesn't require a fixed number of reads -
allowing you to avoid doing maths to determine how many reads you need to downsample to
a specific coverage. 🤓
Contributing
If you would like to help improve rasusa you are very welcome!
For changes to be accepted, they must pass the CI and coverage checks. These include:
- Code is formatted with `rustfmt`. This can be done by running `cargo fmt` in the project directory.
- There are no compiler errors/warnings. You can check this by running `cargo clippy --all-features --all-targets -- -D warnings`.
- Code coverage has not reduced. If you want to check coverage before pushing changes, I use `kcov`.
Citing
If you use rasusa in your research, it would be very much appreciated if you could
cite it.
Hall, M. B., (2022). Rasusa: Randomly subsample sequencing reads to a specified coverage. Journal of Open Source Software, 7(69), 3941, https://doi.org/10.21105/joss.03941
Bibtex
You can get the following citation by running rasusa cite
```Bibtex
@article{Hall2022,
  doi = {10.21105/joss.03941},
  url = {https://doi.org/10.21105/joss.03941},
  year = {2022},
  publisher = {The Open Journal},
  volume = {7},
  number = {69},
  pages = {3941},
  author = {Michael B. Hall},
  title = {Rasusa: Randomly subsample sequencing reads to a specified coverage},
  journal = {Journal of Open Source Software}
}
```
Owner
- Name: Michael Hall
- Login: mbhall88
- Kind: user
- Location: Sunshine Coast, Australia
- Company: University of Queensland | UQCCR
- Website: https://mbh.sh
- Repositories: 130
- Profile: https://github.com/mbhall88
Postdoc @ University of Queensland with @LeahRoberts. Bioinformatics | Nanopore | Microbial Genomics | Software Dev.
JOSS Publication
Rasusa: Randomly subsample sequencing reads to a specified coverage
Authors
Michael B. Hall
Tags
bioinformatics genomics fastq fasta subsampling random
GitHub Events
Total
- Create event: 2
- Release event: 1
- Issues event: 7
- Watch event: 29
- Issue comment event: 27
- Push event: 6
- Pull request review event: 3
- Pull request review comment event: 2
- Pull request event: 4
- Fork event: 2
Last Year
- Create event: 2
- Release event: 1
- Issues event: 7
- Watch event: 29
- Issue comment event: 27
- Push event: 6
- Pull request review event: 3
- Pull request review comment event: 2
- Pull request event: 4
- Fork event: 2
Committers
Last synced: 5 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Michael Hall | m****l@m****h | 188 |
| Michael Hall | m****8@g****m | 46 |
| dependabot[bot] | 4****] | 2 |
| Pierre Marijon | p****t@a****r | 1 |
| George Bouras | 8****3 | 1 |
| Andrew Kane | a****w@a****g | 1 |
| Pierre Marijon | p****n@h****e | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 42
- Total pull requests: 20
- Average time to close issues: 5 months
- Average time to close pull requests: about 7 hours
- Total issue authors: 23
- Total pull request authors: 5
- Average comments per issue: 2.69
- Average comments per pull request: 0.5
- Merged pull requests: 20
- Bot issues: 0
- Bot pull requests: 2
Past Year
- Issues: 3
- Pull requests: 4
- Average time to close issues: 7 days
- Average time to close pull requests: about 11 hours
- Issue authors: 3
- Pull request authors: 3
- Average comments per issue: 6.0
- Average comments per pull request: 0.5
- Merged pull requests: 4
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- mbhall88 (16)
- tseemann (3)
- natir (2)
- npdungca (2)
- camilogarciabotero (1)
- ilyavs (1)
- nbenzakour (1)
- k3yavi (1)
- nirajmehta96 (1)
- MostafaYA (1)
- pmielle (1)
- AnupamGautam (1)
- ankane (1)
- genignored (1)
- IndreNav (1)
Pull Request Authors
- mbhall88 (23)
- ankane (2)
- gbouras13 (2)
- dependabot[bot] (2)
- natir (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: cargo 16,708 total
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 15
- Total maintainers: 1
crates.io: rasusa
Randomly subsample reads or alignments
- Homepage: https://github.com/mbhall88/rasusa
- Documentation: https://docs.rs/rasusa/
- License: non-standard
- Latest release: 2.1.1 (published 6 months ago)
Rankings
Maintainers (1)
Dependencies
- adler 1.0.2
- aho-corasick 0.7.18
- anyhow 1.0.58
- assert_cmd 0.10.2
- atty 0.2.14
- autocfg 1.1.0
- bgzip 0.2.1
- bitflags 1.3.2
- buf_redux 0.8.4
- bytecount 0.6.3
- bzip2 0.4.3
- bzip2-sys 0.1.11+1.0.8
- cc 1.0.73
- cfg-if 1.0.0
- chrono 0.4.19
- clap 3.2.8
- clap_derive 3.2.7
- clap_lex 0.2.4
- colored 1.9.3
- crc32fast 1.3.2
- difference 2.0.0
- escargot 0.3.1
- fastrand 1.7.0
- fern 0.5.9
- flate2 1.0.24
- float-cmp 0.8.0
- getrandom 0.1.16
- hashbrown 0.12.1
- heck 0.4.0
- hermit-abi 0.1.19
- indexmap 1.9.1
- instant 0.1.12
- itoa 1.0.2
- jobserver 0.1.24
- lazy_static 1.4.0
- libc 0.2.126
- log 0.4.17
- lzma-sys 0.1.19
- memchr 2.5.0
- miniz_oxide 0.5.3
- needletail 0.4.1
- niffler 2.4.0
- normalize-line-endings 0.3.0
- num-integer 0.1.45
- num-traits 0.2.15
- once_cell 1.12.1
- os_str_bytes 6.1.0
- pkg-config 0.3.25
- ppv-lite86 0.2.16
- predicates 1.0.8
- predicates-core 1.0.3
- predicates-tree 1.0.5
- proc-macro-error 1.0.4
- proc-macro-error-attr 1.0.4
- proc-macro2 1.0.40
- quote 1.0.20
- rand 0.7.3
- rand_chacha 0.2.2
- rand_core 0.5.1
- rand_hc 0.2.0
- rand_pcg 0.2.1
- redox_syscall 0.2.13
- regex 1.5.6
- regex-syntax 0.6.26
- remove_dir_all 0.5.3
- ryu 1.0.10
- safemem 0.3.3
- serde 1.0.138
- serde_derive 1.0.138
- serde_json 1.0.82
- strsim 0.10.0
- syn 1.0.98
- tempfile 3.3.0
- termcolor 1.1.3
- termtree 0.2.4
- textwrap 0.15.0
- thiserror 1.0.31
- thiserror-impl 1.0.31
- time 0.1.44
- unicode-ident 1.0.1
- version_check 0.9.4
- wasi 0.9.0+wasi-snapshot-preview1
- wasi 0.10.0+wasi-snapshot-preview1
- winapi 0.3.9
- winapi-i686-pc-windows-gnu 0.4.0
- winapi-util 0.1.5
- winapi-x86_64-pc-windows-gnu 0.4.0
- xz2 0.1.7
- zstd 0.7.0+zstd.1.4.9
- zstd-safe 3.1.0+zstd.1.4.9
- zstd-sys 1.5.0+zstd.1.4.9
- assert_cmd 0.10 development
- predicates 1 development
- tempfile 3.1.0 development
- anyhow 1.0
- chrono 0.4
- clap 3.2.8
- fern 0.5
- log 0.4
- needletail 0.4.1
- niffler 2.3
- rand 0.7.2
- rand_pcg 0.2.1
- regex 1.5.5
- thiserror 1.0
- actions-rs/cargo v1 composite
- actions-rs/toolchain v1 composite
- actions/checkout v2 composite
- actions/upload-artifact master composite
- softprops/action-gh-release 59c3b4891632ff9a897f99a91d7bc557467a3a22 composite
- actions-rs/cargo v1 composite
- actions-rs/tarpaulin v0.1 composite
- actions-rs/toolchain v1 composite
- actions/cache v2 composite
- actions/checkout v2 composite
- actions/upload-artifact v1 composite
- codecov/codecov-action v1 composite
- bash 5.1 build
- rust 1.62 build
