variantbenchmarking
Pipeline to evaluate and validate the accuracy of variant calling methods in genomic research
Science Score: 57.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ✓ DOI references: Found 10 DOI reference(s) in README
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (10.1%) to scientific vocabulary
Keywords
Repository
Basic Info
- Host: GitHub
- Owner: nf-core
- License: mit
- Language: Nextflow
- Default Branch: master
- Homepage: https://nf-co.re/variantbenchmarking
- Size: 21.7 MB
Statistics
- Stars: 34
- Watchers: 174
- Forks: 16
- Open Issues: 23
- Releases: 4
Topics
Metadata Files
README.md
Introduction
nf-core/variantbenchmarking is designed to evaluate and validate the accuracy of variant calling methods in genomic research. Initially, the pipeline is well tuned for available gold standard truth sets (for example, Genome in a Bottle and SEQC2 samples), but it can be used to compare any two variant calling results. The workflow provides benchmarking tools for small variants including SNVs and INDELs, Structural Variants (SVs) and Copy Number Variations (CNVs) for germline and somatic analysis.
The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process which makes it much easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!
The workflow involves several key processes to ensure reliable and reproducible results as follows:
Standardization and normalization of variants:
This initial step ensures consistent formatting and alignment of variants in test and truth VCF files for accurate comparison.
- Subsample if input test vcf is multisample (bcftools view)
- Homogenization of multi-allelic variants, MNPs and SVs (including imprecise paired breakends and single breakends) (variant-extractor)
- Reformatting test VCF files from different SV callers (svync)
- Rename sample names in test and truth VCF files (bcftools reheader)
- Splitting multi-allelic variants in test and truth VCF files (bcftools norm)
- Deduplication of variants in test and truth VCF files (bcftools norm)
- Left aligning of variants in test and truth VCF files (bcftools norm)
- Use prepy in order to normalize test files. This option is only applicable for hap.py benchmarking of germline analysis (prepy)
- Split SNVs and indels if the given test VCF contains both. This is only applicable for somatic analysis (bcftools view)
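As a rough illustration of what one of these normalization steps means: splitting a multi-allelic record (as `bcftools norm -m-` does) turns one VCF line with several ALT alleles into one record per allele. The `split_multiallelic` helper below is hypothetical illustration code, not part of the pipeline.

```python
# Hypothetical sketch of multi-allelic splitting, mimicking the effect
# of `bcftools norm -m-` on a single record (chrom, pos, ref, alt).
def split_multiallelic(chrom, pos, ref, alt_field):
    """Split a comma-separated ALT field into one record per allele."""
    return [(chrom, pos, ref, alt) for alt in alt_field.split(",")]

# A record with two ALT alleles becomes two bi-allelic records.
records = split_multiallelic("chr1", 100, "A", "G,T")
print(records)  # [('chr1', 100, 'A', 'G'), ('chr1', 100, 'A', 'T')]
```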
Filtering options:
Applying filters during the benchmarking process itself can make it impossible to compare different benchmarking strategies. Therefore, for users who want to compare benchmarking methods, this subworkflow provides filtering options for variants upfront.
- Filtration of contigs (bcftools view)
- Include or exclude SNVs and INDELs (bcftools filter)
- Size and quality filtering for SVs (SURVIVOR filter)
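To make the SV size filtering concrete, the sketch below keeps only SVs whose length falls inside a given range, which is the kind of criterion `SURVIVOR filter` applies. The function, field names and thresholds are hypothetical illustration values, not pipeline defaults.

```python
# Hypothetical sketch of an SV size filter in the spirit of
# `SURVIVOR filter`: keep SVs whose |SVLEN| is within [min_size, max_size].
def size_filter(svs, min_size=50, max_size=400_000):
    return [sv for sv in svs if min_size <= abs(sv["svlen"]) <= max_size]

svs = [{"id": "DEL1", "svlen": -30},        # too small
       {"id": "DEL2", "svlen": -5000},      # kept
       {"id": "DUP1", "svlen": 1_000_000}]  # too large
print([sv["id"] for sv in size_filter(svs)])  # ['DEL2']
```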
Liftover of vcfs:
This subworkflow provides options to convert the genome coordinates of the truth VCF, test VCFs and the high confidence BED file to a new assembly. Gold standard truth files are built upon specific reference genomes, so whether liftover is necessary depends on the test VCF in question. Lifting over one or more test VCFs is also possible.
- Create sequence dictionary for the reference (picard CreateSequenceDictionary). This file can be saved and reused.
- Lifting over VCFs (picard LiftoverVcf)
- Lifting over high confidence coordinates (UCSC liftover)
Statistical inference of input test and truth variants:
This step provides insights into the distribution of variants before benchmarking by extracting variant statistics:
- SNVs, INDELs and complex variants (bcftools stats)
- SVs by type (SURVIVOR stats)
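The small-variant side of these statistics boils down to classifying each record and tallying the classes, in the spirit of the per-type counts `bcftools stats` reports. The `classify` helper below is a hypothetical simplification (real classification also handles MNPs and complex variants).

```python
# Hypothetical sketch of tallying SNVs vs. INDELs from (REF, ALT) pairs,
# the kind of summary `bcftools stats` produces before benchmarking.
from collections import Counter

def classify(ref, alt):
    # Single-base REF and ALT -> SNV; any length difference -> INDEL.
    return "SNV" if len(ref) == 1 and len(alt) == 1 else "INDEL"

calls = [("A", "G"), ("AT", "A"), ("C", "CGG"), ("G", "T")]
counts = Counter(classify(r, a) for r, a in calls)
print(counts)  # Counter({'SNV': 2, 'INDEL': 2})
```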
Benchmarking of variants:
The actual benchmarking of variants is split between SVs and small variants:
Available methods for germline and somatic structural variant (SV) benchmarking are:
- Truvari (truvari bench)
- SVanalyzer (svanalyzer benchmark)
- Rtgtools (only for BND) (rtg bndeval)
[!NOTE] Please note that there is no somatic specific tool for SV benchmarking in this pipeline.
Available methods for germline and somatic CNVs (copy number variations) are:
- Truvari (truvari bench)
- Wittyer (witty.er)
- Intersection (bedtools intersect)
[!NOTE] Please note that there is no somatic specific tool for CNV benchmarking in this pipeline.
Available methods for small variants: SNVs and INDELs:
- Germline variant benchmarking using (rtg vcfeval)
- Germline variant benchmarking using (hap.py)
- Somatic variant benchmarking using (rtg vcfeval --squash-ploidy)
- Somatic variant benchmarking using (som.py)
[!NOTE] Please note that using hap.py and som.py with rtgtools as the comparison engine is also possible. Check conf/tests/test_ga4gh.config for an example.
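All of the benchmarking tools above ultimately reduce a comparison to TP, FP and FN counts, from which the standard summary metrics are derived. The sketch below shows those formulas with made-up counts; it is not pipeline code.

```python
# Standard benchmarking metrics derived from TP/FP/FN counts
# (precision, recall/sensitivity, F1). Counts here are illustrative.
def metrics(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

p, r, f1 = metrics(tp=90, fp=10, fn=30)
print(round(p, 3), round(r, 3), round(f1, 3))  # 0.9 0.75 0.818
```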
Intersection of benchmark regions:
Intersecting test and truth BED regions produces benchmark metrics. Intersection analysis is especially recommended for CNV benchmarking, where result reports may vary per tool.
- Convert an SV or CNV VCF file to a BED file if no regions file is provided for the test case (SVTK vcf2bed)
- Convert a VCF file to a BED file if no regions file is provided for the test case (Bedops convert2bed)
- Intersect the regions and gather benchmarking statistics (bedtools intersect)
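The core of the intersection step can be sketched as pairwise overlap of half-open (start, end) intervals per chromosome, which is roughly what `bedtools intersect` computes (the real tool is far more efficient and feature-rich). The function below is a hypothetical illustration.

```python
# Hypothetical, naive sketch of BED interval intersection in the spirit
# of `bedtools intersect`: report the overlap of every intersecting pair.
def intersect(a, b):
    hits = []
    for ca, sa, ea in a:
        for cb, sb, eb in b:
            # Half-open intervals overlap iff each starts before the other ends.
            if ca == cb and sa < eb and sb < ea:
                hits.append((ca, max(sa, sb), min(ea, eb)))
    return hits

test_bed = [("chr1", 100, 200), ("chr1", 500, 600)]
truth_bed = [("chr1", 150, 250)]
print(intersect(test_bed, truth_bed))  # [('chr1', 150, 200)]
```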
Comparison of benchmarking results per TP, FP and FN files
It is essential to compare benchmarking results in order to infer uniquely or commonly seen TPs, FPs and FNs.
- Merging TP, FP and FN results for happy, rtgtools and sompy (bcftools merge)
- Merging TP, FP and FN results for Truvari and SVanalyzer (SURVIVOR merge)
- Conversion of VCF files to CSV to infer common and unique variants per caller (python script)
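After conversion to a tabular form, inferring common vs. unique variants per caller is essentially set arithmetic on variant keys. The sketch below uses hypothetical (chrom, pos, ref, alt) keys for two made-up callers; it is illustration only, not the pipeline's Python script.

```python
# Hypothetical sketch: common and caller-unique variants via set
# operations on (chrom, pos, ref, alt) keys after merging.
delly = {("chr1", 100, "A", "<DEL>"), ("chr2", 500, "T", "<DUP>")}
manta = {("chr1", 100, "A", "<DEL>"), ("chr3", 900, "G", "<INV>")}

common = delly & manta            # seen by both callers
unique_to_delly = delly - manta   # seen only by delly
print(sorted(common))             # [('chr1', 100, 'A', '<DEL>')]
print(sorted(unique_to_delly))    # [('chr2', 500, 'T', '<DUP>')]
```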
Reporting of benchmark results
This step generates a comprehensive report that consolidates all benchmarking results.
- Merging summary statistics per benchmarking tool (python script)
- Plotting benchmark metrics per benchmarking tool (R script)
- Create a visual HTML report for integration with NCBENCH (datavzrd)
- Apply MultiQC to visualize results
Usage
[!NOTE] If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with `-profile test` before running the workflow on actual data.
First, prepare a samplesheet with your input data that looks as follows:
samplesheet.csv:
```csv
id,test_vcf,caller
test1,test1.vcf.gz,delly
test2,test2.vcf,gatk
test3,test3.vcf.gz,cnvkit
```
Each row represents a VCF file (test-query file). For each VCF file, the variant calling method (caller) has to be defined.
The user has to provide `truth_vcf` and `truth_id` in config files.
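For larger benchmarks, the samplesheet above can be generated programmatically rather than by hand. A minimal sketch using Python's csv module, mirroring the example rows (file name and rows are taken from the example above):

```python
# Generate the example samplesheet with the csv module so quoting and
# headers stay consistent; rows mirror the hand-written example.
import csv

rows = [("test1", "test1.vcf.gz", "delly"),
        ("test2", "test2.vcf", "gatk"),
        ("test3", "test3.vcf.gz", "cnvkit")]

with open("samplesheet.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["id", "test_vcf", "caller"])
    writer.writerows(rows)
```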
[!NOTE] There are publicly available truth sources. For germline analysis, it is common to use Genome in a Bottle (GIAB) variants. There are various types of gold standard truths and high confidence regions for the hg37 and hg38 references; please select and use them carefully. For somatic analysis, the SEQC2 project released SNV, INDEL and CNV regions, which can be selected and used in the same way.
Here you can find example combinations of truth files
For more details and further functionality, please refer to the usage documentation and the parameter documentation.
Now, you can run the pipeline using:
```bash
nextflow run nf-core/variantbenchmarking \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --outdir <OUTDIR> \
   --genome GRCh37 \
   --analysis germline \
   --truth_id HG002 \
   --truth_vcf truth.vcf.gz
```
[!WARNING] Please provide pipeline parameters via the CLI or the Nextflow `-params-file` option. Custom config files, including those provided by the `-c` Nextflow option, can be used to provide any configuration except for parameters; see docs. The Conda profile is not available for the SVanalyzer (SVBenchmark) tool; if you are planning to use that tool, choose either Docker or Singularity.
Example usages
This pipeline enables quite a number of subworkflows suitable for different benchmarking scenarios. Please go through this documentation to learn some example usages, which discuss the test config files under conf/tests and tests/.
Pipeline output
To see the results of an example test run with a full size dataset refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.
This pipeline outputs benchmarking results per method, in addition to the inferred and compared statistics.
Credits
nf-core/variantbenchmarking was originally written by Kübra Narcı (@kubranarci) as a part of benchmarking studies in the German Human Genome-Phenome Archive (GHGA) project.
We thank the following people for their extensive assistance in the development of this pipeline:
- Nicolas Vannieuwkerke (@nvnieuwk),
- Maxime Garcia (@maxulysse),
- Sameesh Kher (@khersameesh24)
- Florian Heyl (@heylf)
- Krešimir Beštak (@kbestak)
- Elad Herz (@EladH1)
Acknowledgements
Contributions and Support
If you would like to contribute to this pipeline, please see the contributing guidelines.
For further information or help, don't hesitate to get in touch on the Slack #variantbenchmarking channel (you can join with this invite).
Citations
If you use nf-core/variantbenchmarking for your analysis, please cite it using the following doi: 10.5281/zenodo.14916661
An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.
You can cite the nf-core publication as follows:
The nf-core framework for community-curated bioinformatics pipelines.
Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.
Owner
- Name: nf-core
- Login: nf-core
- Kind: organization
- Email: core@nf-co.re
- Website: http://nf-co.re
- Twitter: nf_core
- Repositories: 84
- Profile: https://github.com/nf-core
A community effort to collect a curated set of analysis pipelines built using Nextflow.
Citation (CITATIONS.md)
# nf-core/variantbenchmarking: Citations

## [nf-core](https://pubmed.ncbi.nlm.nih.gov/32055031/)

> Ewels PA, Peltzer A, Fillinger S, Patel H, Alneberg J, Wilm A, Garcia MU, Di Tommaso P, Nahnsen S. The nf-core framework for community-curated bioinformatics pipelines. Nat Biotechnol. 2020 Mar;38(3):276-278. doi: 10.1038/s41587-020-0439-x. PubMed PMID: 32055031.

## [Nextflow](https://pubmed.ncbi.nlm.nih.gov/28398311/)

> Di Tommaso P, Chatzou M, Floden EW, Barja PP, Palumbo E, Notredame C. Nextflow enables reproducible computational workflows. Nat Biotechnol. 2017 Apr 11;35(4):316-319. doi: 10.1038/nbt.3820. PubMed PMID: 28398311.

## [nf-test](https://doi.org/10.1101/2024.05.25.595877)

> Forer, L., & Schönherr, S. (2024). Improving the Reliability and Quality of Nextflow Pipelines with nf-test. bioRxiv. https://doi.org/10.1101/2024.05.25.595877

## [nf-prov](https://github.com/nextflow-io/nf-prov)

## [nf-schema](https://nextflow-io.github.io/nf-schema/latest/)

## Pipeline tools

- [Bcftools](http://samtools.github.io/bcftools/bcftools.html)

  > Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R; 1000 Genome Project Data Processing Subgroup. The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009 Aug 15;25(16):2078-9. doi: 10.1093/bioinformatics/btp352. Epub 2009 Jun 8. PMID: 19505943; PMCID: PMC2723002.

- [BEDTools](https://bedtools.readthedocs.io/)

  > Aaron R. Quinlan, Ira M. Hall, BEDTools: a flexible suite of utilities for comparing genomic features, Bioinformatics, Volume 26, Issue 6, March 2010, Pages 841–842, https://doi.org/10.1093/bioinformatics/btq033

- [bedops](https://bedops.readthedocs.io/en/latest/)

  > Shane Neph, M. Scott Kuehn, Alex P. Reynolds, Eric Haugen, Robert E. Thurman, Audra K. Johnson, Eric Rynes, Matthew T. Maurano, Jeff Vierstra, Sean Thomas, Richard Sandstrom, Richard Humbert, John A. Stamatoyannopoulos, BEDOPS: high-performance genomic feature operations, Bioinformatics, Volume 28, Issue 14, July 2012, Pages 1919–1920, https://doi.org/10.1093/bioinformatics/bts277

- [datavzrd](https://datavzrd.github.io/docs/index.html)

- [hap.py](https://www.illumina.com/products/by-type/informatics-products/basespace-sequence-hub/apps/hap-py-benchmarking.html)

- [manta](https://github.com/Illumina/manta/blob/v1.6.0/docs/userGuide/README.md)

  > Xiaoyu Chen, Ole Schulz-Trieglaff, Richard Shaw, Bret Barnes, Felix Schlesinger, Morten Källberg, Anthony J. Cox, Semyon Kruglyak, Christopher T. Saunders, Manta: rapid detection of structural variants and indels for germline and cancer sequencing applications, Bioinformatics, Volume 32, Issue 8, April 2016, Pages 1220–1222, https://doi.org/10.1093/bioinformatics/btv710

- [MultiQC](https://pubmed.ncbi.nlm.nih.gov/27312411/)

  > Ewels P, Magnusson M, Lundin S, Käller M. MultiQC: summarize analysis results for multiple tools and samples in a single report. Bioinformatics. 2016 Oct 1;32(19):3047-8. doi: 10.1093/bioinformatics/btw354. Epub 2016 Jun 16. PubMed PMID: 27312411; PubMed Central PMCID: PMC5039924.

- [picard](https://gatk.broadinstitute.org/hc/en-us/articles/360036712531-CreateSequenceDictionary-Picard)

- [RTG Tools](https://www.realtimegenomics.com/products/rtg-tools)

  > John G. Cleary, Ross Braithwaite, Kurt Gaastra, Brian S. Hilbush, Stuart Inglis, Sean A. Irvine, Alan Jackson, Richard Littin, Mehul Rathod, David Ware, Justin M. Zook, Len Trigg, Francisco M. De La Vega. bioRxiv 023754; doi: https://doi.org/10.1101/023754

- [SURVIVOR](https://github.com/fritzsedlazeck/SURVIVOR/wiki)

  > Jeffares, D., Jolly, C., Hoti, M. et al. Transient structural variations have strong effects on quantitative traits and reproductive isolation in fission yeast. Nat Commun 8, 14061 (2017). https://doi.org/10.1038/ncomms14061

- [SVanalyzer](https://svanalyzer.readthedocs.io/en/latest/index.html)

- [svtk](https://github.com/broadinstitute/gatk-sv/tree/master/src/svtk)

- [svync](https://github.com/nvnieuwk/svync)

- [tabix](https://www.htslib.org/doc/tabix.html)

  > Heng Li, Tabix: fast retrieval of sequence features from generic TAB-delimited files, Bioinformatics, Volume 27, Issue 5, March 2011, Pages 718–719, https://doi.org/10.1093/bioinformatics/btq671

- [truvari](https://github.com/ACEnglish/truvari)

  > English, A.C., Menon, V.K., Gibbs, R.A. et al. Truvari: refined structural variant comparison preserves allelic diversity. Genome Biol 23, 271 (2022). https://doi.org/10.1186/s13059-022-02840-6

- [UCSC](http://hgdownload.cse.ucsc.edu/admin/exe)

  > Hinrichs AS, Karolchik D, Baertsch R, Barber GP, Bejerano G, Clawson H, Diekhans M, Furey TS, Harte RA, Hsu F et al. The UCSC Genome Browser Database: update 2006. Nucleic Acids Res. 2006 Jan 1;34(Database issue):D590-8.

- [witty.er](https://github.com/Illumina/witty.er)

## R packages

- [ggplot2](https://ggplot2.tidyverse.org/)

  > Wickham H (2016). ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York. ISBN 978-3-319-24277-4, https://ggplot2.tidyverse.org.

- [reshape2](https://cran.r-project.org/web/packages/reshape2/index.html)

  > Wickham H (2007). “Reshaping Data with the reshape Package.” Journal of Statistical Software, 21(12), 1–20. http://www.jstatsoft.org/v21/i12/.

## Python packages

- [pysam](https://github.com/pysam-developers/pysam)

## Software packaging/containerisation tools

- [Anaconda](https://anaconda.com)

  > Anaconda Software Distribution. Computer software. Vers. 2-2.4.0. Anaconda, Nov. 2016. Web.

- [Bioconda](https://pubmed.ncbi.nlm.nih.gov/29967506/)

  > Grüning B, Dale R, Sjödin A, Chapman BA, Rowe J, Tomkins-Tinch CH, Valieris R, Köster J; Bioconda Team. Bioconda: sustainable and comprehensive software distribution for the life sciences. Nat Methods. 2018 Jul;15(7):475-476. doi: 10.1038/s41592-018-0046-7. PubMed PMID: 29967506.

- [BioContainers](https://pubmed.ncbi.nlm.nih.gov/28379341/)

  > da Veiga Leprevost F, Grüning B, Aflitos SA, Röst HL, Uszkoreit J, Barsnes H, Vaudel M, Moreno P, Gatto L, Weber J, Bai M, Jimenez RC, Sachsenberg T, Pfeuffer J, Alvarez RV, Griss J, Nesvizhskii AI, Perez-Riverol Y. BioContainers: an open-source and community-driven framework for software standardization. Bioinformatics. 2017 Aug 15;33(16):2580-2582. doi: 10.1093/bioinformatics/btx192. PubMed PMID: 28379341; PubMed Central PMCID: PMC5870671.

- [Docker](https://dl.acm.org/doi/10.5555/2600239.2600241)

  > Merkel, D. (2014). Docker: lightweight linux containers for consistent development and deployment. Linux Journal, 2014(239), 2. doi: 10.5555/2600239.2600241.

- [Singularity](https://pubmed.ncbi.nlm.nih.gov/28494014/)

  > Kurtzer GM, Sochat V, Bauer MW. Singularity: Scientific containers for mobility of compute. PLoS One. 2017 May 11;12(5):e0177459. doi: 10.1371/journal.pone.0177459. eCollection 2017. PubMed PMID: 28494014; PubMed Central PMCID: PMC5426675.
GitHub Events
Total
- Create event: 52
- Release event: 3
- Issues event: 95
- Watch event: 19
- Delete event: 50
- Issue comment event: 63
- Push event: 230
- Pull request review comment event: 96
- Pull request event: 109
- Pull request review event: 148
- Fork event: 10
Last Year
- Create event: 52
- Release event: 3
- Issues event: 95
- Watch event: 19
- Delete event: 50
- Issue comment event: 63
- Push event: 230
- Pull request review comment event: 96
- Pull request event: 109
- Pull request review event: 148
- Fork event: 10
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 38
- Total pull requests: 48
- Average time to close issues: about 1 month
- Average time to close pull requests: 11 days
- Total issue authors: 4
- Total pull request authors: 6
- Average comments per issue: 0.29
- Average comments per pull request: 0.4
- Merged pull requests: 34
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 37
- Pull requests: 47
- Average time to close issues: about 1 month
- Average time to close pull requests: 4 days
- Issue authors: 4
- Pull request authors: 6
- Average comments per issue: 0.3
- Average comments per pull request: 0.4
- Merged pull requests: 34
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- kubranarci (85)
- EladH1 (2)
- fellen31 (2)
- famosab (1)
- kjellinjonas (1)
- nvnieuwk (1)
- fethalen (1)
Pull Request Authors
- kubranarci (74)
- nf-core-bot (13)
- nvnieuwk (7)
- maxulysse (3)
- famosab (1)
- Elad-herz (1)
- EladH1 (1)
- AtaJadidAhari (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- actions/upload-artifact v4 composite
- seqeralabs/action-tower-launch v2 composite
- actions/upload-artifact v4 composite
- seqeralabs/action-tower-launch v2 composite
- mshick/add-pr-comment v2 composite
- actions/checkout v4 composite
- nf-core/setup-nextflow v1 composite
- actions/stale v9 composite
- actions/setup-python v5 composite
- eWaterCycle/setup-singularity v7 composite
- nf-core/setup-nextflow v1 composite
- actions/checkout b4ffde65f46336ab88eb53be808477a3936bae11 composite
- actions/setup-python 0a5c61591373683505ea898e09a3ea4f39ef2b9c composite
- peter-evans/create-or-update-comment 71345be0265236311c031f5c7866368bd1eff043 composite
- actions/checkout v4 composite
- actions/setup-python v5 composite
- actions/upload-artifact v4 composite
- nf-core/setup-nextflow v1 composite
- dawidd6/action-download-artifact v3 composite
- marocchino/sticky-pull-request-comment v2 composite
- actions/setup-python v5 composite
- rzr/fediverse-action master composite
- zentered/bluesky-post-action v0.1.0 composite
- bcftools 1.18.*
- bcftools 1.18.*
- bcftools 1.18.*
- multiqc 1.21.*
- survivor 1.0.7.*
- survivor 1.0.7.*
- svanalyzer 0.36.*
- svync 0.1.2.*
- htslib 1.19.1.*
- tabix 1.11.*
- htslib 1.19.1.*
- tabix 1.11.*
- htslib 1.19.1.*
- tabix 1.11.*
- truvari 4.1.0.*