Science Score: 57.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 11 DOI reference(s) in README
- ○ Academic publication links: none found
- ○ Academic email domains: none found
- ○ Institutional organization owner: none found
- ○ JOSS paper metadata: none found
- ○ Scientific vocabulary similarity: low similarity (7.8%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: mskilab-org
- License: mit
- Language: R
- Default Branch: master
- Size: 2.66 MB
Statistics
- Stars: 13
- Watchers: 1
- Forks: 3
- Open Issues: 6
- Releases: 0
Metadata Files
README.md
⚠️ This repo is deprecated! Please use our new nf-gOS pipeline via our gosh CLI tool ⚠️
NF-JaBbA (Nextflow - Junction Balance Analysis Pipeline)
Citations
An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.
This pipeline uses code and infrastructure developed and maintained by the nf-core community, reused here under the MIT license.
Most large structural variants in cancer genomes can be detected without long reads. Choo, Z.N., Behr, J.M., Deshpande, A. et al.
Nat Genet. 2023 Nov 9. doi: 10.1038/s41588-023-01540-6.
The nf-core framework for community-curated bioinformatics pipelines.
Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.
Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.
Introduction
mskilab-org/nf-JaBbA is a state-of-the-art bioinformatics pipeline for running JaBbA, our algorithm for MIP-based joint inference of copy number and rearrangement state in cancer whole-genome sequencing data. The pipeline runs all prerequisite modules and generates the inputs needed to run JaBbA. It is designed to take tumor-normal pairs of human samples as input.
We took inspiration from nf-core/Sarek, a workflow for detecting variants in whole genome or targeted sequencing data. nf-jabba is built using Nextflow and the Nextflow DSL2. All the modules use Docker and Singularity containers, for easy execution and reproducibility. Some of the modules/processes are derived from open source nf-core/modules.
This pipeline has been designed to start from FASTQ files or directly from BAM files. Paths to these files should be supplied in a CSV file (please refer to the section below for the input format of the .csv file).
Workflow Summary:
- Alignment to reference genome (currently supports BWA-MEM and BWA-MEM2; a modified version of the Alignment step from nf-core/Sarek is used here)
- Quality control (using FastQC)
- Trimming (using fastp; must be enabled with --trim_fastq)
- Marking duplicates (using GATK MarkDuplicates)
- Base recalibration (using GATK BaseRecalibrator)
- Applying BQSR (using GATK ApplyBQSR)
- Structural variant calling (using SVABA and/or GRIDSS; must be specified via --tools)
- Pileups (using mskilab's custom HetPileups module; must be specified via --tools)
- Generating raw coverages and correcting for GC & mappability bias (using fragCounter; must be specified via --tools)
- Removing biological and technical noise from coverage data (using Dryclean; must be specified via --tools)
- Segmentation using tumor/normal ratios of corrected read counts (using the CBS (circular binary segmentation) algorithm; must be specified via --tools)
- Purity & ploidy estimation (currently supports ASCAT to pass ploidy values to JaBbA; must be specified via --tools)
- Executing JaBbA (using inputs from Dryclean, CBS, HetPileups and/or ASCAT; must be specified via --tools)
Usage
Note: If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.
Setting up the samplesheet.csv file for input:
You need to create a samplesheet containing information about the samples you want to run the pipeline on, and pass its path via the --input flag. Make sure the input is a comma-separated file and contains the headers discussed below. It is highly recommended to provide absolute paths for inputs inside the samplesheet rather than relative paths.
To specify a paired tumor-normal sample, use the same patient ID with different sample IDs and the appropriate status values: a 1 in the status field indicates a tumor sample, while a 0 indicates a normal sample. If there are multiple sample IDs, nf-jabba will treat them as separate samples and write their results to separate folders based on the patient attribute. All runs are separated by patient, ensuring that there is no mixing of outputs.
You need to specify the desired output root directory using the --outdir flag. The outputs will then be stored in your designated folder, organized by tool and sample.
To run the pipeline from the beginning, first create an --input sampleSheet.csv file with your file paths. A typical input would look like this:

```csv
patient,sex,status,sample,lane,fastq_1,fastq_2
TCXX49,XX,0,TCXX49_N,lane_1,/path/to/fastq_1.fq.gz,/path/to/fastq_2.fq.gz
```
Each row represents a pair of fastq files (paired end) for each sample.
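To illustrate the tumor-normal pairing described above, here is a sketch that writes a samplesheet with one normal (status 0) and one tumor (status 1) row for the same patient; the sample names and file paths are placeholders, not project defaults:

```shell
# Write a samplesheet pairing a normal (status 0) and a tumor (status 1)
# sample under the same patient ID. Paths are placeholders.
cat > samplesheet.csv <<'EOF'
patient,sex,status,sample,lane,fastq_1,fastq_2
TCXX49,XX,0,TCXX49_N,lane_1,/path/to/normal_1.fq.gz,/path/to/normal_2.fq.gz
TCXX49,XX,1,TCXX49_T,lane_1,/path/to/tumor_1.fq.gz,/path/to/tumor_2.fq.gz
EOF

# Sanity check: every data row has exactly 7 comma-separated fields.
awk -F',' 'NR > 1 && NF != 7 { bad = 1 } END { exit bad }' samplesheet.csv \
  && echo "samplesheet OK"
```

Because both rows share the patient ID TCXX49, nf-jabba will treat them as a tumor-normal pair.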
After the input file is ready, you can run the pipeline using:
```bash
nextflow run mskilab-org/nf-jabba \
    -profile <docker/singularity/.../institute> \
    --input samplesheet.csv \
    --outdir <OUTDIR> \
    --tools <svaba,fragcounter,dryclean,cbs,hetpileups,ascat,jabba> \
    --genome <GATK.GRCh37/GATK.GRCh38>
```
Warning: Please provide pipeline parameters via the CLI or the Nextflow -params-file option. Custom config files, including those provided by the -c Nextflow option, can be used to provide any configuration except for parameters; see docs.
Discussion of expected fields in the input file and expected inputs for each --step
A typical samplesheet should contain the columns described below:
| Column Name | Description |
|-----------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------|
| patient | Patient or Sample ID. This should differentiate each patient/sample. Note: Each patient can have multiple sample names. |
| sample | Sample ID for each Patient. Should differentiate between tumor and normal. Sample IDs should be unique to Patient IDs |
| lane | If starting with FASTQ files, and if there are multiple lanes for each sample for each patient, mention lane name. Required for --step alignment. |
| sex | If known, please provide the sex of the patient: for male type XY, for female type XX, otherwise put NA. |
| status | This should indicate if your sample is tumor or normal. For normal, write 0, and for tumor, write 1. |
| fastq_1 | Full Path to FASTQ file read 1. The extension should be .fastq.gz or .fq.gz. Required for --step alignment. |
| fastq_2 | Full Path to FASTQ file read 2. The extension should be .fastq.gz or .fq.gz. Required for --step alignment. |
| bam | Full Path to BAM file. The extension should be .bam. Required for --step sv_calling. |
| bai | Full Path to BAM index file. The extension should be .bam.bai. Required for --step sv_calling. |
| cram | Full Path to CRAM file. The extension should be .cram. Required for --step sv_calling if file is of type CRAM. |
| crai | Full Path to CRAM index file. The extension should be .cram.crai. Required for --step sv_calling if file is of type CRAM. |
| table | Full path to Recalibration table file. Required for --step recalibrate. |
| vcf | Full path to VCF file. Required for --step jabba. |
| hets | Full path to HetPileups .txt file. Required for --step jabba. |
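Since malformed status values are an easy mistake to make, a quick pre-flight check can catch them before launching the pipeline. The sketch below is an awk one-liner of my own, not part of nf-jabba, and the samplesheet it writes is a placeholder:

```shell
# Minimal pre-flight check (not part of nf-jabba): verify that the
# 'status' column of a samplesheet contains only 0 (normal) or 1 (tumor).
cat > samplesheet.csv <<'EOF'
patient,sex,status,sample,lane,fastq_1,fastq_2
TCXX49,XX,0,TCXX49_N,lane_1,/path/to/fastq_1.fq.gz,/path/to/fastq_2.fq.gz
EOF

awk -F',' '
  NR == 1 { for (i = 1; i <= NF; i++) if ($i == "status") col = i; next }
  $col != 0 && $col != 1 { print "bad status on line " NR; bad = 1 }
  END { exit bad }
' samplesheet.csv && echo "status column OK"
```

The header row is scanned for the status column index, so the check still works if the column order changes.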
For more information regarding pipeline usage and the inputs necessary for each step, please follow the Usage documentation.
Helpful Core Nextflow Commands:
-resume
If a pipeline process fails or is interrupted, Nextflow can resume from that point rather than starting over from the beginning. Specify -resume on the command line when restarting the pipeline. You can also resume a specific run by supplying its run name: -resume [run-name]. Use the nextflow log command to show previous run names.
-profile
Use this parameter for choosing a configuration profile. Profiles contain configuration presets for different computing environments.
Several generic profiles are provided by default which instruct the pipeline to use software packaged via different methods. Using this option to run the pipeline in containers (Singularity/Docker) is highly recommended.
-c
You can supply custom configuration files using the -c flag and a path to the .config file. This is advised when you want to submit processes to an executor like SLURM or LSF.
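For example, a minimal custom config for a SLURM cluster might look like the sketch below; the queue name and resource limits are placeholders, not project defaults. Pass it with -c slurm.config:

```groovy
// slurm.config -- hypothetical example; queue and limits are placeholders
process {
    executor = 'slurm'
    queue    = 'cpu_short'   // your cluster's partition name
    memory   = '16 GB'
    time     = '12h'
}

executor {
    queueSize = 50           // cap on concurrently submitted jobs
}
```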
-bg
The Nextflow -bg flag launches the Nextflow pipeline as a background process. This allows you to detach or exit your terminal without interrupting the run. A log of the run will be saved inside a file upon completion. You can also use screen or tmux sessions to persist runs.
Containers:
Every module in the pipeline has been containerized. Some modules are partially modified versions of nf-core/modules; these use nf-core containers. Modules that use our lab's packages and scripts were containerized into Docker images, which can be found on our DockerHub.
Warning: JaBbA depends on the CPLEX MIP Optimizer to work. Because CPLEX is proprietary software, it isn't included in the image and needs to be installed by the user. To add CPLEX:

1. Download CPLEX (Linux x86-64). (You may need to use the HTTP method.)
2. Pull the image and run the container:

```bash
docker pull mskilab/jabba:latest
docker run -it --rm --platform linux/amd64 mskilab/jabba:latest
```

3. Copy the CPLEX binary into the container:

```bash
docker cp /PATH/TO/DOWNLOADED_CPLEX.bin CONTAINER_ID:/opt/cplex_studio
```

(If you get a Permission denied error, run chmod 777 /PATH/TO/DOWNLOADED_CPLEX.bin before copying it into the container.)

4. Install CPLEX inside the container:

```bash
/opt/cplex_studio/DOWNLOADED_CPLEX.bin
```

5. When prompted for an installation path, type /opt/cplex. This is what the CPLEX_DIR environment variable is set to.
6. Save your changes to a new image for future use: exit the container (type exit or press Ctrl-D), then run:

```bash
docker commit CONTAINER_ID NEW_IMAGE_ID
```
Debugging any step/process:
To debug any step or process that failed, first check your current execution_trace*.txt file inside the <outdir>/pipeline_info/ folder. There you'll find a hash number for that process. You can use that hash number to locate that process's working directory. This directory will contain multiple .command.* files that correspond to your run and contain valuable information that can help you debug your error. You can also run the .command.sh script to do a manual, isolated execution of the offending process for quick testing.
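The lookup described above can be scripted. The sketch below fabricates a mock trace file to show how a failed task's hash maps to its working directory; the real trace files have more columns and their exact layout can vary between Nextflow versions:

```shell
# Mock execution trace (tab-separated; real files have more columns).
mkdir -p outdir/pipeline_info
printf 'task_id\thash\tname\tstatus\n42\tab/123456\tSVABA\tFAILED\n' \
  > outdir/pipeline_info/execution_trace_demo.txt

# Extract the hash of the failed task...
hash=$(awk -F'\t' '$4 == "FAILED" { print $2 }' \
  outdir/pipeline_info/execution_trace_demo.txt)
echo "failed task hash: $hash"

# ...which prefixes its working directory under work/, e.g.
#   work/ab/123456def...  (contains .command.sh, .command.log, .command.err)
echo "look under: work/${hash}*"
```

From there, inspecting the .command.* files or rerunning .command.sh in isolation is usually the fastest way to reproduce the failure.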
Credits
nf-jabba was written by Tanubrata Dey and Shihab Dider at the Perlmutter Cancer Center and the New York Genome Center.
We thank the following people for their extensive guidance in the development of this pipeline:
- Marcin Imielinski
- Joel Rosiene
Contributions and Support
If you would like to contribute to this pipeline, please see the contributing guidelines.
Owner
- Name: mskilab-org
- Login: mskilab-org
- Kind: organization
- Repositories: 1
- Profile: https://github.com/mskilab-org
Citation (CITATIONS.md)
# mskilab-org/nf-jabba: Citations

## [nf-core](https://pubmed.ncbi.nlm.nih.gov/32055031/)

> Ewels PA, Peltzer A, Fillinger S, Patel H, Alneberg J, Wilm A, Garcia MU, Di Tommaso P, Nahnsen S. The nf-core framework for community-curated bioinformatics pipelines. Nat Biotechnol. 2020 Mar;38(3):276-278. doi: 10.1038/s41587-020-0439-x. PubMed PMID: 32055031.

## [Nextflow](https://pubmed.ncbi.nlm.nih.gov/28398311/)

> Di Tommaso P, Chatzou M, Floden EW, Barja PP, Palumbo E, Notredame C. Nextflow enables reproducible computational workflows. Nat Biotechnol. 2017 Apr 11;35(4):316-319. doi: 10.1038/nbt.3820. PubMed PMID: 28398311.

## Pipeline tools

- [FastQC](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)

  > Andrews, S. (2010). FastQC: A Quality Control Tool for High Throughput Sequence Data [Online]. Available online: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/.

- [MultiQC](https://pubmed.ncbi.nlm.nih.gov/27312411/)

  > Ewels P, Magnusson M, Lundin S, Käller M. MultiQC: summarize analysis results for multiple tools and samples in a single report. Bioinformatics. 2016 Oct 1;32(19):3047-8. doi: 10.1093/bioinformatics/btw354. Epub 2016 Jun 16. PubMed PMID: 27312411; PubMed Central PMCID: PMC5039924.

## Software packaging/containerisation tools

- [Anaconda](https://anaconda.com)

  > Anaconda Software Distribution. Computer software. Vers. 2-2.4.0. Anaconda, Nov. 2016. Web.

- [Bioconda](https://pubmed.ncbi.nlm.nih.gov/29967506/)

  > Grüning B, Dale R, Sjödin A, Chapman BA, Rowe J, Tomkins-Tinch CH, Valieris R, Köster J; Bioconda Team. Bioconda: sustainable and comprehensive software distribution for the life sciences. Nat Methods. 2018 Jul;15(7):475-476. doi: 10.1038/s41592-018-0046-7. PubMed PMID: 29967506.

- [BioContainers](https://pubmed.ncbi.nlm.nih.gov/28379341/)

  > da Veiga Leprevost F, Grüning B, Aflitos SA, Röst HL, Uszkoreit J, Barsnes H, Vaudel M, Moreno P, Gatto L, Weber J, Bai M, Jimenez RC, Sachsenberg T, Pfeuffer J, Alvarez RV, Griss J, Nesvizhskii AI, Perez-Riverol Y. BioContainers: an open-source and community-driven framework for software standardization. Bioinformatics. 2017 Aug 15;33(16):2580-2582. doi: 10.1093/bioinformatics/btx192. PubMed PMID: 28379341; PubMed Central PMCID: PMC5870671.

- [Docker](https://dl.acm.org/doi/10.5555/2600239.2600241)

  > Merkel, D. (2014). Docker: lightweight linux containers for consistent development and deployment. Linux Journal, 2014(239), 2. doi: 10.5555/2600239.2600241.

- [Singularity](https://pubmed.ncbi.nlm.nih.gov/28494014/)

  > Kurtzer GM, Sochat V, Bauer MW. Singularity: Scientific containers for mobility of compute. PLoS One. 2017 May 11;12(5):e0177459. doi: 10.1371/journal.pone.0177459. eCollection 2017. PubMed PMID: 28494014; PubMed Central PMCID: PMC5426675.
GitHub Events
Total
- Issues event: 3
- Watch event: 2
- Issue comment event: 2
- Push event: 2
- Fork event: 1
Last Year
- Issues event: 3
- Watch event: 2
- Issue comment event: 2
- Push event: 2
- Fork event: 1
Dependencies
- actions/upload-artifact v3 composite
- seqeralabs/action-tower-launch v2 composite
- actions/upload-artifact v3 composite
- seqeralabs/action-tower-launch v2 composite
- mshick/add-pr-comment v1 composite
- actions/checkout v3 composite
- nf-core/setup-nextflow v1 composite
- actions/stale v7 composite
- actions/checkout v3 composite
- actions/setup-node v3 composite
- actions/checkout v3 composite
- actions/setup-node v3 composite
- actions/setup-python v4 composite
- actions/upload-artifact v3 composite
- mshick/add-pr-comment v1 composite
- nf-core/setup-nextflow v1 composite
- psf/black stable composite
- dawidd6/action-download-artifact v2 composite
- marocchino/sticky-pull-request-comment v2 composite
- ascat 3.1.1.*
- cancerit-allelecount 4.3.0.*