mhc-hammer

Pipeline to detect HLA disruption from WES and RNAseq data

https://github.com/mcgranahanlab/mhc-hammer

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 4 DOI reference(s) in README
  • Academic publication links
    Links to: pubmed.ncbi, ncbi.nlm.nih.gov, nature.com, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.8%) to scientific vocabulary
Last synced: 6 months ago

Repository

Pipeline to detect HLA disruption from WES and RNAseq data

Basic Info
  • Host: GitHub
  • Owner: McGranahanLab
  • License: other
  • Language: R
  • Default Branch: main
  • Size: 5 MB
Statistics
  • Stars: 16
  • Watchers: 3
  • Forks: 6
  • Open Issues: 5
  • Releases: 0
Created about 2 years ago · Last pushed about 1 year ago
Metadata Files
Readme License Citation

README.md

Introduction

Disruption of the class I human leukocyte antigen (HLA) molecules has important implications for immune evasion and tumor evolution. To evaluate the extent of genomic and transcriptomic HLA disruption, we developed MHC Hammer, which has the following four major components: (1) identifying allele-specific HLA somatic mutations, (2) calculating HLA LOH, (3) evaluating HLA allele-specific repression and (4) identifying allele-specific HLA alternative splicing.

diagram

You can find our MHC Hammer publication here: https://www.nature.com/articles/s41588-024-01883-8

MHC Hammer requires every patient to have a whole exome sequencing (WES) germline blood sample. In addition, MHC Hammer requires the following inputs:

To estimate DNA HLA allelic imbalance and somatic mutations:
  • A tumour WES BAM file.

To estimate DNA HLA copy number and LOH:
  • A tumour WES BAM file with purity and ploidy estimates.

To estimate RNA HLA allelic expression, allelic imbalance and alternative splicing:
  • A tumour or normal RNAseq BAM file.

To estimate RNA HLA allelic repression:
  • A tumour and normal RNAseq BAM file.

Pipeline overview

diagram

Steps before running the pipeline

1. Installing Nextflow and Singularity

  1. Install Nextflow (>=21.10.3)

  2. Install Singularity

2. Make an inventory file

You need to create an inventory file with the following columns:
  • patient - the patient name. MHC Hammer will replace spaces in the patient name with underscores. Required.
  • sample_name - the sample name. MHC Hammer will replace spaces in the sample name with underscores. Required.
  • sample_type - either tumour or normal. Required.
  • bam_path - full path to the WXS or RNAseq BAM file. Required.
  • sequencing_type - either wxs or rnaseq. Required.
  • purity - the purity of the tumour region. Can be left empty.
  • ploidy - the ploidy of the tumour region. Can be left empty.
  • normal_sample_name - when sequencing_type is wxs this is the matched germline WXS sample name. When sequencing_type is rnaseq this is the matched normal RNAseq sample name. Can be left empty.

The inventory should be a csv file and is input to the pipeline with the --input parameter.

The following is an example inventory for a single patient with:
  • two tumour regions with WXS (sample_name1 and sample_name2), one of which also has RNAseq (sample_name1)
  • one germline WXS sample (sample_name3)
  • one normal RNAseq sample (sample_name4)

| patient  | sample_name  | sample_type | bam_path                 | sequencing_type | purity | ploidy | normal_sample_name |
| :------: | :----------: | :---------: | :----------------------: | :-------------: | :----: | :----: | :----------------: |
| patient1 | sample_name1 | tumour      | path/to/sample_name1.bam | wxs             | 0.5    | 3      | sample_name3       |
| patient1 | sample_name2 | tumour      | path/to/sample_name2.bam | wxs             | 0.3    | 2.5    | sample_name3       |
| patient1 | sample_name3 | normal      | path/to/sample_name3.bam | wxs             |        |        |                    |
| patient1 | sample_name1 | tumour      | path/to/sample_name4.bam | rnaseq          |        |        | sample_name4       |
| patient1 | sample_name4 | normal      | path/to/sample_name5.bam | rnaseq          |        |        |                    |
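Since the inventory is input as a csv file, the example above could be written out as follows. This is a sketch: the snake_case column names are assumed from their usage elsewhere in this README (e.g. bam_path, hla_alleles_path), and the paths are placeholders to replace with full paths to your BAM files.

```shell
# Write the example inventory as a CSV (placeholder paths; replace with
# full paths to your own BAM files before running the pipeline).
cat > inventory.csv <<'EOF'
patient,sample_name,sample_type,bam_path,sequencing_type,purity,ploidy,normal_sample_name
patient1,sample_name1,tumour,path/to/sample_name1.bam,wxs,0.5,3,sample_name3
patient1,sample_name2,tumour,path/to/sample_name2.bam,wxs,0.3,2.5,sample_name3
patient1,sample_name3,normal,path/to/sample_name3.bam,wxs,,,
patient1,sample_name1,tumour,path/to/sample_name4.bam,rnaseq,,,sample_name4
patient1,sample_name4,normal,path/to/sample_name5.bam,rnaseq,,,
EOF
```

Note that optional fields (purity, ploidy, normal_sample_name) are simply left empty between commas.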

3. Clone this repo

```bash
git clone git@github.com:McGranahanLab/mhc-hammer.git
mkdir mhc-hammer/singularity_images
cd mhc-hammer
project_dir=${PWD}
```

4. Download the MHC Hammer reference files

The MHC Hammer reference files are created from sequences stored in the IMGT database. We have created MHC Hammer references from two IMGT versions:
  • version 3.38, which can be downloaded from https://zenodo.org/records/11059410
  • version 3.55, which can be downloaded from https://zenodo.org/records/12606532

This should download two folders, kmer_files and mhc_references. Save these folders in the assets folder:
  • assets/kmer_files/imgt_30mers.fa - this file contains all 30mers created from the sequences in the IMGT database. For an overview of how this file was created see docs/mhc_reference_files.md
  • assets/mhc_references - this folder contains the MHC reference files used in the MHC Hammer pipeline. For an overview of how these files were created see docs/mhc_reference_files.md

5. HLA allele typing

Every sample run through MHC Hammer requires HLA allele types. MHC Hammer provides three options for typing HLA alleles:
  1. Install HLA-HD locally. MHC Hammer will run the locally installed HLA-HD.
  2. Create a container containing HLA-HD. MHC Hammer will run HLA-HD using this container.
  3. Provide HLA allele types as an input to MHC Hammer; in this case MHC Hammer will not run HLA-HD.

The HLA allele types predicted by HLA-HD (option 1 or 2) or input to MHC Hammer (option 3) must match the alleles in the MHC Hammer reference files.

This means that if using HLA-HD within MHC Hammer (option 1 or 2), the reference version used by HLA-HD must be the same as the IMGT reference version used to create the MHC Hammer reference files. If HLA allele types are input to MHC Hammer, these allele types must be present in the MHC Hammer reference files. More information on this is provided below.

Option 1: Install HLA-HD and its dependencies locally (recommended)

The steps are as follows:
  1. On the HLA-HD website fill in the download request form to get a download link for HLA-HD.
  2. Move the downloaded hlahd.version.tar.gz file into the project bin directory: `mv /path/to/hlahd_download.tar.gz ${project_dir}/bin/`
  3. Run the install_hlahd.sh script. This script will:
    • install HLA-HD and bowtie2 (2.5.1) and store them in the ${project_dir}/bin/ directory.
    • update the HLA-HD allele dictionary to IMGT database version 3.55. This is the same IMGT version that was used to make the reference files which can be downloaded from https://zenodo.org/records/12606532

The install_hlahd.sh script requires:
  • g++, wget and python3 to be installed
  • the mhc_hammer_preprocessing_latest.sif container to be in the `${project_dir}/singularity_images/` folder (see below)
  • the `hlahd_download` variable to be set as the path to /path/to/hlahd_download.tar.gz

To download the mhc_hammer_preprocessing_latest.sif container:
```bash
cd ${project_dir}/singularity_images
singularity pull --arch amd64 library://tpjones15/default/mhc_hammer_preprocessing:latest
mhc_hammer_preprocessing_sif="${project_dir}/singularity_images/mhc_hammer_preprocessing_latest.sif"
```

Then, run install_hlahd.sh:
```bash
bash ${project_dir}/scripts/install_hlahd.sh -p ${project_dir} -h ${hlahd_download}
```

If you want to use a different version of the IMGT database with HLA-HD you can change line 14 in bin/update.dictionary.alt.sh to your chosen version of the IMGT database:

```bash
wget https://github.com/ANHIG/IMGTHLA/raw/3550/hla.dat.zip ## this downloads version 3.55

## For example, for version 3.38, replace the line above with:
wget https://github.com/ANHIG/IMGTHLA/raw/3380/hla.dat.zip

## Or, for the latest version:
wget https://github.com/ANHIG/IMGTHLA/raw/Latest/hla.dat.zip
```

**Remember that the HLA-HD database version should match the version used to create the files in the `assets/mhc_references` folder.**

  4. When running the pipeline ensure you run with --hlahd_local_install true (default)

Option 2: Create your own HLA-HD singularity container

We are unable to provide a singularity container for the HLA-HD tool. Instead, we have provided steps to create your own container:

  1. On the HLA-HD website fill in the download request form to get a download link for HLA-HD
  2. Edit the assets/hlahd_container_template.def file:
    • Update the /path/to/downloaded/hlahd.version.tar.gz in the %files section
    • Update the /path/to/project_dir/bin/update.dictionary.alt.sh in the %files section
    • Update the HLAHD_VERSION variable in the %post section
  3. Build the singularity image: `singularity build hlahd.sif assets/hlahd_container_template.def`
  4. Move the image into the singularity_images directory: `mv hlahd.sif singularity_images`
  5. When running the MHC Hammer pipeline ensure you run with --hlahd_local_install false.

If you want to use a different version of the IMGT database with HLA-HD you can change line 14 in bin/update.dictionary.alt.sh to your chosen version of the IMGT database before building the image:

```bash
wget https://github.com/ANHIG/IMGTHLA/raw/3550/hla.dat.zip ## this downloads version 3.55

## For example, for version 3.38, replace the line above with:
wget https://github.com/ANHIG/IMGTHLA/raw/3380/hla.dat.zip

## Or, for the latest version:
wget https://github.com/ANHIG/IMGTHLA/raw/Latest/hla.dat.zip
```

**Remember that the HLA-HD database version should match the version used to create the files in the `assets/mhc_references` folder.**

Option 3: Input HLA alleles to MHC Hammer

If you already have HLA allele types for your samples you can skip the HLA-HD step in the pipeline. To do this:
  • add a new column to the inventory called hla_alleles_path that contains the path to a csv file listing the HLA alleles. This table should have three columns with no column names:
    • Gene
    • Allele 1 type
    • Allele 2 type

An example of the file format can be found here: https://github.com/McGranahanLab/mhc-hammer/blob/main/test/data/SIM001_hla_alleles.csv

  • run the pipeline with the --run_hlahd false flag.

Remember that the alleles input to MHC Hammer must be present in the MHC Hammer reference files in the assets/mhc_references folder. You can get a list of alleles from the fasta file, e.g. `grep '^>' assets/mhc_references/mhc_genome.fasta`
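The check above can be scripted. This is a sketch, assuming the reference fasta headers begin with the allele name; it uses tiny stand-in files (ref.fasta, my_hla_alleles.csv, both hypothetical) so it runs anywhere. Point the variables at the real assets/mhc_references fasta and your own alleles csv instead.

```shell
# Stand-in files for illustration only; replace with the real paths.
printf '>hla_a_01_01\nACGT\n>hla_a_02_01\nACGT\n' > ref.fasta
printf 'A,hla_a_01_01,hla_a_99_99\n' > my_hla_alleles.csv
fasta=ref.fasta            # e.g. assets/mhc_references/mhc_genome.fasta
alleles_csv=my_hla_alleles.csv

# Extract allele names from the fasta headers.
grep '^>' "$fasta" | sed 's/^>//; s/ .*//' | sort > reference_alleles.txt

# Every allele in columns 2 and 3 of the csv should appear in the reference.
cut -d, -f2,3 "$alleles_csv" | tr ',' '\n' | sort -u |
while read -r allele; do
    grep -qx "$allele" reference_alleles.txt || echo "missing: $allele"
done > missing_alleles.txt
cat missing_alleles.txt
```

Any allele printed as missing would need to be re-typed against, or added to, the reference version you downloaded.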

6. Download Novoalign

MHC Hammer uses Novoalign to align putative HLA reads. Novoalign must be downloaded by the user and installed locally in order to run MHC Hammer. The steps to do this are as follows:
  1. Go to the Novocraft website.
  2. Download the desired version (V3.09.04 was used in the manuscript). Note that only versions V3 and earlier are available for users without a licence. Users with a licence can download later versions and make use of multi-threading when aligning reads. For more information on which version is applicable to you, please read the licensing information within the downloads page.
  3. Move the downloaded novocraft.version.tar.gz file into the bin directory: `mv /path/to/novocraft.version.tar.gz ${project_dir}/bin/`
  4. Download the mhc_hammer_preprocessing_latest.sif container:
```bash
cd ${project_dir}/singularity_images
singularity pull --arch amd64 library://tpjones15/default/mhc_hammer_preprocessing:latest
mhc_hammer_preprocessing_sif="${project_dir}/singularity_images/mhc_hammer_preprocessing_latest.sif"
cd ${project_dir}
```
  5. Run the [install_novoalign.sh](scripts/install_novoalign.sh) script:
```bash
novocraft_download=novocraft.version.tar.gz # full path or file name is ok
bash ${project_dir}/scripts/install_novoalign.sh -p ${project_dir} -h ${novocraft_download}
```
  6. If you are using a licensed version of Novocraft you can make use of multi-threading by updating the input parameters: `--licensed_novocraft true --novoalign_num_threads <max_number_of_threads_to_use>`

7. Update the HPC config files

The conf/hpc.config file controls how the pipeline is run on your HPC system. Before running the pipeline you may want to update the variables in conf/hpc.config to suit your HPC system. In particular, it might be useful to specify the singularity bind directory by adding

```bash
singularity {
    runOptions = "-B /bind_directory"
}
```

to conf/hpc.config, changing /bind_directory to your chosen path. You may also need to add the name of your HPC queue by adding

```bash
process {
    queue = 'cpu'
}
```

to conf/hpc.config, changing cpu to the name of the HPC queue that you are using.

Alternatively, if one exists, you can use a config file specific to your institute. See the Nextflow documentation for more information on Nextflow config files.

8. Update the MHC Hammer pipeline parameters

You can change the MHC Hammer pipeline parameters from the default in the nextflow.config file. Alternatively, you can change the parameters by inputting them directly when you run the pipeline. For a full overview of the pipeline parameters run:

```bash
nextflow run main.nf --help --show_hidden_params
```

Running the MHC Hammer pipeline

To run the MHC Hammer pipeline:

```bash
nextflow run main.nf \
    --input /path/to/inventory \
    -c conf/hpc.config \
    -resume
```

This command needs to be run from the project directory.

The -resume flag tells the pipeline not to rerun tasks that have successfully completed. See the Nextflow documentation for more information on Nextflow caching.

To change a pipeline parameter, either change the parameter in the nextflow.config file, or pass it directly as an input to the pipeline. Parameters input on the command line take precedence over parameters in the nextflow.config file. For example, to change the min_depth parameter:

```bash
nextflow run main.nf \
    --input /path/to/inventory \
    -c conf/hpc.config \
    --min_depth 5 \
    -resume
```

Running the MHC Hammer pipeline with subsetted BAM files and flagstat output

If you already have subsetted BAM files and flagstat output, you can input these to the MHC Hammer pipeline instead of rerunning these steps. To do this:
  • the bam_path column in the inventory file should contain the path to the subsetted BAM files
  • add a new column to the inventory called library_size_path that contains the path to a text file with the library size for the sample. This can be calculated from the flagstat output.
  • run the pipeline with the --run_bam_subsetting false flag.
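The library size file can be derived from the flagstat output. This is a sketch under two assumptions: that the file is a single number of reads (check the pipeline docs for the exact format MHC Hammer expects), and that the first flagstat line has the standard `<N> + <M> in total` form. A stand-in flagstat file is created here for illustration; use your real samtools flagstat output instead.

```shell
# Stand-in flagstat output (replace with your real flagstat file).
printf '1000 + 0 in total (QC-passed reads + QC-failed reads)\n' > sample.flagstat

# Take the QC-passed total from the first line as the library size.
awk 'NR==1{print $1}' sample.flagstat > sample_library_size.txt
cat sample_library_size.txt
```

The path to the resulting one-line file would then go in the library_size_path column of the inventory.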

MHC Hammer pipeline outputs

By default, the output is saved in the working directory in a folder called mhc_hammer_results. See docs/mhc_hammer_outputs.md for an overview of all outputs from MHC Hammer.

Test dataset

A test dataset is provided. The input BAMs and inventory are in the test/data folder. Note that you will need to update the inventory columns bam_path and hla_alleles_path so that they contain the full paths to the files.
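One way to fill in the full paths is a sed substitution prefixing the relative test/data paths with the repository root. This sketch uses a hypothetical one-line inventory (example_inventory.csv) so it is runnable anywhere; from the cloned repository root, apply the same substitution to test/data/mhc_hammer_test_inventory.csv.

```shell
# Hypothetical stand-in inventory with a relative bam_path entry.
echo 'patient1,s1,tumour,test/data/s1.bam,wxs,,,' > example_inventory.csv

# Prefix every relative test/data/ path with the current directory.
sed "s|test/data/|${PWD}/test/data/|g" \
    example_inventory.csv > example_inventory_full_paths.csv
```

The same command works for the hla_alleles_path column, since it replaces every occurrence of `test/data/` on each line.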

To run the pipeline with the test dataset, including the HLA-HD step:

```bash
nextflow run main.nf -profile test,singularity --input test/data/mhc_hammer_test_inventory.csv
```

To run the pipeline with the test dataset, without the HLA-HD step:

```bash
nextflow run main.nf -profile test,singularity --input test/data/mhc_hammer_test_inventory.csv --run_hlahd false
```

The output will be saved in the test/results folder.

Files downloaded in the assets directory

Files downloaded with the git repository:
  • codon_table.csv - contains a mapping between codons and amino acids; this is used to determine the consequence of alternative splicing events in the HLA alleles.
  • contigs_placeholder.txt - a placeholder for the subset BAM module. It will be ignored if the user inputs a new path to a contigs file.
  • hlahd_container_template.def - a template for making an HLA-HD singularity file.
  • mhc_coords_chr6.txt - these genomic coordinates can be used when subsetting the BAMs. Any reads falling within these coordinates are included in the subsetted BAMs.
  • strand_info.txt - contains a mapping between the HLA gene and the strand (forward = "+" or reverse = "-").
  • transcriptome_placeholder.txt - a placeholder so the pipeline will run with only WXS data.

Citations

This pipeline uses code and infrastructure developed and maintained by the nf-core initiative, and reused here under the MIT license.

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

Owner

  • Name: McGranahanLab
  • Login: McGranahanLab
  • Kind: organization

Citation (CITATIONS.md)

# McGranahanLab/mhc_hammer: Citations

## [nf-core](https://pubmed.ncbi.nlm.nih.gov/32055031/)

> Ewels PA, Peltzer A, Fillinger S, Patel H, Alneberg J, Wilm A, Garcia MU, Di Tommaso P, Nahnsen S. The nf-core framework for community-curated bioinformatics pipelines. Nat Biotechnol. 2020 Mar;38(3):276-278. doi: 10.1038/s41587-020-0439-x. PubMed PMID: 32055031.

## [Nextflow](https://pubmed.ncbi.nlm.nih.gov/28398311/)

> Di Tommaso P, Chatzou M, Floden EW, Barja PP, Palumbo E, Notredame C. Nextflow enables reproducible computational workflows. Nat Biotechnol. 2017 Apr 11;35(4):316-319. doi: 10.1038/nbt.3820. PubMed PMID: 28398311.

## [IMGT/HLA](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC29780/)

> Robinson J, Waller MJ, Parham P, Bodmer JG, Marsh SG. IMGT/HLA Database--a sequence database for the human major histocompatibility complex. Nucleic Acids Res. 2001 Jan 1;29(1):210-3. doi: 10.1093/nar/29.1.210. PMID: 11125094; PMCID: PMC29780.

## Pipeline tools

* [BCFTools](https://pubmed.ncbi.nlm.nih.gov/21903627/)
    > Li H. A statistical framework for SNP calling, mutation discovery, association mapping and population genetical parameter estimation from sequencing data. Bioinformatics. 2011 Nov 1;27(21):2987-93. doi: 10.1093/bioinformatics/btr509. Epub 2011 Sep 8. PMID: 21903627; PMCID: PMC3198575.

* [BEDTools](https://pubmed.ncbi.nlm.nih.gov/20110278/)
    > Quinlan AR, Hall IM. BEDTools: a flexible suite of utilities for comparing genomic features. Bioinformatics. 2010 Mar 15;26(6):841-2. doi: 10.1093/bioinformatics/btq033. Epub 2010 Jan 28. PubMed PMID: 20110278; PubMed Central PMCID: PMC2832824.

* [Bowtie2](https://pubmed.ncbi.nlm.nih.gov/22388286/)
    > Langmead B, Salzberg SL. Fast gapped-read alignment with Bowtie 2. Nat Methods. 2012 Mar 4;9(4):357-9. doi: 10.1038/nmeth.1923. PMID: 22388286; PMCID: PMC3322381.

* [ensembl-VEP](https://pubmed.ncbi.nlm.nih.gov/27268795/)
    > McLaren W, Gil L, Hunt SE, Riat HS, Ritchie GR, Thormann A, Flicek P, Cunningham F. The Ensembl Variant Effect Predictor. Genome Biol. 2016 Jun 6;17(1):122. doi: 10.1186/s13059-016-0974-4. PMID: 27268795; PMCID: PMC4893825.

* [FastQC](https://www.bioinformatics.babraham.ac.uk/projects/fastqc/)

* [GATK](https://pubmed.ncbi.nlm.nih.gov/20644199/)
    > McKenna A, Hanna M, Banks E, Sivachenko A, Cibulskis K, Kernytsky A, Garimella K, Altshuler D, Gabriel S, Daly M, DePristo MA. The Genome Analysis Toolkit: a MapReduce framework for analyzing next-generation DNA sequencing data. Genome Res. 2010 Sep;20(9):1297-303. doi: 10.1101/gr.107524.110. Epub 2010 Jul 19. PMID: 20644199; PMCID: PMC2928508.

* [HLA-HD](https://pubmed.ncbi.nlm.nih.gov/28419628/)
    > Kawaguchi S, Higasa K, Shimizu M, Yamada R, Matsuda F. HLA-HD: An accurate HLA typing algorithm for next-generation sequencing data. Hum Mutat. 2017 Jul;38(7):788-797. doi: 10.1002/humu.23230. Epub 2017 May 12. PMID: 28419628.

* [Jellyfish](https://pubmed.ncbi.nlm.nih.gov/21217122/)
    > Marçais G, Kingsford C. A fast, lock-free approach for efficient parallel counting of occurrences of k-mers. Bioinformatics. 2011 Mar 15;27(6):764-70. doi: 10.1093/bioinformatics/btr011. Epub 2011 Jan 7. PMID: 21217122; PMCID: PMC3051319.

* [Mosdepth](https://pubmed.ncbi.nlm.nih.gov/29096012/)
    > Pedersen BS, Quinlan AR. Mosdepth: quick coverage calculation for genomes and exomes. Bioinformatics. 2018 Mar 1;34(5):867-868. doi: 10.1093/bioinformatics/btx699. PMID: 29096012; PMCID: PMC6030888.

* [NovoAlign](https://www.novocraft.com/products/novoalign/)

* [picard-tools](http://broadinstitute.github.io/picard)

* [SAMtools](https://pubmed.ncbi.nlm.nih.gov/19505943/)
    > Li H, Handsaker B, Wysoker A, Fennell T, Ruan J, Homer N, Marth G, Abecasis G, Durbin R; 1000 Genome Project Data Processing Subgroup. The Sequence Alignment/Map format and SAMtools. Bioinformatics. 2009 Aug 15;25(16):2078-9. doi: 10.1093/bioinformatics/btp352. Epub 2009 Jun 8. PMID: 19505943; PMCID: PMC2723002.

* [STAR](https://pubmed.ncbi.nlm.nih.gov/23104886/)
    > Dobin A, Davis CA, Schlesinger F, Drenkow J, Zaleski C, Jha S, Batut P, Chaisson M, Gingeras TR. STAR: ultrafast universal RNA-seq aligner. Bioinformatics. 2013 Jan 1;29(1):15-21. doi: 10.1093/bioinformatics/bts635. Epub 2012 Oct 25. PMID: 23104886; PMCID: PMC3530905.

* [Tabix](https://academic.oup.com/bioinformatics/article/27/5/718/262743)
    > Heng Li, Tabix: fast retrieval of sequence features from generic TAB-delimited files, Bioinformatics, Volume 27, Issue 5, 1 March 2011, Pages 718–719

## R packages

* [R](https://www.r-project.org/)
    >  R Core Team (2020). R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria.

* [argparse](https://cran.r-project.org/package=argparse)
    >  Trevor L Davis (2022). argparse: Command Line Optional and Positional Argument Parser

* [Biostrings](https://bioconductor.org/packages/Biostrings)
    >  Pagès H, Aboyoun P, Gentleman R, DebRoy S. Biostrings: Efficient manipulation of biological strings

* [data.table](https://CRAN.R-project.org/package=data.table)
    >  Matt Dowle and Arun Srinivasan (2022). data.table: Extension of "data.frame".

* [deepSNV](https://pubmed.ncbi.nlm.nih.gov/22549840/)
    >  Gerstung M, Beisel C, Rechsteiner M, Wild P, Schraml P, Moch H, Beerenwinkel N. Reliable detection of subclonal single-nucleotide variants in tumour cell populations. Nat Commun. 2012 May 1;3:811. doi: 10.1038/ncomms1814. PMID: 22549840.

* [ggbeeswarm](https://CRAN.R-project.org/package=ggbeeswarm)
    >  Erik Clarke and Scott Sherrill-Mix (2017). ggbeeswarm: Categorical Scatter (Violin Point) Plots.

* [ggplot2](https://ggplot2.tidyverse.org)
    > H. Wickham. ggplot2: Elegant Graphics for Data Analysis. Springer-Verlag New York, 2016.

* [ggpubr](https://CRAN.R-project.org/package=ggpubr)
    >  Alboukadel Kassambara (2020). ggpubr: 'ggplot2' Based Publication Ready Plots.

* [Rsamtools](https://bioconductor.org/packages/Rsamtools)
    >  Martin Morgan, Hervé Pagès, Valerie Obenchain and Nathaniel Hayden (2020). Rsamtools: Binary alignment (BAM), FASTA, variant call (BCF), and tabix file import.

* [SeqinR](https://link.springer.com/chapter/10.1007/978-3-540-35306-5_10)
    >  Charif D, Lobry J (2007). “SeqinR 1.0-2: a contributed package to the R project for statistical computing devoted to biological sequences retrieval and analysis.” In Bastolla U, Porto M, Roman H, Vendruscolo M (eds.), Structural approaches to sequence evolution: Molecules, networks, populations, series Biological and Medical Physics, Biomedical Engineering, 207-232. Springer Verlag, New York. ISBN : 978-3-540-35305-8.
  
## Software packaging/containerisation tools

* [Singularity](https://pubmed.ncbi.nlm.nih.gov/28494014/)
    > Kurtzer GM, Sochat V, Bauer MW. Singularity: Scientific containers for mobility of compute. PLoS One. 2017 May 11;12(5):e0177459. doi: 10.1371/journal.pone.0177459. eCollection 2017. PubMed PMID: 28494014; PubMed Central PMCID: PMC5426675.

GitHub Events

Total
  • Issues event: 17
  • Watch event: 6
  • Delete event: 2
  • Issue comment event: 23
  • Push event: 6
  • Pull request event: 9
  • Fork event: 4
  • Create event: 2
Last Year
  • Issues event: 17
  • Watch event: 6
  • Delete event: 2
  • Issue comment event: 23
  • Push event: 6
  • Pull request event: 9
  • Fork event: 4
  • Create event: 2

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 9
  • Total pull requests: 1
  • Average time to close issues: 21 days
  • Average time to close pull requests: 1 minute
  • Total issue authors: 7
  • Total pull request authors: 1
  • Average comments per issue: 1.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 9
  • Pull requests: 1
  • Average time to close issues: 21 days
  • Average time to close pull requests: 1 minute
  • Issue authors: 7
  • Pull request authors: 1
  • Average comments per issue: 1.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • ilobon (3)
  • joshualausj (2)
  • ShengYou-L (2)
  • xuxingyubio (2)
  • nitishnih (1)
  • colinhercus (1)
  • Davte (1)
  • abdMalikAhmad (1)
  • Laurakathi1 (1)
  • wir963 (1)
  • vmurigneu (1)
  • anabbi (1)
  • harshsharma-cb (1)
Pull Request Authors
  • Davte (3)
  • tpjones15 (2)
  • clareputtick (1)
Top Labels
Issue Labels
Pull Request Labels

Dependencies

modules/nf-core/modules/custom/dumpsoftwareversions/meta.yml cpan