https://github.com/lczech/grenedalf-paper

Code for tests and benchmarks of our paper on grenedalf


Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.0%) to scientific vocabulary
Last synced: 4 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: lczech
  • License: gpl-3.0
  • Language: Python
  • Default Branch: master
  • Size: 53 MB
Statistics
  • Stars: 0
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 5 years ago · Last pushed over 1 year ago
Metadata Files
Readme License

README.md

grenedalf-paper

Code for tests and benchmarks of the paper on our tool grenedalf:

grenedalf: population genetic statistics for the next generation of pool sequencing.
Lucas Czech, Jeffrey P. Spence, Moisés Expósito-Alonso.
Bioinformatics, 2024. doi:10.1093/bioinformatics/btae508

Here we provide test scripts to benchmark grenedalf against existing tools:

See the software directory here for their setup. For plotting, we additionally need some Python tools, as specified in the common/conda.yaml file; the exact versions listed there are required.
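The environment from common/conda.yaml can be set up as follows. This is a minimal sketch of the usual conda workflow; the environment name "grenedalf-paper" is an assumption here, as conda will use whatever name is declared inside the YAML file itself.

```shell
# Create the plotting environment from the pinned spec,
# so that the exact package versions are used.
conda env create -f common/conda.yaml

# Activate it before running the plotting scripts.
# (Name is an assumption; check the "name:" field in the YAML.)
conda activate grenedalf-paper
```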

We run the following tests here:

  • benchmark-grenenet: Benchmarks on real-world data from GrENE-net, subsetting one or two files to increasing numbers of positions to show scaling with respect to the genome length.
  • benchmark-random: Simple benchmarks based on randomly generated files, as a lower boundary of how much faster grenedalf is compared to its competitors.
  • benchmark-samples: Benchmarks on real-world data from GrENE-net, increasing the number of files to show scaling with respect to the number of samples.
  • benchmark-scaling: Benchmarks for strong and weak scaling of grenedalf on multi-core systems, with a small dataset.
  • benchmark-scaling-fst: Benchmarks for strong and weak scaling of grenedalf on multi-core systems, with a larger dataset that shows better scaling.

Furthermore, we have some auxiliary tests and comparisons:

  • eval-bug-exam: Examination of the two bugs in PoPoolation's Tajima's D implementation.
  • eval-corr-grenenet: Test how the results from grenedalf correlate with those of other tools.
  • eval-fst-biases: Evaluation of the biases of different Pool-seq estimators of FST, as shown in our equations document.
  • eval-grenenet: Quick test to assess the overall gain of grenedalf for our GrENE-net project.
  • eval-independent-test: An independent bare-bone Python implementation of our equations, to check that the results of grenedalf are exactly as expected.
  • example-cathedral: A prototype implementation of the cathedral plot for FST.
  • example-fst-ordination: A simple large-scale example of using grenedalf on thousands of samples.

See the respective subdirectories for details.
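To illustrate the flavor of an independent bare-bone check such as eval-independent-test, here is a toy per-site diversity calculation. This is only the classic n/(n-1) small-sample correction applied to allele counts; it deliberately omits the additional pool-size and coverage corrections from our equations document, so it is an illustration of the approach, not the equations that grenedalf implements.

```python
def site_heterozygosity(counts):
    """Per-site nucleotide diversity from allele (read) counts at one site,
    using the classic n/(n-1) bias correction: pi = n/(n-1) * (1 - sum p_i^2).
    Note: this omits the pool-sequencing corrections used by grenedalf."""
    n = sum(counts)
    if n < 2:
        # Not enough reads to estimate diversity at this site.
        return 0.0
    sum_sq_freqs = sum((c / n) ** 2 for c in counts)
    return n / (n - 1) * (1 - sum_sq_freqs)
```

For example, a site with read counts [5, 5, 0, 0] for the four nucleotides gives pi = 10/9 * (1 - 0.5) = 5/9, while a monomorphic site gives 0.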

Owner

  • Name: Lucas Czech
  • Login: lczech
  • Kind: user
  • Location: Stanford, USA
  • Company: Carnegie Institution for Science

Postdoc in bioinformatics :seedling: and computer scientist :octocat: working on inter-disciplinary ways to save the planet :earth_africa:

GitHub Events

Issues and Pull Requests

Last synced: 11 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0