glotzerlab_citations

Collection of citations commonly used in papers in the Glotzer group

https://github.com/glotzerlab/glotzerlab_citations


Keywords from Contributors

computational-geometry geometry physics polygons polyhedra shapes

Repository


Basic Info
  • Host: GitHub
  • Owner: glotzerlab
  • License: cc0-1.0
  • Language: TeX
  • Default Branch: master
  • Size: 197 KB
Statistics
  • Stars: 2
  • Watchers: 6
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 6 years ago · Last pushed almost 6 years ago
Metadata Files
Readme License Citation

README.md

Citing software.

This repository contains a guide on how to cite standard software used by the Glotzer lab at the University of Michigan. The repository contains a BibTeX .bib file suitable for copying directly into the bibliography of any paper. It includes standard tools in the SciPy stack as well as more specialized tools used by the Glotzer group. A minimal LaTeX file is included to simplify generating a PDF (included in the repo as citations.pdf) with formatted citations, in case a fully formatted citation is needed for pasting into a document editor like Microsoft Word.
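For reference, a minimal document drawing on the bib file might look like the following sketch (the document body and the `plain` bibliography style here are illustrative, not taken from the repo's tex file):

```latex
% Illustrative sketch only; the repository's own tex file may differ.
\documentclass{article}
\begin{document}
Simulations were conducted using the HOOMD-blue simulation
toolkit \cite{Anderson2020}.
\bibliographystyle{plain}  % any standard BibTeX style works
\bibliography{citations}   % citations.bib from this repository
\end{document}
```

Compile with pdflatex, then bibtex, then pdflatex twice to resolve the citation.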

Some general guidelines:

  • Almost all papers in our group should include citations of NumPy and HOOMD-blue.
  • Most papers should also cite signac.
  • Most group members use Matplotlib for plot generation and should cite it.
  • Users of freud and HOOMD-blue should check the lists below to make sure they are also citing the papers corresponding to the specific features they use. Common examples include HPMC, PMFTs, and environment matching.
  • Pandas, SciPy, and Scikit-learn are commonly used for specific applications, so if you use them in your research please copy those citations from below.
  • Anyone directly performing quaternion operations in their scripts should cite rowan.

Recommended in-text citations

The following are sentences that can be reasonably copy-pasted into either the introduction, conclusion, or acknowledgments section of a LaTeX manuscript (assuming citations.bib is copied into the paper's bibliography file).

  • This work makes use of NumPy \cite{Oliphant2006, vanderWalt2011} and simulations were conducted using the HOOMD-blue simulation toolkit \cite{Anderson2020}.
  • All figures in this paper were generated with Matplotlib \cite{Hunter2007} unless otherwise noted.
  • Data and workflows were managed using the signac framework \cite{Adorf2018a,Ramasubramani2018b}.

Specific software citations

The following lists provide the LaTeX citation keys that should be used for each document.

SciPy Stack

  • NumPy: Oliphant2006, vanderWalt2011
  • Matplotlib: Hunter2007
  • SciPy: Virtanen2020
  • Pandas: McKinney2010
  • Scikit-learn: Pedregosa2011
  • Cython: Behnel2011
  • Mayavi: Ramachandran2011
  • TensorFlow: Abadi2015

HOOMD-blue

The generic HOOMD-blue citation (the first item below) should be used in any paper that uses HOOMD-blue. The remaining citations are feature-specific and should be included based on which features of HOOMD-blue are used in a given manuscript.

  • HOOMD-blue: Anderson2020
  • HPMC: Anderson2016
  • Depletion: Glaser2015b
  • MPI Domain decomposition: Glaser2015a
  • Intra-node scaling on multiple GPUs: Glaser2020a
  • DEM: Spellings2017
  • Rigid bodies: Nguyen2011, Glaser2020
  • PPPM: Lebard2012
  • DPD: Phillips2011
  • Tree or stencil neighbor list: Howard2016

signac

  • Adorf2018a, Ramasubramani2018b

freud

The generic freud citation should be used in any paper that uses freud. The following citations are feature-specific and should be included based on what features of freud are used in a given manuscript.

  • freud: Ramasubramani2020
  • ML/visualization: Dice2019
  • Steinhardt OPs: Steinhardt1983
  • Neighbor-averaged Steinhardt OPs: Lechner2008
  • PMFTs: VanAnders2014c, vanAnders2014d
  • Environment Matching: Teich2019
  • MSDs: Calandrini2011
  • Voronoi: Rycroft2009
  • Cubatic OP: Haji-Akbari2015
  • Rotational autocorrelation: Karas2019

fresnel

There is currently no citation for fresnel.

rowan

  • Ramasubramani2018

Contributing

To add new citations, update the citations.bib file. Citation keys should be in AuthorYear format (with additional letters as needed for disambiguation, e.g. "Anderson2018a"). If the citation is related to an existing software package, such as a new feature for HOOMD-blue, place it in the appropriate section of the bib file. Then update the Specific Software Citations section of this document. Finally, rerun LaTeX on the provided tex file to regenerate the PDF containing formatted citations.
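As an illustration of the key convention, a new entry could be structured like the skeleton below (every field is a hypothetical placeholder, not a real reference):

```bibtex
% Hypothetical skeleton; replace all fields with the publication's real metadata.
@article{Author2024a,
	author  = {Author, First M.},
	title   = {{Example Title of the Paper}},
	journal = {Journal Name},
	volume  = {1},
	pages   = {1--10},
	year    = {2024},
	doi     = {10.0000/example}
}
```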

License

Written in 2020 by Vyas Ramasubramani (vramasub@umich.edu). To the extent possible under law, the author(s) have dedicated all copyright and related and neighboring rights to the public domain worldwide. This content is distributed without any warranty. You should have received a copy of the CC0 Public Domain Dedication along with this repository. If not, see http://creativecommons.org/publicdomain/zero/1.0/.

Owner

  • Name: Glotzer Group
  • Login: glotzerlab
  • Kind: organization
  • Location: University of Michigan

We develop molecular simulation tools to study the self-assembly of complex materials and explore matter at the nanoscale.

Citation (citations.bib)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%% SciPy Stack %%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% NumPy

@book{Oliphant2006,
	author = {Oliphant, Travis E.},
	publisher = {Trelgol Publishing},
	title = {{A guide to NumPy}},
	year = {2006}
}

@article{vanderWalt2011,
	author={S. {van der Walt} and S. C. {Colbert} and G. {Varoquaux}},
	journal={Computing in Science {\&} Engineering},
	title={The NumPy Array: A Structure for Efficient Numerical Computation},
	year={2011},
	volume={13},
	number={2},
	pages={22-30}
}

% Matplotlib

@article{Hunter2007,
	author = {Hunter, John D.},
	doi = {10.1109/MCSE.2007.55},
	journal = {Computing in Science {\&} Engineering},
	number = {3},
	pages = {90--95},
	title = {{Matplotlib: A 2D Graphics Environment}},
	volume = {9},
	year = {2007}
}

% SciPy

@ARTICLE{Virtanen2020,
	author = {{Virtanen}, Pauli and {Gommers}, Ralf and {Oliphant},
		   Travis E. and {Haberland}, Matt and {Reddy}, Tyler and
		   {Cournapeau}, David and {Burovski}, Evgeni and {Peterson}, Pearu
		   and {Weckesser}, Warren and {Bright}, Jonathan and {van der Walt},
		   St{\'e}fan J.  and {Brett}, Matthew and {Wilson}, Joshua and
		   {Jarrod Millman}, K.  and {Mayorov}, Nikolay and {Nelson}, Andrew
		   R.~J. and {Jones}, Eric and {Kern}, Robert and {Larson}, Eric and
		   {Carey}, CJ and {Polat}, {\.I}lhan and {Feng}, Yu and {Moore},
		   Eric W. and {VanderPlas}, Jake and {Laxalde}, Denis and
		   {Perktold}, Josef and {Cimrman}, Robert and {Henriksen}, Ian and
		   {Quintero}, E.~A. and {Harris}, Charles R and {Archibald}, Anne M.
		   and {Ribeiro}, Ant{\^o}nio H. and {Pedregosa}, Fabian and
		   {van Mulbregt}, Paul and {SciPy 1.0 Contributors}},
	title = "{SciPy 1.0: Fundamental Algorithms for Scientific
		   Computing in Python}",
	journal = {Nature Methods},
	year = "2020",
	volume={17},
	pages={261--272},
	url = {https://rdcu.be/b08Wh},
	doi = {10.1038/s41592-019-0686-2},
}

% Pandas

@inproceedings{McKinney2010,
	author = {McKinney, Wes},
	booktitle = {Proceedings of the 9th Python in Science Conference},
	doi = {10.25080/Majora-92bf1922-00a},
	pages = {56--61},
	title = {{Data Structures for Statistical Computing in Python}},
	url = {https://conference.scipy.org/proceedings/scipy2010/mckinney.html},
	year = {2010}
}

% Scikit-learn

@article{Pedregosa2011,
	author  = {Fabian Pedregosa and Ga{{\"e}}l Varoquaux and Alexandre Gramfort and Vincent Michel and Bertrand Thirion and Olivier Grisel and Mathieu Blondel and Peter Prettenhofer and Ron Weiss and Vincent Dubourg and Jake Vanderplas and Alexandre Passos and David Cournapeau and Matthieu Brucher and Matthieu Perrot and {{\'E}}douard Duchesnay},
	title   = {Scikit-learn: Machine Learning in Python},
	journal = {Journal of Machine Learning Research},
	year    = {2011},
	volume  = {12},
	number  = {85},
	pages   = {2825-2830},
	url     = {http://jmlr.org/papers/v12/pedregosa11a.html}
}

% Cython

@article{Behnel2011,
	abstract = {Cython is a Python language extension that allows explicit type declarations and is compiled directly to C. As such, it addresses Python},
	author = {Behnel, Stefan and Bradshaw, Robert and Citro, Craig and Dalcin, Lisandro and Seljebotn, Dag Sverre and Smith, Kurt},
	doi = {10.1109/MCSE.2010.118},
	issn = {1521-9615},
	journal = {Computing in Science {\&} Engineering},
	keywords = {Cython,Python,numerics,scientific computing},
	month = {mar},
	number = {2},
	pages = {31--39},
	publisher = {IEEE Computer Society},
	title = {{Cython: The Best of Both Worlds}},
	volume = {13},
	year = {2011}
}

% Mayavi

@article{Ramachandran2011,
	author = {Ramachandran, Prabhu and Varoquaux, Gael},
	doi = {10.1109/MCSE.2011.35},
	journal = {Computing in Science {\&} Engineering},
	month = {mar},
	number = {2},
	pages = {40--51},
	title = {{Mayavi: 3D Visualization of Scientific Data}},
	volume = {13},
	year = {2011}
}

% TensorFlow
% copied from https://www.tensorflow.org/about/bib

@article{Abadi2015,
title={ {TensorFlow}: Large-Scale Machine Learning on Heterogeneous Systems},
url={https://www.tensorflow.org/},
note={Software available from tensorflow.org},
author={
    Mart\'{\i}n~Abadi and
    Ashish~Agarwal and
    Paul~Barham and
    Eugene~Brevdo and
    Zhifeng~Chen and
    Craig~Citro and
    Greg~S.~Corrado and
    Andy~Davis and
    Jeffrey~Dean and
    Matthieu~Devin and
    Sanjay~Ghemawat and
    Ian~Goodfellow and
    Andrew~Harp and
    Geoffrey~Irving and
    Michael~Isard and
    Yangqing Jia and
    Rafal~Jozefowicz and
    Lukasz~Kaiser and
    Manjunath~Kudlur and
    Josh~Levenberg and
    Dandelion~Man\'{e} and
    Rajat~Monga and
    Sherry~Moore and
    Derek~Murray and
    Chris~Olah and
    Mike~Schuster and
    Jonathon~Shlens and
    Benoit~Steiner and
    Ilya~Sutskever and
    Kunal~Talwar and
    Paul~Tucker and
    Vincent~Vanhoucke and
    Vijay~Vasudevan and
    Fernanda~Vi\'{e}gas and
    Oriol~Vinyals and
    Pete~Warden and
    Martin~Wattenberg and
    Martin~Wicke and
    Yuan~Yu and
    Xiaoqiang~Zheng},
  year={2015},
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%% HOOMD-blue %%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% HOOMD

@article{Anderson2008,
	abstract = {Graphics processing units (GPUs), originally developed for rendering real-time effects in computer games, now provide unprecedented computational power for scientific applications. In this paper, we develop a general purpose molecular dynamics code that runs entirely on a single GPU. It is shown that our GPU implementation provides a performance equivalent to that of fast 30 processor core distributed memory cluster. Our results show that GPUs already provide an inexpensive alternative to such clusters and discuss implications for the future.},
	author = {Anderson, Joshua A. and Lorenz, Chris D. and Travesset, A.},
	doi = {10.1016/J.JCP.2008.01.047},
	issn = {0021-9991},
	journal = {Journal of Computational Physics},
	month = {may},
	number = {10},
	pages = {5342--5359},
	publisher = {Academic Press},
	title = {{General purpose molecular dynamics simulations fully implemented on graphics processing units}},
	volume = {227},
	year = {2008}
}

@article{Anderson2020,
	title = "HOOMD-blue: A Python package for high-performance molecular dynamics and hard particle Monte Carlo simulations",
	journal = "Computational Materials Science",
	volume = "173",
	pages = "109363",
	year = "2020",
	issn = "0927-0256",
	doi = "10.1016/j.commatsci.2019.109363",
	url = "http://www.sciencedirect.com/science/article/pii/S0927025619306627",
	author = "Joshua A. Anderson and Jens Glaser and Sharon C. Glotzer",
	keywords = "Python, Molecular dynamics, Monte Carlo, Molecular simulation, GPU, CUDA",
	abstract = "HOOMD-blue is a particle simulation engine designed for nano- and colloidal-scale molecular dynamics and hard particle Monte Carlo simulations. It has been actively developed since March 2007 and available open source since August 2008. HOOMD-blue is a Python package with a high performance C++/CUDA backend that we built from the ground up for GPU acceleration. The Python interface allows users to combine HOOMD-blue with other packages in the Python ecosystem to create simulation and analysis workflows. We employ software engineering practices to develop, test, maintain, and expand the code."
}

% MPI domain decomposition

@article{Glaser2015a,
	abstract = {We describe a highly optimized implementation of MPI domain decomposition in a GPU-enabled, general-purpose molecular dynamics code, HOOMD-blue (Anderson and Glotzer, 2013). Our approach is inspired by a traditional CPU-based code, LAMMPS (Plimpton, 1995), but is implemented within a code that was designed for execution on GPUs from the start (Anderson et al., 2008). The software supports short-ranged pair force and bond force fields and achieves optimal GPU performance using an autotuning algorithm. We are able to demonstrate equivalent or superior scaling on up to 3375 GPUs in Lennard-Jones and dissipative particle dynamics (DPD) simulations of up to 108 million particles. GPUDirect RDMA capabilities in recent GPU generations provide better performance in full double precision calculations. For a representative polymer physics application, HOOMD-blue 1.0 provides an effective GPU vs. CPU node speed-up of 12.5×.},
	author = {Glaser, Jens and Nguyen, Trung Dac and Anderson, Joshua A. and Lui, Pak and Spiga, Filippo and Millan, Jaime A. and Morse, David C. and Glotzer, Sharon C.},
	doi = {10.1016/J.CPC.2015.02.028},
	issn = {0010-4655},
	journal = {Computer Physics Communications},
	month = {jul},
	pages = {97--107},
	publisher = {North-Holland},
	title = {{Strong scaling of general-purpose molecular dynamics simulations on GPUs}},
	volume = {192},
	year = {2015}
}

% HPMC

@article{Anderson2016,
	abstract = {We design and implement a scalable hard particle Monte Carlo simulation toolkit (HPMC), and release it open source as part of HOOMD-blue. HPMC runs in parallel on many CPUs and many GPUs using domain decomposition. We employ BVH trees instead of cell lists on the CPU for fast performance, especially with large particle size disparity, and optimize inner loops with SIMD vector intrinsics on the CPU. Our GPU kernel proposes many trial moves in parallel on a checkerboard and uses a block-level queue to redistribute work among threads and avoid divergence. HPMC supports a wide variety of shape classes, including spheres/disks, unions of spheres, convex polygons, convex spheropolygons, concave polygons, ellipsoids/ellipses, convex polyhedra, convex spheropolyhedra, spheres cut by planes, and concave polyhedra. NVT and NPT ensembles can be run in 2D or 3D triclinic boxes. Additional integration schemes permit Frenkel-Ladd free energy computations and implicit depletant simulations. In a benchmark system of a fluid of 4096 pentagons, HPMC performs 10 million sweeps in 10 min on 96 CPU cores on XSEDE Comet. The same simulation would take 7.6 h in serial. HPMC also scales to large system sizes, and the same benchmark with 16.8 million particles runs in 1.4 h on 2048 GPUs on OLCF Titan.},
	author = {Anderson, Joshua A. and Irrgang, M. Eric and Glotzer, Sharon C.},
	doi = {10.1016/j.cpc.2016.02.024},
	issn = {00104655},
	journal = {Computer Physics Communications},
	keywords = {GPU,Hard particle,Monte Carlo},
	month = {jul},
	pages = {21--30},
	publisher = {North-Holland},
	title = {{Scalable Metropolis Monte Carlo for simulation of hard shapes}},
	volume = {204},
	year = {2016}
}

% Rigid bodies

@article{Nguyen2011,
	abstract = {Molecular dynamics (MD) methods compute the trajectory of a system of point particles in response to a potential function by numerically integrating Newton's equations of motion. Extending these basic methods with rigid body constraints enables composite particles with complex shapes such as anisotropic nanoparticles, grains, molecules, and rigid proteins to be modeled. Rigid body constraints are added to the GPU-accelerated MD package, HOOMD-blue, version 0.10.0. The software can now simulate systems of particles, rigid bodies, or mixed systems in microcanonical (NVE), canonical (NVT), and isothermal-isobaric (NPT) ensembles. It can also apply the FIRE energy minimization technique to these systems. In this paper, we detail the massively parallel scheme that implements these algorithms and discuss how our design is tuned for the maximum possible performance. Two different case studies are included to demonstrate the performance attained, patchy spheres and tethered nanorods. In typical cases, HOOMD-blue on a single GTX 480 executes 2.5-3.6 times faster than LAMMPS executing the same simulation on any number of CPU cores in parallel. Simulations with rigid bodies may now be run with larger systems and for longer time scales on a single workstation than was previously even possible on large clusters. {\textcopyright} 2011 Elsevier B.V. All rights reserved.},
	author = {Nguyen, Trung Dac and Phillips, Carolyn L. and Anderson, Joshua A. and Glotzer, Sharon C.},
	doi = {10.1016/j.cpc.2011.06.005},
	issn = {00104655},
	journal = {Computer Physics Communications},
	keywords = {CUDA,GPGPU,GPU,Molecular dynamics,Rigid body},
	month = {nov},
	number = {11},
	pages = {2307--2313},
	publisher = {North-Holland},
	title = {{Rigid body constraints realized in massively-parallel molecular dynamics on graphics processing units}},
	volume = {182},
	year = {2011}
}

@article{Glaser2020,
	title = "Pressure in rigid body molecular dynamics",
	journal = "Computational Materials Science",
	volume = "173",
	pages = "109430",
	year = "2020",
	issn = "0927-0256",
	doi = "10.1016/j.commatsci.2019.109430",
	url = "http://www.sciencedirect.com/science/article/pii/S0927025619307293",
	author = "Jens Glaser and Xun Zha and Joshua A. Anderson and Sharon C. Glotzer and Alex Travesset",
	keywords = "Molecular dynamics, Pressure, Rigid bodies, Nanoparticles, SPC/E water model",
	abstract = "We present a detailed derivation of the expression for the pressure in MD simulations that contain rigid bodies, where two equivalent formulations have been developed. One of these formulations was used in HOOMD-blue v1.x, but implemented incorrectly. We point out the precise reason for this implementation issue, the difference with the current and correct implementation in HOOMD-blue v2.x, and lessons learned. We perform numerical validation tests using dumbbell models, a mixture of cubic and spherical particles, and the SPC/E water model."
}

% Intra-node scaling on multiple GPUs

@article{Glaser2020a,
	abstract = {Current supercomputer designs rely on increasing the compute density inside a node to maximize the performance of applications that tightly integrate the processors within a shared memory space. HOOMD-blue 2.5 enables molecular dynamics simulations that take advantage of multiple GPUs inside the same node which are connected via NVLINK. We describe the native implementation of CUDA unified memory in HOOMD-blue for strong scaling on this hardware, and provide performance benchmarks.},
	author = {Glaser, Jens and Schwendeman, Peter S. and Anderson, Joshua A. and Glotzer, Sharon C.},
	doi = {10.1016/j.commatsci.2019.109359},
	issn = {09270256},
	journal = {Computational Materials Science},
	keywords = {CUDA,GPUs,Molecular dynamics,NVLINK,Rigid bodies,Unified memory},
	pages = {109359},
	title = {{Unified memory in HOOMD-blue improves node-level strong scaling}},
	url = {https://linkinghub.elsevier.com/retrieve/pii/S0927025619306585},
	volume = {173},
	year = {2020}
}

% PPPM

@article{Lebard2012,
	abstract = {Due to the relatively long time scales inherent to ionic surfactant self-assembly ({\textgreater}$\mu$s), an aggressive computational approach is needed to obtain converged data on micellar solutions. This work presents a study of micellization using a coarse-grained (CG) model of aqueous ionic surfactants in replicated molecular dynamics (MD) simulations run on graphics processing unit hardware. The performance of our implementation of the CG model with electrostatics into the HOOMD-Blue GPU-accelerated MD software package is comparable to that of a modest sized cluster running a highly optimized parallel CPU code. From 0.36 ms of cumulative trajectory data, we are able to predict equilibrium thermodynamic and morphological properties of ionic surfactant micellar solutions. Estimating the critical micelle concentrations (CMC) from the free monomer ($\rho$ 1) and premicellar concentrations obtained from simulations of sodium hexyl sulfate (S6S, CMC of 460 ± 6 mM) at high (1 M) concentration, a value in good agreement with experimental results is obtained; however, the same method applied to simulations of sodium nonyl sulfate (S9S, $\rho$ 1 of 2.4 ± 0.01 mM) and sodium dodecyl sulfate (SDS, $\rho$ 1 of 0.02 ± 0.01 mM) at the same total concentration systematically underestimates the CMCs. An alternative method for calculating the CMC is presented, where the free monomer concentration computed from high concentration CG-MD data is used as the input to a simple theoretical model which can be used to extrapolate to a more accurate prediction of the CMC. Better agreement between the empirical and predicted CMC is obtained from this theory for S9S (28.7 ± 0.3 mM) and SDS (3.32 ± 0.04 mM), though the CMC for S6S is slightly underestimated (304 ± 3 mM). We also present statistically converged morphological data, including aggregation number distributions and the principal components of the gyration tensor. 
This data suggest a transition from spherical micelles to rod-like at a specific aggregation number, which increases with increasing hydrocarbon length. {\textcopyright} 2012 The Royal Society of Chemistry.},
	author = {Lebard, David N. and Levine, Benjamin G. and Mertmann, Philipp and Barr, Stephen A. and Jusufi, Arben and Sanders, Samantha and Klein, Michael L. and Panagiotopoulos, Athanassios Z.},
	doi = {10.1039/c1sm06787g},
	issn = {1744683X},
	journal = {Soft Matter},
	month = {feb},
	number = {8},
	pages = {2385--2397},
	publisher = {The Royal Society of Chemistry},
	title = {{Self-assembly of coarse-grained ionic surfactants accelerated by graphics processing units}},
	volume = {8},
	year = {2012}
}

% DEM

@article{Spellings2017,
	abstract = {Faceted shapes, such as polyhedra, are commonly found in systems of nanoscale, colloidal, and granular particles. Many interesting physical phenomena, like crystal nucleation and growth, vacancy motion, and glassy dynamics are challenging to model in these systems because they require detailed dynamical information at the individual particle level. Within the granular materials community the Discrete Element Method has been used extensively to model systems of anisotropic particles under gravity, with friction. We provide an implementation of this method intended for simulation of hard, faceted nanoparticles, with a conservative Weeks–Chandler–Andersen (WCA) interparticle potential, coupled to a thermodynamic ensemble. This method is a natural extension of classical molecular dynamics and enables rigorous thermodynamic calculations for faceted particles.},
	archivePrefix = {arXiv},
	arxivId = {1607.02427},
	author = {Spellings, Matthew and Marson, Ryan L. and Anderson, Joshua A. and Glotzer, Sharon C.},
	doi = {10.1016/j.jcp.2017.01.014},
	eprint = {1607.02427},
	issn = {10902716},
	journal = {Journal of Computational Physics},
	keywords = {Anisotropy,Discrete Element Method,GPU,Molecular dynamics},
	month = {apr},
	pages = {460--467},
	publisher = {Academic Press Inc.},
	title = {{GPU accelerated Discrete Element Method (DEM) molecular dynamics for conservative, faceted particle simulations}},
	volume = {334},
	year = {2017}
}

% Depletion

@article{Glaser2015b,
	abstract = {We present an algorithm to simulate the many-body depletion interaction between anisotropic colloids in an implicit way, integrating out the degrees of freedom of the depletants, which we treat as an ideal gas. Because the depletant particles are statistically independent and the depletion interaction is short-ranged, depletants are randomly inserted in parallel into the excluded volume surrounding a single translated and/or rotated colloid. A configurational bias scheme is used to enhance the acceptance rate. The method is validated and benchmarked both on multi-core processors and graphics processing units for the case of hard spheres, hemispheres, and discoids. With depletants, we report novel cluster phases in which hemispheres first assemble into spheres, which then form ordered hcp/fcc lattices. The method is significantly faster than any method without cluster moves and that tracks depletants explicitly, for systems of colloid packing fraction $\phi$c {\textless} 0.50, and additionally enables simulation of the fluid-solid transition.},
	author = {Glaser, Jens and Karas, Andrew S. and Glotzer, Sharon C.},
	doi = {10.1063/1.4935175},
	issn = {00219606},
	journal = {Journal of Chemical Physics},
	keywords = {colloids,graphics processing units,parallel algorithms,solid-liquid transformations},
	month = {nov},
	number = {18},
	pages = {184110},
	publisher = {American Institute of Physics Inc.},
	title = {{A parallel algorithm for implicit depletant simulations}},
	url = {http://aip.scitation.org/doi/10.1063/1.4935175},
	volume = {143},
	year = {2015}
}

% Tree/stencil neighbor lists

@article{Howard2016,
	abstract = {We present an algorithm based on linear bounding volume hierarchies (LBVHs) for computing neighbor (Verlet) lists using graphics processing units (GPUs) for colloidal systems characterized by large size disparities. We compare this to a GPU implementation of the current state-of-the-art CPU algorithm based on stenciled cell lists. We report benchmarks for both neighbor list algorithms in a Lennard-Jones binary mixture with synthetic interaction range disparity and a realistic colloid solution. LBVHs outperformed the stenciled cell lists for systems with moderate or large size disparity and dilute or semidilute fractions of large particles, conditions typical of colloidal systems.},
	author = {Howard, Michael P. and Anderson, Joshua A. and Nikoubashman, Arash and Glotzer, Sharon C. and Panagiotopoulos, Athanassios Z.},
	doi = {10.1016/j.cpc.2016.02.003},
	issn = {00104655},
	journal = {Computer Physics Communications},
	keywords = {Bounding volume hierarchy,Colloid,GPU,Molecular simulation,Neighbor list,Non-uniform,Size disparity},
	month = {jun},
	pages = {45--52},
	publisher = {Elsevier B.V.},
	title = {{Efficient neighbor list calculation for molecular simulation of colloidal systems using graphics processing units}},
	volume = {203},
	year = {2016}
}

% DPD

@article{Phillips2011,
	abstract = {Brownian Dynamics (BD), also known as Langevin Dynamics, and Dissipative Particle Dynamics (DPD) are implicit solvent methods commonly used in models of soft matter and biomolecular systems. The interaction of the numerous solvent particles with larger particles is coarse-grained as a Langevin thermostat is applied to individual particles or to particle pairs. The Langevin thermostat requires a pseudo-random number generator (PRNG) to generate the stochastic force applied to each particle or pair of neighboring particles during each time step in the integration of Newton's equations of motion. In a Single-Instruction-Multiple-Thread (SIMT) GPU parallel computing environment, small batches of random numbers must be generated over thousands of threads and millions of kernel calls. In this communication we introduce a one-PRNG-per-kernel-call-per-thread scheme, in which a micro-stream of pseudorandom numbers is generated in each thread and kernel call. These high quality, statistically robust micro-streams require no global memory for state storage, are more computationally efficient than other PRNG schemes in memory-bound kernels, and uniquely enable the DPD simulation method without requiring communication between threads. {\textcopyright} 2011 Elsevier Inc.},
	author = {Phillips, Carolyn L. and Anderson, Joshua A. and Glotzer, Sharon C.},
	doi = {10.1016/j.jcp.2011.05.021},
	issn = {10902716},
	journal = {Journal of Computational Physics},
	keywords = {Brownian Dynamics,Dissipative Particle Dynamics,GPU,Molecular dynamics,Random number generation},
	month = {aug},
	number = {19},
	pages = {7191--7201},
	publisher = {Academic Press Inc.},
	title = {{Pseudo-random number generation for Brownian Dynamics and Dissipative Particle Dynamics simulations on GPU devices}},
	volume = {230},
	year = {2011}
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%% signac %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

@article{Adorf2018a,
abstract = {Researchers in the fields of materials science, chemistry, and computational physics are regularly posed with the challenge of managing large and heterogeneous data spaces. The amount of data increases in lockstep with computational efficiency multiplied by the amount of available computational resources, which shifts the bottleneck in the scientific process from data acquisition to data processing and analysis. We present a framework designed to aid in the integration of various specialized data formats, tools and workflows. The signac framework provides all basic components required to create a well-defined and thus collectively accessible and searchable data space, simplifying data access and modification through a homogeneous data interface that is largely agnostic to the data source, i.e., computation or experiment. The framework's data model is designed to not require absolute commitment to the presented implementation, simplifying adaption into existing data sets and workflows. This approach not only increases the efficiency with which scientific results can be produced, but also significantly lowers barriers for collaborations requiring shared data access.},
archivePrefix = {arXiv},
arxivId = {1611.03543},
author = {Adorf, Carl S. and Dodd, Paul M. and Ramasubramani, Vyas and Glotzer, Sharon C.},
doi = {10.1016/j.commatsci.2018.01.035},
eprint = {1611.03543},
issn = {09270256},
journal = {Computational Materials Science},
keywords = {Computational workflow,Data management,Data sharing,Database,Provenance},
month = {apr},
pages = {220--229},
publisher = {Elsevier},
title = {{Simple data and workflow management with the signac framework}},
url = {https://www.sciencedirect.com/science/article/pii/S0927025618300429},
volume = {146},
year = {2018}
}

@inproceedings{Ramasubramani2018b,
author = {Ramasubramani, Vyas and Adorf, Carl S and Dodd, Paul M and Dice, Bradley D and Glotzer, Sharon C},
booktitle = {Proceedings of the 17th Python in Science Conference},
doi = {10.25080/Majora-4af1f417-016},
editor = {Akici, Fatih and Lippa, David and Niederhut, Dillon and Pacer, M},
pages = {152--159},
title = {signac: A Python framework for data and workflow management},
year = {2018}
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% freud %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% freud

@article{Ramasubramani2020,
	abstract = {The freud Python package is a powerful library for analyzing simulation data. Written with modern simulation and data analysis workflows in mind, freud provides a Python interface to fast, parallelized C++ routines that run efficiently on laptops, workstations, and supercomputing clusters. The package provides the core tools for finding particle neighbors in periodic systems, and offers a uniform API to a wide variety of methods implemented using these tools. As such, freud users can access standard methods such as the radial distribution function as well as newer, more specialized methods such as the potential of mean force and torque and local crystal environment analysis with equal ease. While many comparable tools place a heavy emphasis on reading and operating on trajectory file formats, freud instead accepts numerical arrays of data directly as inputs. By remaining agnostic to its data source, freud is suitable for analyzing any coarse-grained particle simulation, regardless of the original data representation or simulation method. When used for on-the-fly analysis in conjunction with scriptable simulation software such as HOOMD-blue, freud enables smart simulations that adapt to the current state of the system, allowing users to study phenomena such as nucleation and growth.},
	author = {Ramasubramani, Vyas and Dice, Bradley D. and Harper, Eric S. and Spellings, Matthew P. and Anderson, Joshua A. and Glotzer, Sharon C.},
	doi = {10.1016/j.cpc.2020.107275},
	issn = {0010-4655},
	journal = {Computer Physics Communications},
	volume = {254},
	pages = {107275},
	publisher = {Elsevier BV},
	title = {{freud: A software suite for high throughput analysis of particle simulation data}},
	year = {2020}
}

% freud with ML or visualization

@inproceedings{Dice2019,
	abstract = {The freud Python library analyzes particle data output from molecular dynamics simulations. The library's design and its variety of highperformance methods make it a powerful tool for many modern applications. In particular, freud can be used as part of the data generation pipeline for machine learning (ML) algorithms for analyzing particle simulations, and it can be easily integrated with various simulation visualization tools for simultaneous visualization and real-time analysis. Here, we present numerous examples both of using freud to analyze nano-scale particle systems by coupling traditional simulational analyses to machine learning libraries and of visualizing per-particle quantities calculated by freud analysis methods. We include code and examples of this visualization, showing that in general the introduction of freud into existing ML and visualization workflows is smooth and unintrusive. We demonstrate that among Python packages used in the computational molecular sciences, freud offers a unique set of analysis methods with efficient computations and seamless coupling into powerful data analysis pipelines.},
	author = {Dice, Bradley and Ramasubramani, Vyas and Harper, Eric and Spellings, Matthew and Anderson, Joshua and Glotzer, Sharon},
	booktitle = {Proceedings of the 18th Python in Science Conference},
	doi = {10.25080/Majora-7ddc1dd1-004},
	keywords = {analysis,computational chemistry,computational physics,molecular dynamics,particle simulation,particle system},
	pages = {27--33},
	title = {{Analyzing Particle Systems for Machine Learning and Data Visualization with freud}},
	year = {2019}
}

% Steinhardt order parameters

@article{Steinhardt1983,
	author = {Steinhardt, Paul J. and Nelson, David R. and Ronchetti, Marco},
	doi = {10.1103/PhysRevB.28.784},
	journal = {Physical Review B},
	number = {2},
	pages = {784--805},
	title = {{Bond-orientational order in liquids and glasses}},
	volume = {28},
	year = {1983}
}

% Neighbor-shell averaged Steinhardt order parameters

@article{Lechner2008,
	abstract = {Local bond order parameters based on spherical harmonics, also known as Steinhardt order parameters, are often used to determine crystal structures in molecular simulations. Here we propose a modification of this method in which the complex bond order vectors are averaged over the first neighbor shell of a given particle and the particle itself. As demonstrated using soft particle systems, this averaging procedure considerably improves the accuracy with which different crystal structures can be distinguished. (c) 2008 American Institute of Physics. [DOI: 10.1063/1.2977970]},
	archivePrefix = {arXiv},
	arxivId = {0806.3345},
	author = {Lechner, Wolfgang and Dellago, Christoph},
	doi = {10.1063/1.2977970},
	eprint = {0806.3345},
	issn = {0021-9606},
	journal = {Journal of Chemical Physics},
	number = {11},
	pmid = {19044980},
	title = {{Accurate determination of crystal structures based on averaged local bond order parameters}},
	volume = {129},
	year = {2008}
}

% PMFT

@article{VanAnders2014c,
	abstract = {Patchy particles are a popular paradigm for the design and synthesis of nanoparticles and colloids for self-assembly. In "traditional" patchy particles, anisotropic interactions arising from patterned coatings, functionalized molecules, DNA, and other enthalpic means create the possibility for directional binding of particles into higher-ordered structures. Although the anisotropic geometry of non-spherical particles contributes to the interaction patchiness through van der Waals, electrostatic, and other interactions, how particle shape contributes entropically to self-assembly is only now beginning to be understood. It has been recently demonstrated that, for hard shapes, entropic forces are directional. A newly proposed theoretical framework that defines and quantifies directional entropic forces demonstrates the anisotropic--that is, patchy--nature of these emergent, attractive forces. Here we introduce the notion of entropically patchy particles as the entropic counterpart to enthalpically patchy particles. Using three example "families" of shapes, we judiciously modify entropic patchiness by introducing geometric features to the particles so as to target specific crystal structures, which then assembled with Monte Carlo simulations. We quantify the emergent entropic valence via a potential of mean force and torque. We generalize these shape operations to shape anisotropy dimensions, in analogy with the anisotropy dimensions introduced for enthalpically patchy particles. Our findings demonstrate that entropic patchiness and emergent valence provide a way of engineering directional bonding into nanoparticle systems, whether in the presence or absence of additional, non-entropic forces.},
	archivePrefix = {arXiv},
	arxivId = {1304.7545},
	author = {van Anders, Greg and Ahmed, N. Khalid and Smith, Ross and Engel, Michael and Glotzer, Sharon C.},
	doi = {10.1021/nn4057353},
	eprint = {1304.7545},
	issn = {1936-0851},
	journal = {ACS Nano},
	keywords = {patchy particles,shape entropy,superlattices},
	number = {1},
	pages = {931--940},
	pmid = {24359081},
	title = {{Entropically patchy particles: Engineering valence through shape entropy}},
	volume = {8},
	year = {2014}
}

@article{VanAnders2014d,
	abstract = {Entropy drives the phase behavior of colloids ranging from dense suspensions of hard spheres or rods to dilute suspensions of hard spheres and depletants. Entropic ordering of anisotropic shapes into complex crystals, liquid crystals, and even quasicrystals was demonstrated recently in computer simulations and experiments. The ordering of shapes appears to arise from the emergence of directional entropic forces (DEFs) that align neighboring particles, but these forces have been neither rigorously defined nor quantified in generic systems. Here, we show quantitatively that shape drives the phase behavior of systems of anisotropic particles upon crowding through DEFs. We define DEFs in generic systems and compute them for several hard particle systems. We show they are on the order of a few times the thermal energy ([Formula: see text]) at the onset of ordering, placing DEFs on par with traditional depletion, van der Waals, and other intrinsic interactions. In experimental systems with these other interactions, we provide direct quantitative evidence that entropic effects of shape also contribute to self-assembly. We use DEFs to draw a distinction between self-assembly and packing behavior. We show that the mechanism that generates directional entropic forces is the maximization of entropy by optimizing local particle packing. We show that this mechanism occurs in a wide class of systems and we treat, in a unified way, the entropy-driven phase behavior of arbitrary shapes, incorporating the well-known works of Kirkwood, Onsager, and Asakura and Oosawa.},
	archivePrefix = {arXiv},
	arxivId = {1309.1187},
	author = {van Anders, Greg and Klotsa, Daphne and Ahmed, N. Khalid and Engel, Michael and Glotzer, Sharon C.},
	doi = {10.1073/pnas.1418159111},
	eprint = {1309.1187},
	issn = {0027-8424},
	journal = {Proceedings of the National Academy of Sciences},
	number = {45},
	pages = {E4812--E4821},
	pmid = {25344532},
	title = {{Understanding shape entropy through local dense packing}},
	volume = {111},
	year = {2014}
}

% Environment matching

@article{Teich2019,
	abstract = {A universally accepted explanation for why liquids sometimes vitrify rather than crystallize remains hotly pursued, despite the ubiquity of glass in our everyday lives, the utilization of the glass transition in innumerable modern technologies, and nearly a century of theoretical and experimental investigation. Among the most compelling hypothesized mechanisms underlying glass formation is the development in the fluid phase of local structures that somehow prevent crystallization. Here, we explore that mechanism in the case of hard particle glasses by examining the glass transition in an extended alchemical (here, shape) space; that is, a space where particle shape is treated as a thermodynamic variable. We investigate simple systems of hard polyhedra, with no interactions aside from volume exclusion, and show via Monte Carlo simulation that glass formation in these systems arises from a multiplicity of competing local motifs, each of which is prevalent in—and predictable from—nearby ordered structures in alchemical space.},
	author = {Teich, Erin G. and van Anders, Greg and Glotzer, Sharon C.},
	doi = {10.1038/s41467-018-07977-2},
	issn = {2041-1723},
	journal = {Nature Communications},
	keywords = {Glasses,Thermodynamics},
	month = {dec},
	number = {1},
	pages = {64},
	publisher = {Nature Publishing Group},
	title = {{Identity crisis in alchemical space drives the entropic colloidal glass transition}},
	volume = {10},
	year = {2019}
}

% MSD

@article{Calandrini2011,
	abstract = {This article gives an introduction into the program nMoldyn, which has been originally conceived to support the interpretation of neutron scattering experiments on complex molecular systems by the calculation of appropriate time correlation functions from classical and quantum molecular dynamics simulations of corresponding model systems. Later the functionality has been extended to include more advanced time series analyses of molecular dynamics trajectories, in particular the calculation of memory functions, which play an essential role in the theory of time correlation functions. Here we present a synoptic view of the range of applications of the latest version of nMoldyn, which includes new modules for a simulation-based interpretation of data from nuclear magnetic resonance spectroscopy, far infrared spectroscopy and for protein secondary structure analysis.},
	author = {Calandrini, V. and Pellegrini, E. and Calligari, P. and Hinsen, K. and Kneller, G.R.},
	doi = {10.1051/sfn/201112010},
	issn = {2107-7223},
	journal = {{\'{E}}cole th{\'{e}}matique de la Soci{\'{e}}t{\'{e}} Fran{\c{c}}aise de la Neutronique},
	month = {jun},
	pages = {201--232},
	publisher = {EDP Sciences},
	title = {{nMoldyn - Interfacing spectroscopic experiments, molecular dynamics simulations and models for time correlation functions}},
	volume = {12},
	year = {2011}
}

% Voronoi

@techreport{Rycroft2009,
	address = {Berkeley, CA},
	author = {Rycroft, Chris},
	doi = {10.2172/946741},
	institution = {Lawrence Berkeley National Laboratory (LBNL)},
	month = {jan},
	title = {{Voro++: a three-dimensional Voronoi cell library in C++}},
	year = {2009}
}

% Cubatic order parameter

@article{Haji-Akbari2015,
	abstract = {Recent advancements in the synthesis of anisotropic macromolecules and nanoparticles have spurred an immense interest in theoretical and computational studies of self-assembly. The cornerstone of these studies is the role of shape in self-assembly and in inducing complex order. One challenge in these studies is to quantify different types of order that can emerge in these systems. Here, we revisit the problem of quantifying orientational order in systems of building blocks with non-trivial rotational symmetries. We propose tensorial strong orientational coordinates that fully and exclusively describe the orientation of symmetric objects, and use those to describe and quantify local and global rotational order, as well as spatiotemporal correlations in rotational order. These order parameters are not only useful in performing and analyzing computer simulations of anisotropic building blocks, but can also be used for efficient storage of rotational information in long trajectories of molecular simulations.},
	archivePrefix = {arXiv},
	arxivId = {1507.02249},
	author = {Haji-Akbari, Amir and Glotzer, Sharon C.},
	doi = {10.1088/1751-8113/48/48/485201},
	eprint = {1507.02249},
	issn = {17518121},
	journal = {Journal of Physics A: Mathematical and Theoretical},
	keywords = {computer simulations,orientational order,self-assembly},
	number = {48},
	pages = {485201},
	publisher = {IOP Publishing},
	title = {{Strong orientational coordinates and orientational order parameters for symmetric objects}},
	volume = {48},
	year = {2015}
}

% Rotational autocorrelation

@article{Karas2019,
	abstract = {Plastic crystals --- like liquid crystals --- are mesophases that can exist between liquids and crystals and possess some of the characteristic traits of each of these states of matter....},
	author = {Karas, Andrew S and Dshemuchadse, Julia and van Anders, Greg and Glotzer, Sharon C},
	doi = {10.1039/C8SM02643B},
	issn = {1744-683X},
	journal = {Soft Matter},
	publisher = {The Royal Society of Chemistry},
	title = {{Phase behavior and design rules for plastic colloidal crystals of hard polyhedra via consideration of directional entropic forces}},
	year = {2019}
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%% rowan %%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

@article{Ramasubramani2018,
	doi = {10.21105/joss.00787},
	url = {https://doi.org/10.21105/joss.00787},
	year = {2018},
	publisher = {The Open Journal},
	volume = {3},
	number = {27},
	pages = {787},
	author = {Ramasubramani, Vyas and Glotzer, Sharon C.},
	title = {rowan: A Python package for working with quaternions},
	journal = {Journal of Open Source Software}
}



%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%% Misc Other %%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

@article{Stukowski2010,
	abstract = {The Open Visualization Tool (OVITO) is a new 3D visualization software designed for post-processing atomistic data obtained from molecular dynamics or Monte Carlo simulations. Unique analysis, editing and animations functions are integrated into its easy-to-use graphical user interface. The software is written in object-oriented C++, controllable via Python scripts and easily extendable through a plug-in interface. It is distributed as open-source software and can be downloaded from the website http://ovito.sourceforge.net/. {\textcopyright} 2010 IOP Publishing Ltd.},
	author = {Stukowski, Alexander},
	doi = {10.1088/0965-0393/18/1/015012},
	issn = {0965-0393},
	journal = {Modelling and Simulation in Materials Science and Engineering},
	number = {1},
	title = {{Visualization and analysis of atomistic simulation data with OVITO--the Open Visualization Tool}},
	volume = {18},
	year = {2010}
}
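
% Example usage: the entries above can be cited from a minimal LaTeX
% document as sketched below. The bibliography name "glotzerlab_citations"
% is an assumption; substitute whatever name this .bib file is saved under.
%
% \documentclass{article}
% \begin{document}
% Analysis was performed with freud~\cite{Ramasubramani2020}; data and
% workflows were managed with signac~\cite{Adorf2018a,Ramasubramani2018b}.
% \bibliographystyle{unsrt}
% \bibliography{glotzerlab_citations}
% \end{document}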
