crystal-ball-public
Code and data for Crystal Ball model.
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file (found)
- ✓ codemeta.json file (found)
- ○ .zenodo.json file
- ✓ DOI references (2 found in README)
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (15.9%, low)
Repository
Code and data for Crystal Ball model.
Basic Info
- Host: GitHub
- Owner: AidanBachmann
- Language: Python
- Default Branch: main
- Size: 31.5 MB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 2
Metadata Files
README.md
Overview
This code was developed for the Crystal Ball (CB) model, detailed here: https://doi.org/10.1088/1361-648X/adc0d7. The data reported in this paper can be found in Data.zip. CB is an N-body code for simulating the dynamics of lattice structures composed of particles confined to the surface of a sphere, interacting only through Coulomb coupling. The code has two primary functions: to generate lattices, and to study the dynamics of these lattices once formed.
For $N$ particles of the same species, we are interested in the configuration on the sphere that minimizes the Coulomb potential. This is known as the Thomson problem (see https://en.wikipedia.org/wiki/Thomson_problem). Typically, global optimization methods are used to identify candidate global minima. In our case, we treat the problem as a dynamics problem and push the system to equilibrium using a first-order integrator with an adaptive time step. A damping term is artificially imposed to ensure convergence. A number of our lattices agree to within 1 atomic unit of energy with candidates identified by Wales et al. in their 2006 and 2009 papers, which is sufficient convergence for our purposes.
With these lattices formed, we were particularly interested in phase transitions induced by removing particles from the lattice. We aimed to characterize the average peak kinetic energy any one particle achieves as a function of $N$ and the number of particles removed, $N_{rm}$. To this end, we remove $N_{rm}$ particles from our pre-formed lattices, taking the system out of equilibrium. We then employ the same integrator and evolve the system without damping, tracking the time evolution of the particle kinetic energies, as well as the system's total energy, to ensure sufficient energy conservation.
This method can be extended to Coulomb-coupled $N$-body systems on parametric surfaces.
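As a rough illustration of the relaxation scheme described above (not the repository's actual code), the following sketch relaxes $N$ like charges on the unit sphere with an overdamped first-order step and a tangential force projection. The fixed time step, unit charges, and step count here are simplifying assumptions; the real code uses an adaptive time step and an explicit damping term.

```python
import numpy as np

def coulomb_forces(pos):
    """Pairwise repulsive Coulomb forces for unit charges (unit Coulomb constant)."""
    diff = pos[:, None, :] - pos[None, :, :]        # (N, N, 3) separation vectors
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                  # exclude self-interaction
    return (diff / dist[..., None] ** 3).sum(axis=1)

def relax(pos, steps=5000, dt=0.01):
    """Overdamped first-order relaxation, re-projecting onto the sphere each step."""
    for _ in range(steps):
        f = coulomb_forces(pos)
        # keep only the component tangent to the sphere (radial motion is forbidden)
        f -= (f * pos).sum(axis=1, keepdims=True) * pos
        pos = pos + dt * f
        pos /= np.linalg.norm(pos, axis=1, keepdims=True)
    return pos

def energy(pos):
    """Total Coulomb potential energy of the configuration."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    iu = np.triu_indices(len(pos), k=1)
    return (1.0 / d[iu]).sum()

rng = np.random.default_rng(0)
p = rng.normal(size=(4, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)
p = relax(p)
# The known Thomson minimum for N = 4 (regular tetrahedron) is ~3.674
print(round(energy(p), 3))
```

For $N=4$ the relaxed energy should approach the tetrahedral minimum; larger $N$ would need more steps and, in practice, the adaptive stepping the README describes.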
Dependencies
This code depends on numpy, pandas, and numba, so ensure that these dependencies are installed before attempting to run the code. In particular, we use the numba just-in-time compiler to improve computation times.
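For readers unfamiliar with numba, this is the kind of pairwise-force kernel that benefits from its `@njit` just-in-time compiler: a plain double loop that numba compiles to machine code. The kernel below is an illustrative sketch, not the repository's implementation, and it falls back to uncompiled Python if numba is absent.

```python
import numpy as np
try:
    from numba import njit
except ImportError:
    njit = lambda f: f   # numba missing: run the same code uncompiled

@njit
def accelerations(pos, charge, mass):
    """Coulomb accelerations for identical particles at positions pos (N x 3)."""
    n = pos.shape[0]
    acc = np.zeros((n, 3))
    for i in range(n):
        for j in range(n):
            if i != j:
                d = pos[i] - pos[j]
                r = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 0.5
                acc[i] += charge * charge * d / (mass * r ** 3)
    return acc

# two antipodal unit charges repel each other along the z-axis
a = accelerations(np.array([[0., 0., 1.], [0., 0., -1.]]), 1.0, 1.0)
```

The first call pays a one-time compilation cost; subsequent calls run at compiled speed, which is why JIT compilation pays off over many time steps.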
Usage
MainInit.py controls the generation of lattices and stores each configuration in a text file, while MainSim.py is used for simulating phase transitions. All functions called by the main files are defined in the FunctionsInit and FunctionsSim files, respectively, each of which comes in two versions with the suffix FAST or SLOW. The only difference between the FAST and SLOW versions of the code is how much information is stored. In the SLOW version, particle position, velocity, and acceleration are stored for every time step, meaning the code initializes an $N\times N_{t}\times 9$ matrix. In our simulations $N\sim 10^{2}$ and $N_{t}\sim 10^{4}$, so this requires a significant amount of memory and greatly increases computation time. The FAST version therefore only saves the state for time steps $n$ and $n+1$. If one is only interested in the final state of the system, or in scanning for the peak kinetic energy, this is the most efficient approach. However, the SLOW version of the code must be used for creating visualizations of a simulation; it is also essential for evaluating energy conservation. At the top of each main file, a boolean flag determines which version of the code is used. This choice is also printed to the terminal at the beginning of the simulation. In its current form, the code does not support command line arguments.
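A back-of-the-envelope check makes the FAST/SLOW trade-off concrete. Assuming float64 entries, the SLOW version's full $N\times N_{t}\times 9$ history (position, velocity, and acceleration give 9 components per particle per step) dwarfs the FAST version's two-step buffer:

```python
# N ~ 1e2 and Nt ~ 1e4 as quoted in the text; float64 = 8 bytes per entry
N, Nt = 100, 10_000
bytes_per = 8

slow = N * Nt * 9 * bytes_per   # full history kept by the SLOW version
fast = N * 2 * 9 * bytes_per    # only steps n and n+1 kept by the FAST version

print(f"SLOW history: {slow / 1e6:.0f} MB, FAST buffer: {fast / 1e3:.1f} kB")
```

At these sizes the SLOW history is 72 MB against a 14.4 kB FAST buffer, a factor of $N_{t}/2$, which is why the FAST version is preferred whenever the full trajectory is not needed.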
All simulation parameters, including particle mass, particle charge, sphere radius, number of time steps, initial time step, max time step, seed for the random number generator, and number of Monte Carlo simulations to run in parallel, are set in the main files. This last parameter should be set no higher than the number of CPUs available. MainInit and MainSim also each have some additional, distinct parameters. For MainInit, we define a damping parameter and the number of nearest neighbors used for averaging the lattice constant. For MainSim, the particle index defining which particle should be removed can be set. Both mains also have a set of boolean flags that turn a number of features on and off. These include enabling/disabling a debugger, generating plots for a simulation, plotting quantities on a log scale, and saving plots.
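As a purely illustrative example (the variable names below are hypothetical, not the repository's), a parameter block of the kind set at the top of the main files might look like this, with the parallel-run count capped at the available CPUs as the text advises:

```python
import os

params = dict(
    mass=1.0,          # particle mass
    charge=1.0,        # particle charge
    radius=1.0,        # sphere radius
    n_steps=10_000,    # number of time steps
    dt_init=1e-4,      # initial (adaptive) time step
    dt_max=1e-2,       # maximum allowed time step
    seed=42,           # RNG seed, unique per parallel run
)
# Monte Carlo runs in parallel: never more than the CPUs available
params["n_parallel"] = min(4, os.cpu_count())
```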
All data is stored in a directory named "zData$i$", where $i$ is an integer index. When you download the code, the directory "zData1" is already present with all of the required sub-directories. If you wish to run a new set of simulations with different lattices, simply create a new directory, "zData2", with the same sub-directories and change the data_dir variable in the main file.
To run a phase transition simulation, one first needs to generate a set of lattices, so MainInit.py must be run first. The array Narr defines the set of $N$ values for which lattices should be generated. Once set, simply run python3 MainInit.py from the terminal. Depending on the flags set in the main file, the code will output the lattices in text files labeled by $N$ and an index indicating which parallel run produced the output. These files are saved in the "Lattices" directory of zData$i$. Additional output, including particle energy plots and histograms of the distribution of average distances between nearest neighbors, is saved in the "Output Init" directory. The set of lattices used in our paper is available in Data.zip.
To run a simulation, define in MainSim the same array of $N$ values used in MainInit. Also define an array of the number of particles to remove for each simulation. The code will read the lattices from the Lattices directory, remove $N_{rm}$ particles centered around the user-defined particle index, and evolve the system for $N_{t}$ time steps. The peak kinetic energy is saved in a text file in the "Scaling Data" directory. As with MainInit, auxiliary outputs, such as particle energy time series plots, are saved in "Output Sim".
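The removal step can be sketched as follows. This drops the $N_{rm}$ particles nearest (by chord distance) to the chosen index, the chosen particle included, which mirrors the "centered around the user-defined particle index" description; the repository's exact selection rule may differ.

```python
import numpy as np

def remove_particles(pos, center_idx, n_rm):
    """Remove the n_rm particles closest to particle center_idx (itself included)."""
    d = np.linalg.norm(pos - pos[center_idx], axis=1)
    removed = np.argsort(d)[:n_rm]   # center first (distance 0), then its neighbors
    keep = np.setdiff1d(np.arange(len(pos)), removed)
    return pos[keep]
```

Removing particles this way leaves a localized "hole" in the lattice, which is what drives the system out of equilibrium for the subsequent undamped evolution.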
To create a movie of your simulation, use the SLOW version of the code and set the generatePlots flag to True. For each time step (or a subset of time steps), the code plots all particle positions using the Mercator projection. A three-dimensional plotting function is also defined, though no option exists to toggle between these two modes, because the Mercator plots are much simpler to interpret. A simple Python script can be used to generate a GIF from the output, though FFmpeg is the better option.
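The Mercator mapping itself is standard and can be sketched independently of the repository's plotting code: longitude is $\lambda=\operatorname{atan2}(y,x)$, latitude is $\phi=\arcsin(z/R)$, and the Mercator ordinate is $\ln\tan(\pi/4+\phi/2)$. Note the projection diverges at the poles, so real plotting code would clip extreme latitudes.

```python
import numpy as np

def mercator(pos):
    """Map 3D points on a sphere to (longitude, Mercator y)."""
    x, y, z = pos[:, 0], pos[:, 1], pos[:, 2]
    r = np.linalg.norm(pos, axis=1)
    lon = np.arctan2(y, x)                     # longitude in (-pi, pi]
    lat = np.arcsin(z / r)                     # latitude in [-pi/2, pi/2]
    return lon, np.log(np.tan(np.pi / 4 + lat / 2))
```

Scattering `(lon, y)` per saved time step with matplotlib and stitching the frames with FFmpeg reproduces the movie workflow described above.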
A set of analysis files is also included with the code. 3D_Plot.py creates three-dimensional visualizations of a configuration of charges (such as the figure above). Voronoi.py creates a Voronoi construction of a particular lattice. For benchmarking purposes, this code can also read the Wales et al. minima files, available publicly in the Cambridge Cluster Database (see https://www-wales.ch.cam.ac.uk/~wales/CCD/Thomson/table.html and https://www-wales.ch.cam.ac.uk/~wales/CCD/Thomson2/table.html). Thomson.py computes the total energy of a set of lattices, which is useful for comparison against the Cambridge Cluster Database. Scaling.py fits the scaling relation to the data from the phase transition simulations, outputting both the 2D scaling surface and curves of constant $N$ on the surface.
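For readers who want a quick Voronoi construction on the sphere without the repository's Voronoi.py, SciPy provides `scipy.spatial.SphericalVoronoi`; the snippet below is an independent sketch and may not match Voronoi.py's approach.

```python
import numpy as np
from scipy.spatial import SphericalVoronoi

# 20 random points on the unit sphere stand in for a lattice configuration
rng = np.random.default_rng(1)
pts = rng.normal(size=(20, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)

sv = SphericalVoronoi(pts, radius=1.0)
sv.sort_vertices_of_regions()   # order each cell's vertices for plotting
print(len(sv.regions))          # one Voronoi cell per charge
```

Counting the edges of each cell is a common way to spot lattice defects (five- and seven-sided cells) in Thomson-problem configurations.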
Code Optimization
This code is by no means optimal. It has been partially vectorized, and it has been parallelized to the extent that identical simulations (with unique seeds) can be run on an arbitrary number of cores; the only limit is the number of CPUs one has access to. Part of what makes the SLOW versions of both codes as slow as they are is indexing through huge arrays with $N_{t}\sim 10^{4}$ and $N\sim 10^{3}$. This could certainly be improved by only saving some subset of the time steps, for instance 1 in every 100. The motivation for using the SLOW code is to generate energy time series plots, as well as spatial plots via the Mercator projection. As it stands, time steps are so small that frame rates on the order of 200 fps are required to see any meaningful motion anyway, so nothing is really lost by simply saving fewer data points. This change may be implemented if we continue working on the project.
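The thinning suggested above amounts to a strided write into a much smaller history array, sketched here under the assumption of one 9-component state row per particle:

```python
import numpy as np

stride, Nt, N = 100, 10_000, 100
# 100x smaller than the full N x Nt x 9 history of the SLOW version
history = np.empty((Nt // stride, N, 9))

state = np.zeros((N, 9))
for step in range(Nt):
    # ... integrator update of `state` would go here ...
    if step % stride == 0:
        history[step // stride] = state   # keep 1 step in every `stride`
```

Energy time series and Mercator frames sampled this way look essentially identical at the frame rates quoted above, while memory and indexing costs drop by the stride factor.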
Additionally, it would be useful to add a checkpoint file system so that new simulations can be initialized from the final (or an intermediate) state of a previous simulation. This may be added in later versions. We may also develop a C++ version, which would certainly improve computation times.
As it stands, a simulation (fast version of the code) with $N\sim 10^{3}$ and $N_{t}\sim 6\times 10^{4}$ takes around 7 hours to run locally on an M1 Max.
Owner
- Login: AidanBachmann
- Kind: user
- Repositories: 1
- Profile: https://github.com/AidanBachmann
Citation (CITATION.cff)
cff-version: 1.0.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Bachmann
    given-names: Aidan
    orcid: https://orcid.org/0000-0002-9470-6404
title: "Crystal Ball Model"
version: 1.1.0
identifiers:
  - type: doi
    value: 10.5281/zenodo.15116551
date-released: 2024-10-13
GitHub Events
Total
- Release event: 1
- Push event: 40
- Create event: 4
Last Year
- Release event: 1
- Push event: 40
- Create event: 4