coarsegrained-md-neural-ode
Thesis repository on neural ordinary differential equations used for coarse-graining molecular dynamics
Science Score: 26.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 12.7%, to scientific vocabulary)
Basic Info
Statistics
- Stars: 2
- Watchers: 1
- Forks: 2
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Coarse-Graining of Molecular Dynamics Using Neural ODEs
This project is developed by Jakub Lála under the supervision of Stefano Angioletti-Uberti in the SoftNanoLab at Imperial College London. It started as part of my Master's thesis, but research and development are ongoing. We use neural ordinary differential equations (neural ODEs), a state-of-the-art deep learning method, to learn coarse-grained (CG) machine learning (ML) potentials for arbitrary molecules, nanoparticles, etc. Because training on a dynamical trajectory, rather than on frozen time snapshots of configuration-energy pairs, updates many parameters at once, we aim to develop an automated coarse-graining pipeline that produces ML potentials that are computationally cheaper than running all-particle simulations of complex molecules and nanoparticles at atomistic resolution.
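The trajectory-based training idea can be illustrated with a deliberately tiny toy model. This is not the repository's actual code: it fits a single hypothetical parameter `k` of the dynamics `dz/dt = -k·z` by accumulating a loss over the whole integrated trajectory, so every time step contributes to one parameter update (a finite-difference stand-in for the adjoint/backprop gradients a real neural ODE would use).

```python
def rollout(k, z0=1.0, dt=0.01, steps=200):
    """Explicit-Euler integration of the toy dynamics dz/dt = -k * z."""
    z, traj = z0, []
    for _ in range(steps):
        z = z + dt * (-k * z)
        traj.append(z)
    return traj

def trajectory_loss(k, target_k=2.0):
    """Mean absolute difference over the FULL trajectory, in the spirit of
    an 'all'-style trajectory loss (reference dynamics use target_k)."""
    pred, ref = rollout(k), rollout(target_k)
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

# Crude finite-difference gradient descent on the single parameter k.
k, lr, eps = 0.5, 1.0, 1e-4
for _ in range(100):
    grad = (trajectory_loss(k + eps) - trajectory_loss(k - eps)) / (2 * eps)
    k -= lr * grad

print(f"fitted k = {k:.2f}")  # k moves toward the reference value 2.0
```

A real neural ODE replaces the scalar `k` with a neural network for the force field and differentiates through the ODE solver, but the structure of the update is the same: one loss over a rolled-out trajectory, one gradient step touching all parameters.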
Theoretical Background
By treating a complex, composite body made up of many particles as a rigid body with a single centre of mass and an orientation, one tremendously reduces the number of degrees of freedom that must be simulated, which in principle lowers the computational cost.

How to Use
To run the code, first get familiar with the Trainer class, which forms the basis of all training, testing, and validation. It takes a config dictionary with the following elements:
- `folder`: relative path to the folder with the datasets
- `load_folder`: relative path to a pre-trained `model.pt` file
- `device`: device to train on
- `dtype`: datatype for all tensors
- `epochs`: number of training epochs
- `start_epoch`: number of the starting epoch (useful when re-training a model)
- `nn_depth`: depth of the neural net
- `nn_width`: width of the neural net
- `batch_length`: trajectory length used for training
- `eval_batch_length`: trajectory length used for model performance evaluation
- `batch_size`: number of trajectories in a single batch
- `shuffle`: if set to `True`, the dataloader shuffles the trajectory order in the dataset during training
- `num_workers`: number of workers used by the dataloader
- `optimizer`: name of the optimizer (e.g. `Adam`)
- `learning_rate`: initial learning rate
- `scheduler`: name of the scheduler (e.g. `LambdaLR`)
- `scheduling_factor`: scheduling factor determining the rate of scheduling
- `loss_func`: type of loss function
  - `all`: absolute mean difference over the entire trajectory
  - `final`: absolute mean difference of the final state in the trajectory
- `itr_printing_freq`: frequency of printing for iterations within an epoch
- `printing_freq`: frequency of printing for epochs
- `plotting_freq`: frequency of plotting for epochs
- `stopping_freq`: frequency of early stopping for epochs (e.g. due to non-convergent loss)
- `scheduling_freq`: frequency of scheduling the learning rate for epochs
- `evaluation_freq`: frequency of evaluating the model on the test dataset
- `checkpoint_freq`: frequency of saving a checkpoint of the model
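A config might be assembled as sketched below. The keys follow the list above; the concrete values (paths, sizes, and the string forms chosen for `device` and `dtype`) are illustrative guesses, not defaults taken from the repository.

```python
# Hypothetical Trainer config — values are placeholders for illustration only.
config = {
    "folder": "data/my-molecule",   # relative path to the dataset folder
    "load_folder": None,            # or a path to a pre-trained model.pt
    "device": "cuda",
    "dtype": "float32",
    "epochs": 100,
    "start_epoch": 0,
    "nn_depth": 3,
    "nn_width": 64,
    "batch_length": 20,             # trajectory length used for training
    "eval_batch_length": 100,       # trajectory length used for evaluation
    "batch_size": 16,
    "shuffle": True,
    "num_workers": 4,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "scheduler": "LambdaLR",
    "scheduling_factor": 0.95,
    "loss_func": "all",             # or "final"
    "itr_printing_freq": 10,
    "printing_freq": 1,
    "plotting_freq": 10,
    "stopping_freq": 5,
    "scheduling_freq": 10,
    "evaluation_freq": 5,
    "checkpoint_freq": 10,
}
```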
For an example run file, see `run-example.py`.
Collaboration
If you are eager to help out with this project, I am more than happy to get you on board. There are many small fixes and optimisations that could be made. Also, if you want to test this code on your own simulations, I would be excited to help you, as that would let us properly benchmark this method.
Ideally, create your own fork and make a new branch for each feature you add. Once you are ready for review, submit a pull request and we can discuss the changes and improvements. Either way, you can contact me at jakublala@gmail.com so that we can be in touch and figure out the details.
Owner
- Name: Jakub Lála
- Login: jakublala
- Kind: user
- Repositories: 2
- Profile: https://github.com/jakublala
GitHub Events
Total
- Watch event: 2
Last Year
- Watch event: 2
Committers
Last synced: over 1 year ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| jakublala | j****a@g****m | 260 |
| Jakub Lála | 6****a | 16 |
Issues and Pull Requests
Last synced: 11 months ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0