coarsegrained-md-neural-ode

Thesis repository on neural ordinary differential equations used for coarse-graining molecular dynamics

https://github.com/jakublala/coarsegrained-md-neural-ode

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.7%) to scientific vocabulary

Keywords

coarse-grained-molecular-dynamics differential-equations molecular-dynamics neural-ode scientific-machine-learning thesis toy-problem
Last synced: 6 months ago

Repository

Thesis repository on neural ordinary differential equations used for coarse-graining molecular dynamics

Basic Info
  • Host: GitHub
  • Owner: jakublala
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 170 MB
Statistics
  • Stars: 2
  • Watchers: 1
  • Forks: 2
  • Open Issues: 0
  • Releases: 0
Topics
coarse-grained-molecular-dynamics differential-equations molecular-dynamics neural-ode scientific-machine-learning thesis toy-problem
Created almost 4 years ago · Last pushed about 3 years ago
Metadata Files
  • Readme
  • Citation

README.md

Coarse-Graining of Molecular Dynamics Using Neural ODEs

This project is carried out by Jakub Lála under the supervision of Stefano Angioletti-Uberti in the SoftNanoLab at Imperial College London. It started as part of my Master's thesis, but the research and development are ongoing. We utilise neural ordinary differential equations (neural ODEs), a state-of-the-art deep learning method, to learn coarse-grained (CG) machine learning (ML) potentials for arbitrary molecules, nanoparticles, and similar composite bodies. Because training on a dynamical trajectory, rather than on frozen time snapshots of configuration-energy pairs, updates many parameters at once, we aim to develop an automated coarse-graining pipeline that produces ML potentials that are computationally cheaper than running all-particle simulations of complex molecules and nanoparticles at atomistic resolution.
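To illustrate the trajectory-based training idea, here is a minimal sketch using torchdiffeq's odeint; the model, state dimensions, and loss below are illustrative stand-ins, not the repository's actual code.

```python
# A minimal sketch of trajectory-matching with a neural ODE (not the
# repository's actual training loop). Assumes torchdiffeq is installed.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ForceField(nn.Module):
    """Toy ML potential: maps the CG state to its time derivative."""

    def __init__(self, dim: int, width: int = 64, depth: int = 2):
        super().__init__()
        layers, in_dim = [], dim
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.Tanh()]
            in_dim = width
        layers.append(nn.Linear(in_dim, dim))
        self.net = nn.Sequential(*layers)

    def forward(self, t, state):
        return self.net(state)


func = ForceField(dim=6)          # e.g. 3 COM coords + 3 orientation coords
t = torch.linspace(0.0, 1.0, 50)  # time points of the reference trajectory
y0 = torch.randn(6)               # initial CG state
ref_traj = torch.randn(50, 6)     # stand-in for the ground-truth trajectory

pred_traj = odeint(func, y0, t)   # integrate the neural ODE through time
# Mean absolute difference over the whole trajectory; a single backward pass
# propagates gradients through every integration step at once.
loss = (pred_traj - ref_traj).abs().mean()
loss.backward()
```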

Theoretical Background

By treating a complex, composite body made up of many particles as a rigid body with a single centre of mass and an orientation, one tremendously reduces the number of degrees of freedom that need to be simulated, thus theoretically decreasing the computational demand. For example, a rigid body of N atoms carries only 6 degrees of freedom (three translational, three rotational) instead of 3N.
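As a concrete (hypothetical) illustration of this reduction, the snippet below contrasts the atomistic degrees of freedom of a 1000-atom body with its rigid-body description:

```python
# Illustrative only; numbers are made up and this is not code from this repo.
import numpy as np

n_atoms = 1000
positions = np.random.rand(n_atoms, 3)  # atomistic coordinates
masses = np.random.rand(n_atoms)        # atomic masses

# As a rigid body, the whole composite reduces to a centre of mass ...
com = (masses[:, None] * positions).sum(axis=0) / masses.sum()
# ... plus an orientation (e.g. a unit quaternion): 6 degrees of freedom.
atomistic_dof = 3 * n_atoms             # 3000
rigid_body_dof = 3 + 3                  # translation + rotation
print(f"DOF reduction: {atomistic_dof} -> {rigid_body_dof}")
```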

How to Use

To run the code, one first has to get familiar with the Trainer class, which forms the basis of all training, validation, and testing. It takes a config dictionary consisting of the following elements:

  • folder: relative path to the folder with the datasets
  • load_folder: relative path to a pre-trained model.pt file
  • device: device to train on
  • dtype: datatype for all tensors

  • epochs: number of training epochs

  • start_epoch: number of the starting epoch (useful when re-training a model)

  • nn_depth: depth of the neural net

  • nn_width: width of the neural net

  • batch_length: trajectory length used for training

  • eval_batch_length: trajectory length used for model performance evaluation

  • batch_size: number of trajectories in a single batch

  • shuffle: if set to True, the dataloader shuffles the trajectory order in the dataset during training

  • num_workers: number of workers used by dataloader

  • optimizer: name of the optimizer (e.g. Adam)

  • learning_rate: initial learning rate

  • scheduler: name of the scheduler (e.g. LambdaLR)

  • scheduling_factor: scheduling factor determining the rate of scheduling

  • loss_func: type of loss function

    • all: absolute mean difference of the entire trajectory
    • final: absolute mean difference of the final state in the trajectory
  • itr_printing_freq: frequency of printing for iterations in an epoch

  • printing_freq: frequency of printing for epochs

  • plotting_freq: frequency of plotting for epochs

  • stopping_freq: frequency of early stopping for epochs (e.g. due to non-convergent loss)

  • scheduling_freq: frequency of scheduling the learning rate for epochs

  • evaluation_freq: frequency of evaluating the model on the test dataset

  • checkpoint_freq: frequency of saving a checkpoint of the model

For an example run file, see run-example.py.
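Below is a hedged sketch of what such a run file might look like, assembled from the options listed above; the Trainer import path, the example values, and the train() entry point are assumptions, so defer to run-example.py as the authoritative version.

```python
import torch
from trainer import Trainer  # import path is an assumption; check the repo

config = {
    "folder": "data/toy-problem",  # datasets folder (hypothetical path)
    "load_folder": None,           # or a path to a pre-trained model.pt
    "device": "cuda" if torch.cuda.is_available() else "cpu",
    "dtype": torch.float32,
    "epochs": 100,
    "start_epoch": 0,
    "nn_depth": 2,
    "nn_width": 64,
    "batch_length": 20,            # trajectory length used for training
    "eval_batch_length": 100,      # trajectory length used for evaluation
    "batch_size": 16,
    "shuffle": True,
    "num_workers": 4,
    "optimizer": "Adam",
    "learning_rate": 1e-3,
    "scheduler": "LambdaLR",
    "scheduling_factor": 0.95,
    "loss_func": "all",            # or "final" (loss on the final state only)
    "itr_printing_freq": 10,
    "printing_freq": 1,
    "plotting_freq": 10,
    "stopping_freq": 20,
    "scheduling_freq": 10,
    "evaluation_freq": 5,
    "checkpoint_freq": 10,
}

trainer = Trainer(config)  # assumes Trainer takes the config dict directly
trainer.train()            # assumed entry point; see run-example.py
```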

Collaboration

If you are eager to help out with this project, I am more than happy to get you on board. There are many small fixes and optimisations that could be made. Also, if you want to test this code on your own simulations, I am excited to help you out, as that would also allow us to properly benchmark and test this method.

Ideally, you should create your own fork and then make a new branch for each added feature. Once you are ready for review, submit a pull request and we can discuss the changes/improvements. Either way, you can contact me at jakublala@gmail.com so that we can be in touch and figure out the details.

Owner

  • Name: Jakub Lála
  • Login: jakublala
  • Kind: user

GitHub Events

Total
  • Watch event: 2
Last Year
  • Watch event: 2

Committers

Last synced: over 1 year ago

All Time
  • Total Commits: 276
  • Total Committers: 2
  • Avg Commits per committer: 138.0
  • Development Distribution Score (DDS): 0.058
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • jakublala (j****a@g****m): 260 commits
  • Jakub Lála (6****a): 16 commits

Issues and Pull Requests

Last synced: 11 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0