https://github.com/axect/neural_hamilton

Official implementation of the paper "Neural Hamilton: Can A.I. Understand Hamiltonian Mechanics?"

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file: found codemeta.json file
  • .zenodo.json file: found .zenodo.json file
  • DOI references
  • Academic publication links: links to arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity: low similarity (11.7%) to scientific vocabulary

Keywords

classical-mechanics deep-learning hamiltonian-dynamics machine-learning neural-operator operator-learning
Last synced: 6 months ago

Repository

Official implementation of the paper "Neural Hamilton: Can A.I. Understand Hamiltonian Mechanics?"

Basic Info
Statistics
  • Stars: 12
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
classical-mechanics deep-learning hamiltonian-dynamics machine-learning neural-operator operator-learning
Created over 1 year ago · Last pushed 6 months ago
Metadata Files
  • Readme
  • License

README.md

Neural Hamilton

arXiv: https://arxiv.org/abs/2410.20951

This repository contains the official implementation of the paper "Neural Hamilton: Can A.I. Understand Hamiltonian Mechanics?"

Overview

Neural Hamilton reformulates Hamilton's equations as an operator learning problem, exploring whether artificial intelligence can grasp the principles of Hamiltonian mechanics without explicitly solving differential equations. The project introduces new neural network architectures specifically designed for operator learning in Hamiltonian systems.
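
For reference, the learning task maps a potential function to the resulting phase-space trajectory under Hamilton's equations. Written for a separable Hamiltonian, as is typical of the potentials studied here (the specific form below is a standard textbook statement, not taken from the paper):

```latex
H(q, p) = \frac{p^{2}}{2m} + V(q), \qquad
\dot{q} = \frac{\partial H}{\partial p} = \frac{p}{m}, \qquad
\dot{p} = -\frac{\partial H}{\partial q} = -\frac{dV}{dq}
```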

Key features:

  • Novel algorithm for generating physically plausible potential functions using Gaussian Random Fields and cubic B-splines (see the sketch after this list)
  • Multiple neural network architectures (DeepONet, TraONet, VaRONet, MambONet) for solving Hamilton's equations
  • Comparison with traditional numerical methods (4th-order Yoshida and 4th-order Runge-Kutta)
  • Performance evaluation on various physical potentials (harmonic oscillators, double-well potentials, Morse potentials)
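
A rough sketch of that generation idea follows. Everything in it (the RBF kernel, length scale, knot count, domain, and SciPy's CubicSpline interpolant standing in for the paper's cubic B-splines) is an illustrative assumption, not the repository's actual algorithm:

```python
# Hypothetical sketch of GRF + spline potential generation; kernel, length
# scale, knot count, and domain are illustrative, not the paper's settings.
import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(42)

def sample_potential(n_knots=16, length_scale=0.2, domain=(0.0, 1.0)):
    """Draw V(q) from a Gaussian random field (RBF kernel) at n_knots points,
    then interpolate with a cubic spline so V is smooth."""
    q_knots = np.linspace(*domain, n_knots)
    # RBF covariance between knot locations, with a tiny jitter for stability
    d = q_knots[:, None] - q_knots[None, :]
    cov = np.exp(-0.5 * (d / length_scale) ** 2) + 1e-10 * np.eye(n_knots)
    v_knots = rng.multivariate_normal(np.zeros(n_knots), cov)
    v_knots -= v_knots.min()          # shift so the potential is non-negative
    return CubicSpline(q_knots, v_knots)

V = sample_potential()
q = np.linspace(0.0, 1.0, 100)
print(V(q)[:5])                       # evaluate the sampled potential
```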

Installation

Prerequisites

Judging from the repository's manifests, you will need:

  • Python with the packages pinned in requirements.txt
  • A Rust toolchain (the repository includes Cargo.toml and Cargo.lock)
  • The just command runner, used in the setup step below

Setup

  1. Clone the repository:

```bash
git clone https://github.com/Axect/Neural_Hamilton
cd Neural_Hamilton
```

  2. (Recommended) Set up all dependencies and generate all data at once using just:

```bash
just all
```

Training Models

The main training script can be run with different dataset sizes:

```bash
python main.py --data normal --run_config configs/deeponet_run_optimized.yaml  # 10,000 potentials
python main.py --data more --run_config configs/deeponet_run_optimized.yaml    # 100,000 potentials
```

For hyperparameter optimization:

```bash
python main.py --data normal --run_config configs/deeponet_run.yaml --optimize_config configs/deeponet_tpe_full.yaml --device="cuda:0"
```
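
The configs/deeponet_tpe_full.yaml name suggests a Tree-structured Parzen Estimator (TPE) search, and Optuna is pinned in requirements.txt. As a hedged illustration of what such a search loop looks like (the objective, search space, and trial count below are invented for the sketch, not read from the repository's config):

```python
# Hypothetical TPE search sketch; objective, search space, and trial count
# are illustrative, not the repository's actual optimize_config.
import optuna

def objective(trial: optuna.Trial) -> float:
    nodes = trial.suggest_categorical("nodes", [64, 128, 256])
    layers = trial.suggest_int("layers", 2, 5)
    lr = trial.suggest_float("lr", 1e-5, 1e-2, log=True)
    # In the real script this would train a model and return validation loss;
    # here we return a dummy score so the sketch runs standalone.
    return (nodes / 256 - 0.5) ** 2 + layers * 1e-3 + lr

study = optuna.create_study(direction="minimize",
                            sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=20)
print(study.best_params)
```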

Analyzing Results

To analyze trained models:

```bash
python analyze.py
```

The script provides options to:

  • Evaluate model performance on test datasets
  • Generate visualizations of potential functions and trajectories
  • Compare performance with RK4 numerical solutions (a reference RK4 sketch follows this list)
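
For context on that baseline, a generic textbook 4th-order Runge-Kutta integrator for Hamilton's equations looks like the following; the step size, unit mass, and harmonic test potential are illustrative choices, not the repository's settings:

```python
# Generic RK4 integrator for Hamilton's equations q' = p/m, p' = -dV/dq.
# Step size, mass, and the harmonic test potential are illustrative choices.
import numpy as np

def rk4_hamilton(dVdq, q0, p0, t_end, dt=1e-3, m=1.0):
    """Integrate (q, p) with classical RK4 and return the trajectory."""
    def f(y):
        q, p = y
        return np.array([p / m, -dVdq(q)])
    steps = int(t_end / dt)
    y = np.array([q0, p0], dtype=float)
    traj = np.empty((steps + 1, 2))
    traj[0] = y
    for i in range(steps):
        k1 = f(y)
        k2 = f(y + 0.5 * dt * k1)
        k3 = f(y + 0.5 * dt * k2)
        k4 = f(y + dt * k3)
        y = y + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i + 1] = y
    return traj

# Simple harmonic oscillator, V(q) = q^2 / 2, so dV/dq = q.
traj = rk4_hamilton(lambda q: q, q0=1.0, p0=0.0, t_end=2 * np.pi)
print(traj[-1])  # returns near the initial state (1, 0) after one period
```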

Model Architectures

  1. DeepONet: baseline neural operator model (config example: configs/deeponet_run.yaml; a PyTorch sketch of this architecture follows the list):

```yaml
net_config:
  nodes: 128
  layers: 3
  branches: 10
```

  2. VaRONet: Variational Recurrent Operator Network (config example: configs/varonet_run.yaml):

```yaml
net_config:
  hidden_size: 512
  num_layers: 4
  latent_size: 30
  dropout: 0.0
  kl_weight: 0.1
```

  3. TraONet: Transformer Operator Network (config example: configs/traonet_run.yaml):

```yaml
net_config:
  d_model: 64
  nhead: 8
  num_layers: 3
  dim_feedforward: 512
  dropout: 0.0
```

  4. MambONet: Mamba Operator Network (config example: configs/mambonet_run.yaml):

```yaml
net_config:
  d_model: 128
  num_layers1: 4
  n_head: 4
  num_layers2: 4
  d_ff: 1024
```
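
As a concrete reference for the DeepONet baseline, here is a minimal sketch in PyTorch following the standard branch/trunk construction. The sensor count, activation, and the way nodes, layers, and branches map onto the network are assumptions for illustration, not the repository's implementation:

```python
# Minimal DeepONet sketch (standard branch/trunk design), not the repo's model.
# "nodes", "layers", and "branches" mirror the config keys above; how the
# repository wires them (and whether it predicts q, p, or both) is assumed.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, nodes, layers):
    mods, d = [], in_dim
    for _ in range(layers):
        mods += [nn.Linear(d, nodes), nn.GELU()]
        d = nodes
    mods.append(nn.Linear(d, out_dim))
    return nn.Sequential(*mods)

class DeepONet(nn.Module):
    def __init__(self, n_sensors=100, nodes=128, layers=3, branches=10):
        super().__init__()
        self.branch = mlp(n_sensors, branches, nodes, layers)  # encodes V(q) samples
        self.trunk = mlp(1, branches, nodes, layers)           # encodes query time t

    def forward(self, v_sensors, t):
        # v_sensors: (batch, n_sensors) potential values; t: (batch, n_t, 1) times
        b = self.branch(v_sensors)                 # (batch, branches)
        tr = self.trunk(t)                         # (batch, n_t, branches)
        return torch.einsum("bk,btk->bt", b, tr)   # one scalar per query time

model = DeepONet()
v = torch.randn(4, 100)        # 4 sampled potentials on a 100-point grid
t = torch.rand(4, 50, 1)       # 50 query times each
print(model(v, t).shape)       # torch.Size([4, 50])
```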

Key Results

  1. Performance Comparison:

    • MambONet consistently outperforms other architectures and RK4
    • Models show improved performance with larger training datasets
    • Neural approaches maintain accuracy over longer time periods compared to RK4
  2. Computation Time:

    • TraONet demonstrates the fastest computation time
    • MambONet and DeepONet show speeds comparable to RK4
    • VaRONet requires more computational resources
  3. Physical Potential Tests:

    • Superior performance on Simple Harmonic Oscillator, Double Well, and Morse potentials
    • Successful extrapolation to non-differentiable potentials (Mirrored Free Fall)
    • Improved accuracy on smoothed variants (Softened Mirrored Free Fall)
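
For reference, the standard textbook forms of these test potentials are shown below; the paper's exact parameterizations and offsets may differ:

```latex
V_{\mathrm{SHO}}(q) = \tfrac{1}{2} k q^{2}, \qquad
V_{\mathrm{DW}}(q) = a \left(q^{2} - b^{2}\right)^{2}, \qquad
V_{\mathrm{Morse}}(q) = D \left(1 - e^{-\alpha q}\right)^{2}
```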

Citation

If you use this code in your research, please cite:

```bibtex
@misc{kim2024neuralhamiltonaiunderstand,
  title={Neural Hamilton: Can A.I. Understand Hamiltonian Mechanics?},
  author={Tae-Geun Kim and Seong Chan Park},
  year={2024},
  eprint={2410.20951},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2410.20951},
}
```

License

MIT License

Acknowledgments

This project uses code from the following repositories:

  • mamba.py - Implementation of Mamba and parallel scan used in MambONet
  • HyperbolicLR - Implementation of ExpHyperbolicLR scheduler
  • SPlus - Implementation of SPlus optimizer

Owner

  • Name: Tae-Geun Kim
  • Login: Axect
  • Kind: user
  • Location: Seoul, South Korea
  • Company: Yonsei Univ.

Ph.D. student in particle physics & Rustacean

GitHub Events

Total
  • Watch event: 12
  • Public event: 1
  • Push event: 314
Last Year
  • Watch event: 12
  • Public event: 1
  • Push event: 314

Issues and Pull Requests

Last synced: about 1 year ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

Cargo.lock cargo
  • 146 dependencies
Cargo.toml cargo
requirements.txt pypi
  • alembic ==1.13.2
  • beaupy ==3.9.2
  • certifi ==2024.8.30
  • charset-normalizer ==3.3.2
  • click ==8.1.7
  • colorlog ==6.8.2
  • contourpy ==1.3.0
  • cycler ==0.12.1
  • docker-pycreds ==0.4.0
  • emoji ==2.13.2
  • filelock ==3.15.4
  • fonttools ==4.53.1
  • fsspec ==2024.6.1
  • gitdb ==4.0.11
  • gitpython ==3.1.43
  • greenlet ==3.0.3
  • idna ==3.8
  • jinja2 ==3.1.4
  • kiwisolver ==1.4.5
  • mako ==1.3.5
  • markdown-it-py ==3.0.0
  • markupsafe ==2.1.5
  • matplotlib ==3.9.2
  • mdurl ==0.1.2
  • mpmath ==1.3.0
  • networkx ==3.3
  • numpy ==2.1.0
  • nvidia-cublas-cu12 ==12.1.3.1
  • nvidia-cuda-cupti-cu12 ==12.1.105
  • nvidia-cuda-nvrtc-cu12 ==12.1.105
  • nvidia-cuda-runtime-cu12 ==12.1.105
  • nvidia-cudnn-cu12 ==9.1.0.70
  • nvidia-cufft-cu12 ==11.0.2.54
  • nvidia-curand-cu12 ==10.3.2.106
  • nvidia-cusolver-cu12 ==11.4.5.107
  • nvidia-cusparse-cu12 ==12.1.0.106
  • nvidia-nccl-cu12 ==2.20.5
  • nvidia-nvjitlink-cu12 ==12.6.68
  • nvidia-nvtx-cu12 ==12.1.105
  • optuna ==3.6.1
  • packaging ==24.1
  • pillow ==10.4.0
  • platformdirs ==4.2.2
  • polars ==1.6.0
  • protobuf ==5.28.0
  • psutil ==6.0.0
  • pygments ==2.18.0
  • pyparsing ==3.1.4
  • python-dateutil ==2.9.0.post0
  • python-yakh ==0.3.2
  • pyyaml ==6.0.2
  • questo ==0.3.0
  • requests ==2.32.3
  • rich ==13.8.1
  • scienceplots ==2.1.1
  • sentry-sdk ==2.13.0
  • setproctitle ==1.3.3
  • setuptools ==74.0.0
  • six ==1.16.0
  • smmap ==5.0.1
  • sqlalchemy ==2.0.32
  • survey ==5.4.0
  • sympy ==1.13.2
  • torch ==2.4.0
  • tqdm ==4.66.5
  • triton ==3.0.0
  • typing-extensions ==4.12.2
  • urllib3 ==2.2.2
  • wandb ==0.17.8