torch-sim-atomistic

Torch-native, batchable, atomistic simulations.

https://github.com/radical-ai/torch-sim

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.0%) to scientific vocabulary
Last synced: 6 months ago

Repository

Torch-native, batchable, atomistic simulations.

Basic Info
Statistics
  • Stars: 274
  • Watchers: 8
  • Forks: 38
  • Open Issues: 23
  • Releases: 5
Created about 1 year ago · Last pushed 7 months ago
Metadata Files
Readme Changelog Contributing License Citation

README.md

TorchSim

This project supports Python 3.11+.

TorchSim is a next-generation open-source atomistic simulation engine for the MLIP era. By rewriting the core primitives of atomistic simulation in PyTorch, it enables orders-of-magnitude acceleration of popular machine learning potentials.

  • Automatic batching and GPU memory management allowing significant simulation speedup
  • Support for MACE, Fairchem, SevenNet, ORB, MatterSim, graph-pes, and metatomic MLIP models
  • Support for classical Lennard-Jones, Morse, and soft-sphere potentials
  • Molecular dynamics integration schemes like NVE, NVT Langevin, and NPT Langevin
  • Relaxation of atomic positions and cell with gradient descent and FIRE
  • Swap Monte Carlo and hybrid swap Monte Carlo algorithms
  • An extensible binary trajectory writing format with support for arbitrary properties
  • A simple and intuitive high-level API for new users
  • Integration with ASE, Pymatgen, and Phonopy
  • and more: differentiable simulation, elastic properties, custom workflows...
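The automatic batching mentioned above can be illustrated with a toy sketch in plain Python. This is not TorchSim's actual autobatcher, just a greedy illustration of the idea: systems are packed into batches so that no batch exceeds a fixed atom budget, mimicking how batch sizes are chosen to fit GPU memory.

```python
# Toy greedy size-based batching (illustrative only, not TorchSim's
# implementation): group systems into batches whose total atom count
# stays within a budget; the budget stands in for GPU memory capacity.

def greedy_batches(n_atoms_per_system, max_atoms_per_batch):
    """Return lists of system indices, each list fitting the atom budget."""
    batches, current, current_atoms = [], [], 0
    for idx, n_atoms in enumerate(n_atoms_per_system):
        # flush the current batch if adding this system would exceed the budget
        if current and current_atoms + n_atoms > max_atoms_per_batch:
            batches.append(current)
            current, current_atoms = [], 0
        current.append(idx)
        current_atoms += n_atoms
    if current:
        batches.append(current)
    return batches

# e.g. 50 copies of a 32-atom Cu cell, with a 500-atom budget per batch
print(greedy_batches([32] * 50, 500))
```

With these numbers each batch holds 15 systems (480 atoms), so the 50 systems split into three full batches plus a remainder of 5.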

Quick Start

Here is a quick demonstration of many of the core features of TorchSim: native support for GPUs, MLIP models, ASE integration, simple API, autobatching, and trajectory reporting, all in under 40 lines of code.

Running batched MD

```py
import torch
import torch_sim as ts

# run natively on gpus
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# easily load the model from mace-mp
from mace.calculators.foundations_models import mace_mp
from torch_sim.models.mace import MaceModel

mace = mace_mp(model="small", return_raw_model=True)
mace_model = MaceModel(model=mace, device=device)

from ase.build import bulk

cu_atoms = bulk("Cu", "fcc", a=3.58, cubic=True).repeat((2, 2, 2))
many_cu_atoms = [cu_atoms] * 50
trajectory_files = [f"Cu_traj_{i}.h5md" for i in range(len(many_cu_atoms))]

# run them all simultaneously with batching
final_state = ts.integrate(
    system=many_cu_atoms,
    model=mace_model,
    n_steps=50,
    timestep=0.002,
    temperature=1000,
    integrator=ts.integrators.nvt_langevin,
    trajectory_reporter=dict(filenames=trajectory_files, state_frequency=10),
)
final_atoms_list = final_state.to_atoms()

# extract the final energy from the trajectory file
final_energies = []
for filename in trajectory_files:
    with ts.TorchSimTrajectory(filename) as traj:
        final_energies.append(traj.get_array("potential_energy")[-1])

print(final_energies)
```

Running batched relaxation

To then relax those structures with FIRE is just a few more lines.

```py
# relax all of the high temperature states
relaxed_state = ts.optimize(
    system=final_state,
    model=mace_model,
    optimizer=ts.frechet_cell_fire,
    autobatcher=True,
)

print(relaxed_state.energy)
```
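Both optimizers named earlier (gradient descent and FIRE) step positions downhill along the forces until the forces fall below a tolerance. A toy one-dimensional gradient-descent relaxation (illustrative only, not TorchSim's actual FIRE implementation) captures the idea:

```python
# Toy steepest-descent relaxation on a 1D harmonic bond (illustrative
# only): step the coordinate along the force until |force| < fmax.

def relax(x, k=1.0, x0=2.0, lr=0.1, fmax=1e-6, max_steps=1000):
    """Minimize the harmonic energy E(x) = 0.5 * k * (x - x0)**2."""
    for _ in range(max_steps):
        force = -k * (x - x0)  # force = -dE/dx
        if abs(force) < fmax:  # converged: forces below tolerance
            break
        x += lr * force        # steepest-descent step
    return x

print(relax(x=0.0))  # converges to the equilibrium position x0 = 2.0
```

FIRE improves on this scheme by adding adaptive damping and an adaptive step size, and TorchSim's `frechet_cell_fire` additionally relaxes the cell, but the convergence criterion is the same in spirit.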

Speedup

TorchSim achieves up to 100x speedup compared to ASE with popular MLIPs.

Speedup comparison

This figure compares the time per atom of ASE and torch_sim. Time per atom is defined as the total time divided by the number of atoms. While ASE can only run a single system of n_atoms (on the $x$ axis), torch_sim can run as many systems as will fit in memory. On an H100 80 GB card, the maximum number of atoms that fit in memory was ~8,000 for EGIP, ~10,000 for MACE-MPA-0, ~22,000 for MatterSim V1 1M, ~2,500 for SevenNet, and ~9,000 for PET-MAD. Because it captures speed and memory usage simultaneously, this metric describes overall model performance.
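The metric above is straightforward to compute from a benchmark run (a sketch, assuming the wall time is measured over the whole batched run):

```python
# Compute the "time per atom" metric from a batched run: total wall
# time divided by the total number of atoms across all systems.

def time_per_atom(total_time_s, n_atoms_per_system):
    """Return seconds per atom; lower is better."""
    return total_time_s / sum(n_atoms_per_system)

# e.g. 50 systems of 32 atoms each, simulated in 4.0 s of wall time
print(time_per_atom(4.0, [32] * 50))  # -> 0.0025 s per atom
```

Because the denominator counts atoms across every system in the batch, a model that fits more systems in memory scores better even at the same raw speed.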

Installation

PyPI Installation

```sh
pip install torch-sim-atomistic
```

Installing from source

```sh
git clone https://github.com/radical-ai/torch-sim
cd torch-sim
pip install .
```

Examples

To understand how TorchSim works, start with the comprehensive tutorials in the documentation.

Core Modules

TorchSim's package structure is summarized in the API reference documentation and drawn as a treemap below.

TorchSim package treemap

License

TorchSim is released under an MIT license.

Citation

If you use TorchSim in your research, please cite the arXiv preprint.

Owner

  • Name: Radical AI
  • Login: Radical-AI
  • Kind: organization

Citation (citation.cff)

cff-version: 1.2.0
title: TorchSim
message: If you use this software, please cite it as below.
authors:
  - family-names: Gangan
    given-names: Abhijeet S.
  - family-names: Cohen
    given-names: Orion Archer
  - family-names: Riebesell
    given-names: Janosh
  - family-names: Goodall
    given-names: Rhys
  - family-names: Kolluru
    given-names: Adeesh
  - family-names: Falletta
    given-names: Stefano
license: MIT
license-url: https://github.com/Radical-AI/torch-sim/blob/main/LICENSE
repository-code: https://github.com/Radical-AI/torch-sim
url: https://github.com/Radical-AI/torch-sim
type: software
date-released: 2025-04-02

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 25
  • Total pull requests: 72
  • Average time to close issues: 9 days
  • Average time to close pull requests: 2 days
  • Total issue authors: 16
  • Total pull request authors: 15
  • Average comments per issue: 0.68
  • Average comments per pull request: 1.25
  • Merged pull requests: 45
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 25
  • Pull requests: 72
  • Average time to close issues: 9 days
  • Average time to close pull requests: 2 days
  • Issue authors: 16
  • Pull request authors: 15
  • Average comments per issue: 0.68
  • Average comments per pull request: 1.25
  • Merged pull requests: 45
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • orionarcher (7)
  • CompRhys (4)
  • YutackPark (3)
  • hongmoxian (2)
  • janosh (2)
  • ryanliu30 (2)
  • zdcao121 (1)
  • ZKC19940412 (1)
  • hn-yu (1)
  • zhang1045343477 (1)
  • jla-gardner (1)
  • t-reents (1)
  • rohan335 (1)
  • Seungwoo-Hwang (1)
  • stefanbringuier (1)
Pull Request Authors
  • orionarcher (22)
  • curtischong (17)
  • janosh (16)
  • CompRhys (10)
  • AdeeshKolluru (4)
  • mstapelberg (3)
  • Luthaf (3)
  • t-reents (3)
  • abhijeetgangan (2)
  • ryanliu30 (2)
  • stefanbringuier (2)
  • YutackPark (2)
  • frostedoyster (1)
  • jla-gardner (1)
  • zaporter (1)
Top Labels
Issue Labels
bug (8) enhancement (5) refactor (4) geo-opt (3) breaking (1) fix (1) ux (1) docs (1) ci (1) ecosystem (1) md (1)
Pull Request Labels
cla-signed (70) breaking (7) tests (5) fix (5) ecosystem (4) geo-opt (4) ci (4) docs (4) enhancement (3) pkg (2) feature (2) ux (2) refactor (1) examples (1) models (1) lint (1) md (1) types (1) bug (1)

Packages

  • Total packages: 3
  • Total downloads:
    • pypi 359,087 last-month
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 0
    (may contain duplicates)
  • Total versions: 15
  • Total maintainers: 3
proxy.golang.org: github.com/Radical-AI/torch-sim
  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.4%
Average: 5.6%
Dependent repos count: 5.8%
Last synced: 6 months ago
proxy.golang.org: github.com/radical-ai/torch-sim
  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.4%
Average: 5.6%
Dependent repos count: 5.8%
Last synced: 6 months ago
pypi.org: torch-sim-atomistic

A pytorch toolkit for calculating material properties using MLIPs

  • Documentation: https://torch-sim-atomistic.readthedocs.io/
  • License: The MIT License (MIT), Copyright 2025 Radical AI
  • Latest release: 0.3.0
    published 7 months ago
  • Versions: 5
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 359,087 Last month
Rankings
Dependent packages count: 9.0%
Downloads: 19.0%
Average: 26.3%
Dependent repos count: 50.8%
Maintainers (3)
Last synced: 6 months ago

Dependencies

.github/workflows/docs.yml actions
  • actions/checkout v4 composite
  • actions/deploy-pages v4 composite
  • actions/setup-python v5 composite
  • actions/upload-pages-artifact v3 composite
  • astral-sh/setup-uv v2 composite
.github/workflows/link-check.yml actions
  • actions/checkout v4 composite
  • gaurav-nelson/github-action-markdown-link-check v1 composite
.github/workflows/lint.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
.github/workflows/test.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v5 composite
  • astral-sh/setup-uv v2 composite
  • codecov/codecov-action v5 composite
pyproject.toml pypi