https://github.com/atomicarchitects/equiformer_v2

[ICLR 2024] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.6%) to scientific vocabulary

Keywords

ai-for-science catalyst-design computational-chemistry computational-physics deep-learning drug-discovery e3nn equivariant-graph-neural-network equivariant-neural-networks force-fields geometric-deep-learning graph-neural-networks interatomic-potentials machine-learning materials-science molecular-dynamics pytorch
Last synced: 5 months ago

Repository

[ICLR 2024] EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations

Basic Info
Statistics
  • Stars: 254
  • Watchers: 4
  • Forks: 35
  • Open Issues: 17
  • Releases: 0
Topics
ai-for-science catalyst-design computational-chemistry computational-physics deep-learning drug-discovery e3nn equivariant-graph-neural-network equivariant-neural-networks force-fields geometric-deep-learning graph-neural-networks interatomic-potentials machine-learning materials-science molecular-dynamics pytorch
Created over 2 years ago · Last pushed about 1 year ago
Metadata Files
  • Readme
  • License

README.md

EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations

Paper | OpenReview | Poster

This repository contains the official PyTorch implementation of the work "EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations" (ICLR 2024). We provide code for training the base model configurations on the OC20 S2EF-2M and S2EF-All+MD datasets.

Additionally, EquiformerV2 has been incorporated into the OCP repository and is used in the Open Catalyst demo.

In our subsequent work, we find that BERT-style self-supervised learning can be generalized to 3D atomistic systems in the form of DeNS (Denoising Non-Equilibrium Structures), which improves the performance of EquiformerV2 on energy and force predictions. Please refer to the paper and the code for further details.
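To make the denoising idea concrete, below is a minimal sketch of a BERT-style denoising objective for 3D structures. It is not the repository's actual DeNS implementation; the model interface, noise scale, and loss below are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def denoising_loss(model, pos, atomic_numbers, noise_std=0.05):
    # Perturb atomic coordinates with Gaussian noise, analogous to
    # masking tokens in BERT. `noise_std` is an assumed hyperparameter.
    noise = torch.randn_like(pos) * noise_std
    noisy_pos = pos + noise

    # Assumed interface: the model returns a per-atom 3-vector from an
    # equivariant output head (NOT the real EquiformerV2 signature).
    pred = model(noisy_pos, atomic_numbers)

    # Train the network to recover the injected noise; equivariance makes
    # this per-atom vector regression well-posed under rotations.
    return F.mse_loss(pred, noise)
```

In DeNS, an objective of this kind is used as an auxiliary task alongside the standard energy and force losses rather than as a standalone pre-training stage; see the DeNS paper for the actual formulation.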

Content

  1. Environment Setup
  2. Changelog
  3. Training
  4. File Structure
  5. Checkpoints
  6. Citation
  7. Acknowledgement

Environment Setup

Environment

See here for setting up the environment.

OC20

The OC20 S2EF dataset can be downloaded by following instructions in their GitHub repository.

For example, we can download the OC20 S2EF-2M dataset by running:

```bash
cd ocp
python scripts/download_data.py --task s2ef --split "2M" --num-workers 8 --ref-energy
```

We also need to download the "val_id" data split to run training.

After downloading, place the datasets under datasets/oc20/ by using ln -s:

```bash
cd datasets
mkdir oc20
cd oc20
ln -s ~/ocp/data/s2ef s2ef
```
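Once the symlink is in place, a quick sanity check is to open one of the LMDB shards and count its entries. The shard path below is illustrative and depends on how the download script laid out the splits; adjust it to whatever exists under datasets/oc20/s2ef/.

```python
import lmdb

# Illustrative shard path; adjust to the actual layout under datasets/oc20/s2ef/.
db_path = "datasets/oc20/s2ef/2M/train/data.0000.lmdb"

# OC20 shards are single-file LMDBs, hence subdir=False; open read-only.
env = lmdb.open(db_path, subdir=False, readonly=True, lock=False)
with env.begin() as txn:
    # A positive entry count indicates the download and symlink succeeded.
    print("entries:", txn.stat()["entries"])
env.close()
```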

To train on different splits like All and All+MD, we can follow the same link above to download the datasets.

Changelog

Please see here.

Training

OC20

  1. We train EquiformerV2 on the OC20 S2EF-2M dataset by running:

    ```bash
    sh scripts/train/oc20/s2ef/equiformer_v2/equiformer_v2_N@12_L@6_M@2_splits@2M_g@multi-nodes.sh
    ```

    The above script uses 2 nodes with 8 GPUs on each node. (In the script names, N@12, L@6 and M@2 denote 12 Transformer blocks, maximum degree 6, and maximum order 2.)

    If there is an import error, it is likely that ocp/ocpmodels/common/utils.py has not been modified as required. Please follow here for details.

    We can also run training on 8 GPUs on 1 node:

    ```bash
    sh scripts/train/oc20/s2ef/equiformer_v2/equiformer_v2_N@12_L@6_M@2_splits@2M_g@8.sh
    ```

  2. We train EquiformerV2 (153M) on OC20 S2EF-All+MD by running:

    ```bash
    sh scripts/train/oc20/s2ef/equiformer_v2/equiformer_v2_N@20_L@6_M@3_splits@all+md_g@multi-nodes.sh
    ```

    The above script uses 16 nodes with 8 GPUs on each node.

  3. We train EquiformerV2 (31M) on OC20 S2EF-All+MD by running:

    ```bash
    sh scripts/train/oc20/s2ef/equiformer_v2/equiformer_v2_N@8_L@4_M@2_splits@all+md_g@multi-nodes.sh
    ```

    The above script uses 8 nodes with 8 GPUs on each node.

  4. We can train EquiformerV2 with DeNS (Denoising Non-Equilibrium Structures) as an auxiliary task to further improve the performance on energy and force predictions. Please refer to the code for details.

File Structure

  1. nets includes code of different network architectures for OC20.
  2. scripts includes scripts for training models on OC20.
  3. main_oc20.py is the code for training, evaluating and running relaxation.
  4. oc20/trainer contains code for the force trainer as well as some utility functions.
  5. oc20/configs contains config files for S2EF.

Checkpoints

We provide checkpoints of EquiformerV2 trained on the S2EF-2M dataset for 30 epochs, EquiformerV2 (31M) trained on S2EF-All+MD, and EquiformerV2 (153M) trained on S2EF-All+MD.

|Model |Split |Download |val force MAE (meV / Å) |val energy MAE (meV) |
|--- |--- |--- |--- |--- |
|EquiformerV2 |2M |checkpoint \| config |19.4 |278 |
|EquiformerV2 (31M) |All+MD |checkpoint \| config |16.3 |232 |
|EquiformerV2 (153M) |All+MD |checkpoint \| config |15.0 |227 |
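For a quick inspection of a downloaded checkpoint, the sketch below assumes the common OCP-style convention of a torch-serialized dict with a "state_dict" entry; the filename is illustrative, and it is worth verifying the actual keys of the files linked above.

```python
import torch

# Illustrative filename; substitute the checkpoint downloaded from the table above.
ckpt = torch.load("equiformer_v2_checkpoint.pt", map_location="cpu")

# Assumed layout: {"state_dict": ...}; fall back to a flat dict of tensors.
state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt

# Count parameters as a rough identity check (~31M or ~153M for the larger models).
n_params = sum(v.numel() for v in state_dict.values() if torch.is_tensor(v))
print(f"parameters: {n_params / 1e6:.1f}M")
```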

Citation

Please consider citing the works below if this repository is helpful:

  • EquiformerV2:

```bibtex
@inproceedings{equiformer_v2,
    title={{EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations}},
    author={Yi-Lun Liao and Brandon Wood and Abhishek Das* and Tess Smidt*},
    booktitle={International Conference on Learning Representations (ICLR)},
    year={2024},
    url={https://openreview.net/forum?id=mCOBKZmrzD}
}
```

  • eSCN:

```bibtex
@inproceedings{escn,
    title={{Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs}},
    author={Passaro, Saro and Zitnick, C Lawrence},
    booktitle={International Conference on Machine Learning (ICML)},
    year={2023}
}
```

  • Equiformer:

```bibtex
@inproceedings{equiformer,
    title={{Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs}},
    author={Yi-Lun Liao and Tess Smidt},
    booktitle={International Conference on Learning Representations (ICLR)},
    year={2023},
    url={https://openreview.net/forum?id=KwmPfARgOTD}
}
```

  • OC20 dataset:

```bibtex
@article{oc20,
    author = {Chanussot*, Lowik and Das*, Abhishek and Goyal*, Siddharth and Lavril*, Thibaut and Shuaibi*, Muhammed and Riviere, Morgane and Tran, Kevin and Heras-Domingo, Javier and Ho, Caleb and Hu, Weihua and Palizhati, Aini and Sriram, Anuroop and Wood, Brandon and Yoon, Junwoong and Parikh, Devi and Zitnick, C. Lawrence and Ulissi, Zachary},
    title = {{Open Catalyst 2020 (OC20) Dataset and Community Challenges}},
    journal = {ACS Catalysis},
    year = {2021},
    doi = {10.1021/acscatal.0c04525},
}
```

Please direct questions to Yi-Lun Liao (ylliao@mit.edu).

Acknowledgement

Our implementation is based on PyTorch, PyG, e3nn, timm, ocp, and Equiformer.

Owner

  • Name: The Atomic Architects
  • Login: atomicarchitects
  • Kind: organization
  • Location: United States of America

Research Group of Prof. Tess Smidt

GitHub Events

Total
  • Issues event: 10
  • Watch event: 75
  • Issue comment event: 16
  • Push event: 1
  • Fork event: 11
Last Year
  • Issues event: 10
  • Watch event: 75
  • Issue comment event: 16
  • Push event: 1
  • Fork event: 11

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 4
  • Total pull requests: 1
  • Average time to close issues: 6 days
  • Average time to close pull requests: N/A
  • Total issue authors: 4
  • Total pull request authors: 1
  • Average comments per issue: 0.75
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 4
  • Pull requests: 1
  • Average time to close issues: 6 days
  • Average time to close pull requests: N/A
  • Issue authors: 4
  • Pull request authors: 1
  • Average comments per issue: 0.75
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • patriksimurka (1)
  • yxwang1215 (1)
  • xiaolinpan (1)
  • fmocking (1)
  • feyhong1112 (1)
  • liyy2 (1)
  • TommyDzh (1)
  • Garhorne0813 (1)
  • QuantumLab-ZY (1)
  • liangzhixin-202169 (1)
  • YutackPark (1)
  • psp3dcg (1)
Pull Request Authors
  • TommyDzh (1)
  • hhh846 (1)
  • chaitjo (1)