bottleneck
Code for the paper: "On the Bottleneck of Graph Neural Networks and Its Practical Implications"
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (8.8%) to scientific vocabulary
Keywords
Repository
Basic Info
Statistics
- Stars: 94
- Watchers: 5
- Forks: 22
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
On the Bottleneck of Graph Neural Networks and its Practical Implications
This is the official implementation of the paper: On the Bottleneck of Graph Neural Networks and its Practical Implications (ICLR'2021), which introduces the over-squashing problem of GNNs.
By Uri Alon and Eran Yahav. See also the [video], [poster] and [slides].
This repository is divided into three sub-projects:
- The subdirectory `tf-gnn-samples` is a clone of https://github.com/microsoft/tf-gnn-samples by Brockschmidt (ICML'2020). This project can be used to reproduce the QM9 and VarMisuse experiments of Sections 4.2 and 4.4 in the paper. This sub-project depends on TensorFlow 1.13. The instructions for our clone are the same as for the original code, except that our experiments (the QM9 and VarMisuse datasets) can be reproduced by running the script `tf-gnn-samples/run_qm9_benchs_fa.py` or `tf-gnn-samples/run_varmisuse_benchs_fa.py` instead of the original scripts. For additional dependencies and instructions, see the original README: https://github.com/microsoft/tf-gnn-samples/blob/master/README.md. The main modification we made is using a Fully-Adjacent layer as the last GNN layer, as described in our paper.
- The subdirectory `gnn-comparison` is a clone of https://github.com/diningphil/gnn-comparison by Errica et al. (ICLR'2020). This project can be used to reproduce the biological experiments (Section 4.3, the ENZYMES and NCI1 datasets). This sub-project depends on PyTorch 1.4 and PyTorch Geometric. For additional dependencies and instructions, see the original README: https://github.com/diningphil/gnn-comparison/blob/master/README.md. The instructions for our clone are the same, except that we added a flag called `last_layer_fa` to every `config_*.yml` file; it is set to `True` by default and reproduces our experiments. The main modification we made is using a Fully-Adjacent layer as the last GNN layer.
- The main directory (in which this file resides) can be used to reproduce the experiments of Section 4.1 in the paper, the "Tree-NeighborsMatch" problem. The rest of this README contains the instructions for this main directory.
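The common modification across the sub-projects is the Fully-Adjacent (FA) last layer: the final GNN layer operates on a complete graph rather than the input topology, so every pair of nodes can exchange information in a single step. As a rough illustration (plain Python, not the repository's actual implementation), the edge list such a layer would use can be built like this:

```python
from itertools import permutations

def fully_adjacent_edges(num_nodes):
    """Return the directed edge list of a complete graph (no self-loops).

    In a PyTorch Geometric model, this list (converted to a 2 x E
    edge_index tensor) would replace the original edge_index for the
    *last* message-passing layer only, while earlier layers keep the
    original graph topology.
    """
    return [(src, dst) for src, dst in permutations(range(num_nodes), 2)]

# A complete directed graph on n nodes has n * (n - 1) edges.
edges = fully_adjacent_edges(4)
print(len(edges))  # 12
```

The point of the FA layer is that it changes only the last layer's connectivity, so the rest of the architecture and its parameters stay untouched.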
This project was designed to be useful in experimenting with new GNN architectures and new solutions for the over-squashing problem.
Feel free to open an issue with any questions.
The Tree-NeighborsMatch problem

Requirements
Dependencies
This project is based on PyTorch 1.4.0 and the PyTorch Geometric library.
* First, install PyTorch from the official website: https://pytorch.org/.
* Then install PyTorch Geometric: https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html
* Finally, run the following to verify that the remaining dependencies are satisfied:
pip install -r requirements.txt
The requirements.txt file lists the additional requirements.
However, PyTorch Geometric might require manual installation, and we therefore recommend using the
requirements.txt file only afterward.
Verify that importing the dependencies goes without errors:
python -c 'import torch; import torch_geometric'
Hardware
Training on large trees (depth=8) might require ~60GB of RAM and about 10GB of GPU memory.
GPU memory usage can be reduced by using a smaller batch size together with the --accum_grad flag.
For example, instead of running:
python main.py --batch_size 1024 --type GGNN
The following uses gradient accumulation, and takes less GPU memory:
python main.py --batch_size 512 --accum_grad 2 --type GGNN
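Conceptually, gradient accumulation splits each batch into smaller micro-batches and combines their gradients before a single optimizer step. A toy numeric illustration (plain Python, assuming the flag averages a mean loss over equally sized micro-batches):

```python
def grad_mse(w, xs, ys):
    """Gradient of the mean squared error of y = w * x over a batch."""
    n = len(xs)
    return sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n

# One full batch of 4 examples...
xs, ys, w = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0], 0.5
full = grad_mse(w, xs, ys)

# ...gives the same gradient as averaging over 2 micro-batches of 2,
# which is what accumulating gradients over 2 steps amounts to
# before a single optimizer update.
accum = (grad_mse(w, xs[:2], ys[:2]) + grad_mse(w, xs[2:], ys[2:])) / 2
print(abs(full - accum) < 1e-12)  # True
```

Because only one micro-batch is resident on the GPU at a time, peak memory drops roughly in proportion to the accumulation factor, at the cost of more forward/backward passes per update.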
Reproducing Experiments
To run a single experiment from the paper, first see the available flags:
python main.py --help
For example, to train a GGNN with depth=4, run:
python main.py --task DICTIONARY --eval_every 1000 --depth 4 --num_layers 5 --batch_size 1024 --type GGNN
To train a GNN across all depths, run one of the following:
python run-gcn-2-8.py
python run-gat-2-8.py
python run-ggnn-2-8.py
python run-gin-2-8.py
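Each of these scripts presumably just sweeps tree depths 2 through 8 and invokes main.py with matching hyperparameters. A hypothetical equivalent sketch (the exact flags per depth may differ from the real scripts; the `num_layers = depth + 1` rule is an assumption based on the single-experiment example above):

```python
import subprocess
import sys

def depth_sweep_commands(gnn_type, depths=range(2, 9)):
    """Build one main.py command line per tree depth (hypothetical sketch)."""
    return [
        [sys.executable, "main.py",
         "--task", "DICTIONARY",
         "--eval_every", "1000",
         "--depth", str(d),
         "--num_layers", str(d + 1),  # assumed: one layer more than the depth
         "--batch_size", "1024",
         "--type", gnn_type]
        for d in depths
    ]

for cmd in depth_sweep_commands("GGNN"):
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run each training
```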
Results
The results of running the above scripts are (Section 4.1 in the paper):

| depth | 2 | 3 | 4 | 5 | 6 | 7 | 8 |
| ----- | --- | --- | ---- | ---- | ---- | ---- | ---- |
| GGNN | 1.0 | 1.0 | 1.0 | 0.60 | 0.38 | 0.21 | 0.16 |
| GAT | 1.0 | 1.0 | 1.0 | 0.41 | 0.21 | 0.15 | 0.11 |
| GIN | 1.0 | 1.0 | 0.77 | 0.29 | 0.20 | | |
| GCN | 1.0 | 1.0 | 0.70 | 0.19 | 0.14 | 0.09 | 0.08 |
Experiment with other GNN types
To experiment with other GNN types:
* Add the new GNN type to the `GNN_TYPE` enum, for example: `MY_NEW_TYPE = auto()`
* Add another `elif self is GNN_TYPE.MY_NEW_TYPE:` branch to instantiate the new GNN type object
* Use the new type as a flag for the `main.py` file:
python main.py --type MY_NEW_TYPE ...
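The steps above can be sketched as follows (a standalone illustration: the repository's real `GNN_TYPE` enum constructs PyTorch Geometric layer objects, whereas placeholder strings stand in for them here, and `get_layer` is a hypothetical method name):

```python
from enum import Enum, auto

class GNN_TYPE(Enum):
    GCN = auto()
    GGNN = auto()
    MY_NEW_TYPE = auto()  # step 1: add the new enum member

    def get_layer(self, in_dim, out_dim):
        # step 2: add an elif branch that instantiates the new layer type
        if self is GNN_TYPE.GCN:
            return f"GCNConv({in_dim}, {out_dim})"
        elif self is GNN_TYPE.MY_NEW_TYPE:
            return f"MyNewConv({in_dim}, {out_dim})"
        raise ValueError(f"Unsupported GNN type: {self}")

# step 3: main.py resolves the --type flag to an enum member by name:
layer = GNN_TYPE["MY_NEW_TYPE"].get_layer(32, 32)
print(layer)  # MyNewConv(32, 32)
```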
Citation
If you want to cite this work, please use this bibtex entry:
@inproceedings{
alon2021on,
title={On the Bottleneck of Graph Neural Networks and its Practical Implications},
author={Uri Alon and Eran Yahav},
booktitle={International Conference on Learning Representations},
year={2021},
url={https://openreview.net/forum?id=i80OPhOCVH2}
}
Owner
- Name: tech-srl
- Login: tech-srl
- Kind: organization
- Repositories: 25
- Profile: https://github.com/tech-srl
Citation (CITATION.cff)
GitHub Events
Total
- Watch event: 4
- Fork event: 1
Last Year
- Watch event: 4
- Fork event: 1
Issues and Pull Requests
Last synced: 8 months ago
All Time
- Total issues: 7
- Total pull requests: 1
- Average time to close issues: about 1 month
- Average time to close pull requests: less than a minute
- Total issue authors: 7
- Total pull request authors: 1
- Average comments per issue: 3.0
- Average comments per pull request: 0.0
- Merged pull requests: 1
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- jhonygiraldo (1)
- mengliu1998 (1)
- SteveTanggithub (1)
- UnBuen (1)
- MGwave (1)
- Barcavin (1)
- shangqing-liu (1)
Pull Request Authors
- urialon (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- networkx *
- pyyaml *
- requests *
- torch *
- torch_cluster *
- torch_geometric *
- torch_scatter *
- torch_sparse *
- attrdict ==2.0.1
- sklearn *
- torch >=1.4.0
- torch-geometric >=1.4.2
- torch-scatter >=2.0.4
- torch-sparse >=0.6.0
- torchvision >=0.5.0
- docopt *
- dpu-utils >=0.1.30
- numpy *
- tensorflow-gpu >=1.13.1