https://github.com/aida-ugent/nrl4lp

Instructions for replicating the experiments in the paper "Benchmarking Network Embedding Models for Link Prediction: Are We Making Progress?" (DSAA2020)

Science Score: 23.0%

This score indicates how likely this project is to be science-related, based on the following indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, ieee.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.4%) to scientific vocabulary

Keywords

benchmark evaluation link-prediction network-embedding representation-learning
Last synced: 5 months ago

Repository

Basic Info
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
benchmark evaluation link-prediction network-embedding representation-learning
Created over 5 years ago · Last pushed almost 5 years ago
Metadata Files
Readme

README.md

Benchmarking Network Embedding Models for Link Prediction: Are We Making Progress?

This repository contains the instructions and materials necessary for reproducing the experiments presented in the paper: Benchmarking Network Embedding Models for Link Prediction: Are We Making Progress?

The repository is maintained by Alexandru Mara (alexandru.mara@ugent.be).

Reproducing Experiments

To reproduce the experiments presented in the paper, the following steps are necessary:

  1. Download and install the EvalNE library v0.3.2 as instructed by the authors.
  2. Download and install the implementations of the baseline methods reported in the manuscript. We recommend installing each method in its own virtual environment to ensure that the right dependencies are used (a setup sketch follows the run command below).
  3. Download the datasets used in the experiments:
     * [StudentDB](http://adrem.ua.ac.be/smurfig)
     * [Facebook](https://snap.stanford.edu/data/egonets-Facebook.html)
     * [BlogCatalog](http://socialcomputing.asu.edu/datasets/BlogCatalog3)
     * [Flickr](http://socialcomputing.asu.edu/datasets/Flickr)
     * [YouTube](http://socialcomputing.asu.edu/datasets/YouTube2)
     * [GR-QC](https://snap.stanford.edu/data/ca-GrQc.html)
     * [DBLP](https://snap.stanford.edu/data/com-DBLP.html)
     * [PPI](http://snap.stanford.edu/node2vec/#datasets)
     * [Wikipedia](http://snap.stanford.edu/node2vec/#datasets)
  4. Modify the .ini configuration files in the experiments folder to match the paths where the datasets are stored on your system as well as the paths where the methods are installed. Run the evaluation as:

    python -m evalne ./experiments/expLP1.ini
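
As a concrete illustration of the per-method isolation recommended in step 2, one possible setup is sketched below. This is a minimal sketch, not part of this repository: the environment path, method name, and requirements file are placeholders.

    # Create one isolated environment per baseline method
    # (node2vec is used here purely as an example name).
    python3 -m venv ~/envs/node2vec
    source ~/envs/node2vec/bin/activate
    # Install this method's dependencies inside its own environment,
    # so they cannot clash with those of the other baselines.
    pip install -r /path/to/node2vec/requirements.txt
    deactivate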

NOTE: To obtain results for, e.g., different values of the embedding dimensionality, modify the conf file expLP1.ini accordingly and rerun the previous command.
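
For orientation, below is a heavily abridged sketch of the kind of options such a conf file contains. The section and option names follow the EvalNE 0.3.x configuration format as documented by its authors, but the values and paths are placeholders and the real expLP1.ini contains many more options; treat this as illustrative only.

    # Abridged, illustrative EvalNE conf excerpt (placeholder values).
    [GENERAL]
    TASK = lp
    # Change this value to rerun the evaluation with another dimensionality.
    EMBED_DIM = 128

    [NETWORKS]
    NAMES = Facebook
    INPATHS = /path/to/data/facebook_combined.txt
    DIRECTED = False

    [OTHER METHODS]
    NAMES_OTHER = verse
    # Command template; EvalNE fills the {} with input/output/dim values.
    METHODS_OTHER = python main.py --input {} --output {} --dimension {}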

NOTE: For AROPE, VERSE and the GEM library, special main.py files are required to run the evaluation through EvalNE. Once these methods are installed, the corresponding main file has to be added to the root folder of the method and referenced from the .ini configuration file. These main.py files are located in the main_files folder.
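
To illustrate what such a main.py does, here is a minimal hypothetical sketch: EvalNE invokes each external method as a command line that reads a train edgelist and writes one embedding vector per node, and the wrapper's job is to adapt a library's API to that contract. The flag names and the embed() placeholder below are assumptions for illustration, not the code shipped in main_files.

    # Hypothetical EvalNE wrapper: edgelist in, node embeddings out.
    import argparse

    import networkx as nx
    import numpy as np


    def embed(graph, dim):
        # Placeholder for the real method call (e.g. AROPE or VERSE).
        # Random vectors are returned so that the sketch runs end to end.
        rng = np.random.default_rng(42)
        return {node: rng.standard_normal(dim) for node in graph.nodes()}


    def main():
        parser = argparse.ArgumentParser()
        parser.add_argument('--input', required=True)    # train edgelist path
        parser.add_argument('--output', required=True)   # embeddings output path
        parser.add_argument('--dimension', type=int, default=128)
        args = parser.parse_args()

        # Read the train edges generated by EvalNE for this split.
        graph = nx.read_edgelist(args.input)

        # Compute one embedding vector per node.
        embeddings = embed(graph, args.dimension)

        # Write one "node_id dim_1 ... dim_d" line per node.
        with open(args.output, 'w') as f:
            for node, vec in embeddings.items():
                f.write(str(node) + ' ' + ' '.join(map(str, vec)) + '\n')


    if __name__ == '__main__':
        main()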

Citation

If you have found our research useful, please consider citing our paper, which is also available on arXiv:

    @INPROCEEDINGS{9260030,
      author={A. C. {Mara} and J. {Lijffijt} and T. d. {Bie}},
      booktitle={2020 IEEE 7th International Conference on Data Science and Advanced Analytics (DSAA)},
      title={Benchmarking Network Embedding Models for Link Prediction: Are We Making Progress?},
      year={2020},
      pages={138-147},
      doi={10.1109/DSAA49011.2020.00026}
    }

Owner

  • Name: Ghent University Artificial Intelligence & Data Analytics Group
  • Login: aida-ugent
  • Kind: organization
  • Email: tijl.debie@ugent.be
  • Location: Ghent

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0