https://github.com/1qb-information-technologies/cool

Controlled Online Optimization Learning (COOL): Finding the Ground State of Spin Hamiltonians with Reinforcement Learning (arXiv:2003.00011)


Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (15.5%) to scientific vocabulary

Repository

Controlled Online Optimization Learning (COOL): Finding the Ground State of Spin Hamiltonians with Reinforcement Learning (arXiv:2003.00011)

Basic Info
  • Host: GitHub
  • Owner: 1QB-Information-Technologies
  • License: gpl-3.0
  • Language: C
  • Default Branch: master
  • Size: 1.33 MB
Statistics
  • Stars: 9
  • Watchers: 3
  • Forks: 4
  • Open Issues: 1
  • Releases: 0
Created almost 6 years ago · Last pushed over 5 years ago
Metadata Files
  • Readme
  • License

README.md

Controlled Online Optimization Learning (COOL): Finding the ground state of spin Hamiltonians with reinforcement learning

COOL

Building

You must compile the Simulated Annealing backend (written in C++) before using the gym environment.

These instructions were made using a fresh Ubuntu 18.04 installation, with Anaconda 2020.02 (Python 3.7) installed. Anaconda can be obtained from https://www.anaconda.com/distribution/.

Linux (Debian)

Install dependencies and build the Python interface to the Simulated Annealing backend:

```bash
apt install swig g++ libtclap-dev libboost-dev
make install
```
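The backend built above performs simulated annealing on spin Hamiltonians. As a rough illustration of the idea only (a toy pure-Python single-spin-flip annealer on a ferromagnetic Ising chain; this is not the C++ code in this repository):

```python
# Toy single-spin-flip simulated annealing on a 1D Ising chain.
# Illustrative only -- the actual backend is the C++ code built above.
import math
import random

def energy(spins, J=1.0):
    """Open-boundary Ising chain energy: E = -J * sum_i s_i * s_{i+1}."""
    return -J * sum(s * t for s, t in zip(spins, spins[1:]))

def anneal(n=20, sweeps=200, t_start=2.0, t_end=0.05, seed=0):
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for sweep in range(sweeps):
        # Linear temperature schedule from t_start down to t_end.
        t = t_start + (t_end - t_start) * sweep / (sweeps - 1)
        for _ in range(n):
            i = rng.randrange(n)
            old = energy(spins)
            spins[i] = -spins[i]                  # propose a single spin flip
            new = energy(spins)
            # Metropolis rule: reject uphill moves with prob 1 - exp(-dE/t).
            if new > old and rng.random() >= math.exp((old - new) / t):
                spins[i] = -spins[i]              # reject: undo the flip
    return spins, energy(spins)

spins, e = anneal()
```

The ground state of this ferromagnetic chain is all spins aligned, with energy -(n-1); the annealer should end at or very near that value.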

Mac OS X

This package should work on a Mac, but the dependencies are harder to satisfy. You will probably need to edit setup.py to point it manually at your Boost include paths, tclap headers, and so on. On Linux, apt handles this automatically.
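For reference, a sketch of what such a setup.py edit might look like. Every module name, source file, and path below is illustrative and will differ on your machine (Homebrew, for instance, puts headers under /usr/local or /opt/homebrew):

```python
# Sketch of a manually edited setup.py for a SWIG-wrapped C++ backend.
# All names and paths here are illustrative -- adjust to your installation.
from setuptools import setup, Extension

sa_backend = Extension(
    "_sa_backend",                               # hypothetical SWIG module name
    sources=["sa_backend.i", "sa_backend.cpp"],  # illustrative source files
    swig_opts=["-c++"],
    include_dirs=[
        "/usr/local/include",                    # e.g. Boost headers
        "/usr/local/opt/tclap/include",          # e.g. tclap headers
    ],
    language="c++",
)

setup(name="cool-sa-backend", ext_modules=[sa_backend])
```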

Examples

Example experiments relevant to the COOL manuscript are included in experiments/. The README examples for these experiments use the COOL_HOME environment variable; defining it is optional but helpful. It should point to the top-level directory of this repository.

```bash
export COOL_HOME=/home/user/git/COOL/
```

If you wish to run the examples (reinforcement learning code), you will need to install the Python dependencies.

```bash
pip install stable-baselines
conda install tensorflow-gpu=1.13
```
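The experiments then train an agent against the COOL gym environment. The sketch below shows only the gym-style interaction loop; the dummy environment and random policy are stand-ins for the real environment and the stable-baselines agent, and all names here are illustrative:

```python
# Schematic gym-style training loop. DummyAnnealEnv is a stand-in for the
# COOL environment: the action nudges a temperature, and the reward here
# simply pretends that lower temperature is better.
import random

class DummyAnnealEnv:
    def __init__(self, t0=2.0):
        self.t0 = t0

    def reset(self):
        self.t = self.t0
        self.steps = 0
        return self.t                       # observation

    def step(self, action):
        self.t = max(0.0, self.t + action)  # action adjusts the schedule
        self.steps += 1
        reward = -self.t                    # toy reward: lower temp is better
        done = self.steps >= 10             # fixed-length episode
        return self.t, reward, done, {}

env = DummyAnnealEnv()
obs = env.reset()
total = 0.0
done = False
rng = random.Random(0)
while not done:
    action = rng.uniform(-0.3, 0.0)         # random "policy" for illustration
    obs, reward, done, info = env.step(action)
    total += reward
```

In the real experiments, a stable-baselines agent replaces the random policy and learns to choose the actions.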

Research paper:

arXiv:2003.00011

Note:

environment.yml captures the developer's Python environment as used during development. It lists more software than is required, but can be consulted for package version numbers should the need arise. Most users will not need this file.


Owner

  • Name: 1QBit
  • Login: 1QB-Information-Technologies
  • Kind: organization
  • Location: Vancouver, BC

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Dependencies

environment.yml pypi
  • absl-py ==0.7.1
  • argparse ==1.4.0
  • astor ==0.8.0
  • atari-py ==0.1.15
  • bleach ==3.1.0
  • chardet ==3.0.4
  • cloudpickle ==1.2.1
  • cycler ==0.10.0
  • docutils ==0.14
  • future ==0.17.1
  • gast ==0.2.2
  • grpcio ==1.21.1
  • gym ==0.12.5
  • idna ==2.8
  • inflect ==2.1.0
  • joblib ==0.13.2
  • kiwisolver ==1.1.0
  • matplotlib ==3.1.0
  • mpi4py ==3.0.2
  • networkx ==2.3
  • opencv-python ==4.1.0.25
  • pandas ==0.24.2
  • pillow ==6.0.0
  • pip ==19.1.1
  • pkginfo ==1.5.0.1
  • progressbar ==2.5
  • protobuf ==3.8.0
  • pyglet ==1.3.2
  • pyparsing ==2.4.0
  • pytz ==2019.1
  • readme-renderer ==24.0
  • regex ==2019.6.8
  • requests ==2.22.0
  • requests-toolbelt ==0.9.1
  • scipy ==1.3.0
  • six ==1.12.0
  • termcolor ==1.1.0
  • tqdm ==4.32.1
  • twine ==1.13.0
  • urllib3 ==1.25.3
  • webencodings ==0.5.1
setup.py pypi
  • gym *
  • numpy *