deep-q-reinforcement-learning-for-quantum-circuit-compilation-from-scratch

Educational implementation of a deep Q-learning agent for quantum circuit compilation task

https://github.com/marcinplodzien/deep-q-reinforcement-learning-for-quantum-circuit-compilation-from-scratch

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (7.9%) to scientific vocabulary
Last synced: 6 months ago

Repository

Educational implementation of a deep Q-learning agent for quantum circuit compilation task

Basic Info
  • Host: GitHub
  • Owner: MarcinPlodzien
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 103 KB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created almost 2 years ago · Last pushed almost 2 years ago
Metadata Files
Readme License Citation

README.md

Deep-Q-Reinforcement-learning-for-quantum-circuit-compilation-from-Scratch

Simple and straightforward educational implementation of a Deep-Q-learning and Double-Deep-Q-learning agent for the quantum circuit compilation task.

The code contains:

  1. An implementation of the Environment representing the quantum state of an L-qubit system, with a set of actions given by predefined quantum gates. The state of the environment corresponds to the quantum state of the L qubits.

  2. An implementation of the Agent acting on the environment via application of a given quantum gate. The agent applies a chosen gate to the current state of the environment and obtains a reward. The reward is "+1" if the fidelity between the target state and the state of the environment increases, "-1" if it decreases, and "0" if it does not change. Alternatively, the reward can be defined as the fidelity between the current state and the target state, or as the change in fidelity after the action is taken.

  3. The Q-table is replaced by a simple feed-forward neural network (a Q-network).
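The environment-and-reward scheme described above can be sketched as follows. This is a minimal illustration; the class name, constructor signature, and termination threshold are assumptions, not the repository's actual API:

```python
import numpy as np

class QuantumEnv:
    """Environment whose state is the statevector of an L-qubit system.

    Actions are predefined quantum gates, represented here as full
    2^L x 2^L unitary matrices. Hypothetical sketch, not the repo's API.
    """

    def __init__(self, L, gates, target_state):
        self.L = L
        self.gates = gates          # list of 2^L x 2^L unitaries
        self.target = target_state  # target statevector
        self.reset()

    def reset(self):
        # Start in |00...0>.
        self.state = np.zeros(2 ** self.L, dtype=complex)
        self.state[0] = 1.0
        return self.state

    def fidelity(self):
        # |<target|state>|^2
        return abs(np.vdot(self.target, self.state)) ** 2

    def step(self, action):
        f_before = self.fidelity()
        self.state = self.gates[action] @ self.state
        f_after = self.fidelity()
        # Reward: +1 if fidelity increased, -1 if it decreased, 0 otherwise.
        reward = float(np.sign(f_after - f_before))
        done = f_after > 0.99  # assumed success threshold
        return self.state, reward, done
```

For a single qubit with only a Hadamard gate and target |+>, one step already reaches the target and yields reward +1.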

The code runs a set of episodes during which the Agent learns. The history of each episode is collected in a pandas DataFrame, from which one can extract the optimal sequence of gates implementing the unitary transformation that maps the initial state to the target state with high fidelity.
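An episode loop of this kind, with the per-step history collected into a pandas DataFrame, might look like the following sketch. The agent interface, column names, and fidelity threshold are assumptions for illustration:

```python
import pandas as pd

def run_episodes(env, agent, n_episodes, max_steps=50):
    """Run training episodes; record every step in a DataFrame."""
    records = []
    for episode in range(n_episodes):
        env.reset()
        for step in range(max_steps):
            action = agent.choose_action(env.state)  # e.g. epsilon-greedy
            _, reward, done = env.step(action)
            records.append({"episode": episode, "step": step,
                            "gate": action, "reward": reward,
                            "fidelity": env.fidelity()})
            agent.learn(reward)  # Q-network update (interface assumed)
            if done:
                break
    return pd.DataFrame(records)

def best_gate_sequence(history, threshold=0.99):
    """Extract the gate sequence of the first episode reaching the target."""
    good = history[history["fidelity"] >= threshold]
    if good.empty:
        return None
    ep = good["episode"].iloc[0]
    return history[history["episode"] == ep]["gate"].tolist()
```

Filtering the DataFrame by fidelity is one simple way to recover the circuit; the repository may extract the optimal sequence differently.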

In principle, the set of available gates can be arbitrary, so connectivity restrictions can be taken into account. The code can also be extended to gates with finite fidelity, allowing circuits to be designed for a target hardware platform.
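As one illustration of a connectivity-restricted gate set, the sketch below builds Hadamard gates on every site and CZ gates only between nearest neighbours on a linear chain. All names here are hypothetical, not from the repository:

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CZ = np.diag([1.0, 1.0, 1.0, -1.0])

def embed(op, qubits, L):
    """Embed an operator on consecutive `qubits` into the full L-qubit space."""
    left = qubits[0]              # identities to the left
    right = L - qubits[-1] - 1    # identities to the right
    full = op
    for _ in range(left):
        full = np.kron(I2, full)
    for _ in range(right):
        full = np.kron(full, I2)
    return full

def linear_chain_gates(L):
    """Gate set for a linear chain: H on every site, CZ on neighbours only."""
    gates = []
    for q in range(L):
        gates.append(embed(H, [q], L))
    for q in range(L - 1):  # connectivity restriction: nearest neighbours
        gates.append(embed(CZ, [q, q + 1], L))
    return gates
```

For L = 3 this yields five actions (three Hadamards, two CZs), each a full 8 x 8 unitary the environment can apply to its statevector.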

To learn about the theoretical foundations of Reinforcement Learning for Quantum Technologies, I strongly recommend our book "Modern Applications of Machine Learning in Quantum Sciences": https://arxiv.org/abs/2204.04198

Examples:

  1. Preparing the 4-qubit |GHZ> state starting from the |0000> state with a Deep-Q-Learning agent using Hadamard and CZ gates (figure in the repository).

  2. Preparing the 3-qubit |GHZ> state starting from the |000> state with a Double-Deep-Q-Learning agent using Hadamard and CNOT gates (figure in the repository).

If you use this code for work that you publish, please cite this repository.

Owner

  • Name: Marcin Plodzien
  • Login: MarcinPlodzien
  • Kind: user
  • Location: Barcelona
  • Company: ICFO

Marcin Płodzień is a theoretical physicist working in the field of many-body systems, with a focus on quantum simulators and deep learning for quantum mechanics.

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Płodzień"
  given-names: "Marcin"
  orcid: "https://orcid.org/0000-0002-0835-1644"
title: "Deep-Q-Reinforcement-Learning-for-quantum-circuit-compilation-from-Scratch"
date-released: 2024-04-27
url: "https://github.com/MarcinPlodzien/Deep-Q-Reinforcement-learning-for-quantum-state-compilation-from-Scratch"
