drills

DRiLLS: Deep Reinforcement Learning for Logic Synthesis Optimization (ASPDAC'20)

https://github.com/scale-lab/drills

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, ieee.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.7%) to scientific vocabulary

Keywords

abc aspdac deep-learning drills logic-synthesis machine-learning reinforcement-learning
Last synced: 6 months ago

Repository

DRiLLS: Deep Reinforcement Learning for Logic Synthesis Optimization (ASPDAC'20)

Basic Info
  • Host: GitHub
  • Owner: scale-lab
  • License: bsd-3-clause
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 418 KB
Statistics
  • Stars: 112
  • Watchers: 9
  • Forks: 35
  • Open Issues: 4
  • Releases: 0
Topics
abc aspdac deep-learning drills logic-synthesis machine-learning reinforcement-learning
Created about 7 years ago · Last pushed almost 3 years ago
Metadata Files
Readme License Citation

README.md

DRiLLS

Deep Reinforcement Learning for Logic Synthesis Optimization

Abstract

Logic synthesis requires extensive tuning of the synthesis optimization flow, where the quality of results (QoR) depends on the sequence of optimizations used. Efficient design space exploration is challenging due to the exponential number of possible optimization permutations. Therefore, automating the optimization process is necessary. In this work, we propose a novel reinforcement learning-based methodology that navigates the optimization space without human intervention. We demonstrate the training of an Advantage Actor Critic (A2C) agent that seeks to minimize area subject to a timing constraint. Using the proposed framework, designs can be optimized autonomously with no humans in the loop.

Paper

DRiLLS has been presented at ASP-DAC 2020 and the manuscript is available on IEEE Xplore. A pre-print version is available on arXiv.

Setup

DRiLLS requires Python 3.6, pip3, and virtualenv to be installed on the system.

  1. virtualenv .venv --python=python3
  2. source .venv/bin/activate
  3. pip install -r requirements.txt

:warning: WARNING :warning:

Since TensorFlow 2.x is not compatible with TensorFlow 1.x, this implementation is tested only on Python 3.6. If you have a newer version of Python, pip will not be able to find a tensorflow==1.x package.
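Before installing the requirements, a quick sanity check like the one below (not part of the repository) can confirm that the interpreter and TensorFlow versions match the pinned requirements; the exact assertions are only a suggestion.

```python
import sys

# DRiLLS is tested with Python 3.6 and TensorFlow 1.x
# (requirements.txt pins tensorflow==1.12.0).
assert sys.version_info[:2] == (3, 6), \
    "Expected Python 3.6, found %d.%d" % sys.version_info[:2]

import tensorflow as tf
assert tf.__version__.startswith("1."), \
    "Expected TensorFlow 1.x, found " + tf.__version__
print("Environment looks compatible with DRiLLS.")
```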

Run the agent

  1. Edit params.yml file. Comments in the file illustrate the individual fields.
  2. Run python drills.py train scl

For help, run python drills.py -help
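The training entry point reads its configuration from params.yml. As a rough sketch (using PyYAML, which is already pinned in requirements.txt), the file can be inspected programmatically; the actual field names are defined by the comments inside params.yml, so any key mentioned below is a placeholder, not the real schema.

```python
import yaml  # PyYAML is listed in requirements.txt

# Minimal sketch: load and inspect the run configuration before training.
# The actual fields are documented by the comments inside params.yml;
# any key names used here (e.g. "design_file") are placeholders only.
with open("params.yml") as f:
    params = yaml.safe_load(f)

print("Configured parameters:", sorted(params.keys()))
```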

How It Works

There are two major components in the DRiLLS framework:

  • Logic Synthesis environment: a formulation of the design space exploration problem as a reinforcement learning task. The logic synthesis environment is implemented as a session in drills/scl_session.py and drills/fpga_session.py.
  • Reinforcement Learning environment: it employs an Advantage Actor Critic (A2C) agent to navigate the environment, searching for the best optimization at a given state. It is implemented in drills/model.py and uses drills/features.py to extract AIG features. A minimal sketch of how these two components interact is shown below.

Figure: DRiLLS agent exploring the design space of the Max design.

For more details on the inner workings of the framework, see Section 4 in the paper.
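The following is a minimal sketch of the agent/environment loop described above. It is illustrative only: the real environment and agent live in drills/scl_session.py and drills/model.py, and the class names, feature dimensions, and reward computation below are placeholder assumptions rather than the repository's actual API.

```python
import numpy as np

# Typical ABC optimization passes form the action space (illustrative list).
ACTIONS = ["rewrite", "rewrite -z", "refactor", "refactor -z", "resub", "balance"]

class SynthesisEnv:
    """Placeholder for the logic synthesis environment (cf. drills/scl_session.py)."""

    def reset(self):
        # Load the design and return the initial AIG feature vector.
        return np.zeros(7)

    def step(self, action_index):
        # Apply one optimization pass, re-extract AIG features, and derive a
        # reward from the change in area/delay (details simplified here).
        next_state = np.random.rand(7)
        reward = float(np.random.rand())
        done = False
        return next_state, reward, done

env = SynthesisEnv()
state = env.reset()
for _ in range(10):  # one short episode
    action = np.random.randint(len(ACTIONS))  # the A2C policy would pick this
    state, reward, done = env.step(action)
    if done:
        break
```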

Reporting Bugs

Please use ISSUE_TEMPLATE/bug_report.md to create an issue and describe your bug.

Contributing

Below is a list of suggested contributions you can make. Before you work on any of them, it is advised that you create an issue using ISSUE_TEMPLATE/contribution.md to tell us what you plan to work on. This ensures that your work can be merged into the master branch in a timely manner.

Modernize TensorFlow Implementation

Google has recently released Dopamine, which sets up a framework for researching reinforcement learning algorithms. A new version of DRiLLS would adopt Dopamine to make it easier to implement the model and session classes. If you are new to Dopamine and want to try it on a real use case, it would be a great fit for DRiLLS and would add great value to our repository.

Better Integration

The current implementation interacts with the logic synthesis environment through files. This slows down agent training, since features and statistics are extracted by writing and reading files. A better integration would keep a session of yosys and abc open, where the design is loaded once at the beginning and feature extraction (and results extraction) happens through that open session; a rough sketch of the idea follows.
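A minimal sketch of this idea, assuming an abc binary is available on the PATH and a placeholder design file named design.aig; the command sequence is illustrative, and a real integration would interleave writes and reads on the open process instead of a single communicate() call.

```python
import subprocess

# Start one interactive abc process instead of re-invoking it per optimization.
abc = subprocess.Popen(
    ["abc"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)

commands = "\n".join([
    "read design.aig",  # load the design once at the start of the episode
    "print_stats",      # extract statistics without going through files
    "rewrite",          # one optimization pass chosen by the agent
    "print_stats",
    "quit",
])
out, _ = abc.communicate(commands + "\n")
print(out)
```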

Study An Enhanced Model

The goal is to enhance the model architecture used in drills/model.py. An enhancement should give better results (smaller area while still meeting the timing constraint). Possible directions, sketched below:

  • Deeper network architecture.
  • Changing the gamma (discount) rate.
  • Changing the learning rate.
  • Improving normalization.
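As a purely illustrative starting point, a deeper actor-critic head might look like the sketch below, written against the TensorFlow 1.x API pinned in requirements.txt; the layer sizes, names, and hyperparameter values are hypothetical and do not reflect the current drills/model.py.

```python
import tensorflow as tf  # TensorFlow 1.x, as pinned in requirements.txt

def build_actor_critic(states, num_actions, hidden_sizes=(256, 256, 128)):
    """Deeper shared trunk with separate actor and critic heads (illustrative)."""
    x = states
    for i, units in enumerate(hidden_sizes):
        x = tf.layers.dense(x, units, activation=tf.nn.relu, name="hidden_%d" % i)
    policy_logits = tf.layers.dense(x, num_actions, name="policy")  # actor head
    value = tf.layers.dense(x, 1, name="value")                     # critic head
    return policy_logits, value

# Hyperparameters suggested above; values are placeholders to experiment with.
gamma = 0.99
learning_rate = 1e-4
```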

Citation

@INPROCEEDINGS{9045559,
  author={A. {Hosny} and S. {Hashemi} and M. {Shalan} and S. {Reda}},
  booktitle={2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)},
  title={DRiLLS: Deep Reinforcement Learning for Logic Synthesis},
  year={2020},
  volume={},
  number={},
  pages={581-586},
}

License

BSD 3-Clause License. See the LICENSE file.

Owner

  • Name: Brown University Scale Lab
  • Login: scale-lab
  • Kind: organization
  • Location: Brown University

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If this code helped your research, please cite it as below."
authors:
- family-names: "Hosny"
  given-names: "Abdelrahman"
  orcid: "https://orcid.org/0000-0003-4020-7973"
- family-names: "Hashemi"
  given-names: "Soheil"
- family-names: "Shalan"
  given-names: "Mohamed"
- family-names: "Reda"
  given-names: "Sherief"  
title: "DRiLLS: Deep Reinforcement Learning for Logic Synthesis"
version: 1.0.0
doi: 10.1109/ASP-DAC47756.2020.9045559
date-released: 2019-11-11
url: "https://github.com/scale-lab/DRiLLS"
preferred-citation:
  type: article
  authors:
  - family-names: "Hosny"
    given-names: "Abdelrahman"
    orcid: "https://orcid.org/0000-0003-4020-7973"
  - family-names: "Hashemi"
    given-names: "Soheil"
  - family-names: "Shalan"
    given-names: "Mohamed"
  - family-names: "Reda"
    given-names: "Sherief" 
  doi: "10.1109/ASP-DAC47756.2020.9045559"
  journal: "2020 25th Asia and South Pacific Design Automation Conference (ASP-DAC)"
  month: 9
  start: 581 # First page number
  end: 586 # Last page number
  title: "DRiLLS: Deep Reinforcement Learning for Logic Synthesis"
  year: 2020

GitHub Events

Total
  • Issues event: 2
  • Watch event: 14
  • Fork event: 4
Last Year
  • Issues event: 2
  • Watch event: 14
  • Fork event: 4

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 1
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • gongyuzhangzhao (2)
Pull Request Authors

Dependencies

requirements.txt pypi
  • Keras-Applications ==1.0.8
  • Keras-Preprocessing ==1.1.0
  • Markdown ==3.1.1
  • PyYAML ==5.1.2
  • Werkzeug ==0.16.0
  • absl-py ==0.8.1
  • astor ==0.8.0
  • gast ==0.2.2
  • google-pasta ==0.1.7
  • grpcio ==1.24.3
  • h5py ==2.10.0
  • joblib ==0.14.0
  • numpy ==1.17.2
  • opt-einsum ==3.1.0
  • protobuf ==3.10.0
  • pyfiglet ==0.8.post1
  • six ==1.12.0
  • tensorflow ==1.12.0
  • termcolor ==1.1.0
  • wrapt ==1.11.2