Science Score: 52.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
    Organization mit-emze has institutional domain (emze.csail.mit.edu)
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.4%) to scientific vocabulary
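
The indicator list above suggests the score is a weighted aggregation of boolean checks plus a vocabulary-similarity term. A minimal sketch of that idea, assuming invented indicator names and weights (these are illustrative only, not the catalog's actual formula, and they do not reproduce the 52.0% figure):

```python
# Hypothetical weighted-indicator score. Indicator names mirror the list
# above; the weights are assumptions made for this sketch.
INDICATOR_WEIGHTS = {
    "citation_cff": 0.15,
    "codemeta_json": 0.15,
    "zenodo_json": 0.15,
    "doi_references": 0.10,
    "publication_links": 0.10,
    "academic_email_domains": 0.05,
    "institutional_owner": 0.15,
    "joss_metadata": 0.05,
}

def science_score(found: set, vocab_similarity: float) -> float:
    """Sum the weights of matched indicators, then add a vocabulary
    term scaled into the remaining 0.10 of the total weight."""
    base = sum(w for name, w in INDICATOR_WEIGHTS.items() if name in found)
    return round(100 * (base + 0.10 * vocab_similarity), 1)

# Indicators this page reports as found, plus the 13.4% similarity:
print(science_score(
    {"citation_cff", "codemeta_json", "zenodo_json", "institutional_owner"},
    0.134,
))
```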
Last synced: 6 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: mit-emze
  • License: MIT
  • Language: Jupyter Notebook
  • Default Branch: main
  • Size: 7.06 MB
Statistics
  • Stars: 64
  • Watchers: 2
  • Forks: 26
  • Open Issues: 1
  • Releases: 0
Created almost 2 years ago · Last pushed 7 months ago
Metadata Files
  • Readme
  • License
  • Citation

README.md

CiMLoop (Timeloop+Accelergy v4)

Welcome to the repository for "CiMLoop: A Flexible, Accurate, and Fast Compute-In-Memory Modeling Tool" by Tanner Andrulis, Vivienne Sze, and Joel S. Emer. CiMLoop is a full-stack CiM modeling tool with flexible user-defined systems and fast, accurate statistical energy modeling.

This repository contains tutorials, examples, documentation, and an artifact for the CiMLoop paper. All are accessible through the Docker container.

For the paper or presentations relating to CiMLoop, please visit the project website.

Quick Start

```bash
git clone https://github.com/mit-emze/cimloop.git
cd cimloop
export DOCKER_ARCH=   # set to your machine's architecture
# arm64 is also supported, BUT IS NOT STABLE. Use at your own risk.
# It is recommended to build Timeloop and Accelergy from source if you're
# using an ARM machine.
docker-compose pull
docker-compose up
# Connect to the container and explore CiMLoop! The README.md file
# in the workspace directory contains more information.
# If you have permission issues, please see the instructions in the
# docker-compose.yaml file on how to set the UID and GID.
```

The README.md in the workspace directory contains more information on how to use CiMLoop.

Tutorials

The Timeloop and Accelergy tutorials are prerequisites for using CiMLoop, so please complete them first. CiMLoop tutorials are available in the workspace/tutorials directory.

CiMLoop Artifact

CiMLoop published results can be reproduced by running the models/arch/1_macro/*/_guide.ipynb notebooks and the tutorials/demo_speed_accuracy.ipynb notebook in the workspace.
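
As a rough sketch, the reproduction step above can be scripted. The loop below only prints one `jupyter nbconvert` command per artifact notebook (a standard way to execute notebooks non-interactively, not a command taken from this repository); pipe its output to `sh` inside the container's workspace directory to actually run them:

```bash
# Hypothetical dry-run helper. Notebook paths are taken from the text above;
# globs are quoted so they expand only when the commands are executed.
for nb in \
  "models/arch/1_macro/*/_guide.ipynb" \
  "tutorials/demo_speed_accuracy.ipynb"
do
  echo "jupyter nbconvert --to notebook --execute --inplace $nb"
done
```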

CiMLoop for Photonic Accelerators

CiMLoop includes a model of the Albireo silicon photonic accelerator as described in "Architecture-Level Modeling of Photonic Deep Neural Network Accelerators" (Andrulis et al., ISPASS 2024).

Documentation

Documentation can be found at the following locations:

Contributing

If you would like to contribute models of memory cells, components, architectures, workloads, or anything else, please submit a pull request. We welcome contributions!

Citation

If you use CiMLoop in your research, please cite the papers in citations.bib.

Owner

  • Name: MIT Emze Group
  • Login: mit-emze
  • Kind: organization
  • Location: United States of America


Citation (citations.bib)

@inproceedings{cimloop,
  author={Andrulis, Tanner and Emer, Joel S. and Sze, Vivienne},
  booktitle={2024 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)}, 
  title={{CiMLoop}: A Flexible, Accurate, and Fast Compute-In-Memory Modeling Tool}, 
  year={2024},
  volume={},
  number={},
  pages={10-23},
  keywords={Compute-In-Memory;Processing-In-Memory;Analog;Deep Neural Networks;Systems;Hardware;Modeling;Open-Source},
  doi={10.1109/ISPASS61541.2024.00012}
}
@inproceedings{timeloop,
  title={Timeloop: A Systematic Approach to {DNN} Accelerator Evaluation},
  author={Parashar, Angshuman and Raina, Priyanka and Shao, Yakun Sophia and
  Chen, Yu-Hsin and Ying, Victor A. and Mukkara, Anurag and Venkatesan,
  Rangharajan and Khailany, Brucek and Keckler, Stephen W. and Emer, Joel},
  booktitle={2019 IEEE International Symposium on Performance Analysis of
  Systems and Software (ISPASS)},
  pages={304--315},
  year={2019},
  organization={IEEE}
}
@inproceedings{accelergy,
  title={Accelergy: An architecture-level energy estimation methodology for accelerator designs},
  author={Wu, Yannan Nellie and Emer, Joel S and Sze, Vivienne},
  booktitle={2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)},
  pages={1--8},
  year={2019},
  organization={IEEE}
}
@inproceedings{ruby,
  author = {M. Horeni and P. Taheri and P. Tsai and A. Parashar and J. Emer and S. Joshi},
  booktitle = {2022 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)},
  title = {Ruby: Improving Hardware Efficiency for Tensor Algebra Accelerators Through Imperfect Factorization},
  year = {2022},
  volume = {},
  issn = {},
  pages = {254-266},
  keywords = {deep learning;tensors;codes;neural networks;computer architecture;parallel processing;hardware},
  doi = {10.1109/ISPASS55109.2022.00039},
  url = {https://doi.ieeecomputersociety.org/10.1109/ISPASS55109.2022.00039},
  publisher = {IEEE Computer Society},
  address = {Los Alamitos, CA, USA},
  month = {may}
}

# If you use CiMLoop for photonic accelerator modeling, please cite the following paper:
@inproceedings{cimloop_photonics,
  author={Andrulis, Tanner and Chaudhry, Gohar Irfan and Suriyakumar, Vinith M. and Emer, Joel S. and Sze, Vivienne},
  booktitle={2024 IEEE International Symposium on Performance Analysis of Systems and Software (ISPASS)},
  title={Architecture-Level Modeling of Photonic Deep Neural Network Accelerators},
  year={2024},
  volume={},
  number={},
  pages={},
  keywords={photonics, optical computing, photonic computing, compute-in-memory, modeling, accelerator},
  doi={}
}

# If you use the built-in ADC model, please cite the following paper:
@misc{andrulis2024modeling,
      title={Modeling Analog-Digital-Converter Energy and Area for Compute-In-Memory Accelerator Design}, 
      author={Tanner Andrulis and Ruicong Chen and Hae-Seung Lee and Joel S. Emer and Vivienne Sze},
      year={2024},
      eprint={2404.06553},
      archivePrefix={arXiv},
      primaryClass={cs.AR}
}

# If you use any of the NeuroSim components (row drivers, column drivers,
# memory cells), please cite the following paper:
@article{DNN+NeuroSim,
  author={Peng, Xiaochen and Huang, Shanshi and Jiang, Hongwu and Lu, Anni and Yu, Shimeng},
  journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
  title={{DNN+NeuroSim} {V2.0}: An End-to-End Benchmarking Framework for Compute-in-Memory Accelerators for On-Chip Training},
  year={2021},
  volume={40},
  number={11},
  pages={2306-2319},
  doi={10.1109/TCAD.2020.3043731}
}

GitHub Events

Total
  • Issues event: 14
  • Watch event: 23
  • Issue comment event: 21
  • Push event: 7
  • Pull request event: 4
  • Fork event: 18
Last Year
  • Issues event: 14
  • Watch event: 23
  • Issue comment event: 21
  • Push event: 7
  • Pull request event: 4
  • Fork event: 18

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 8
  • Total pull requests: 1
  • Average time to close issues: 6 months
  • Average time to close pull requests: N/A
  • Total issue authors: 7
  • Total pull request authors: 1
  • Average comments per issue: 2.63
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 7
  • Pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 6
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • kmkluaymaii (3)
  • jinquan-shi (1)
  • samjijina (1)
  • shhj9787 (1)
  • Adrija-debug (1)
  • Maeesha180 (1)
  • Farbin (1)
  • jacopopalumbo01 (1)
  • tony-liu1996 (1)
  • EricHsin (1)
  • doughyunk (1)
  • phemashekar-semron (1)
  • Muddassar180009 (1)
Pull Request Authors
  • sopsahl (1)
Top Labels
Issue Labels
Pull Request Labels