Larq

Larq: An Open-Source Library for Training Binarized Neural Networks - Published in JOSS (2020)

https://github.com/larq/larq

Science Score: 95.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references: 4 found in README and JOSS metadata
  • Academic publication links: joss.theoj.org
  • Committers with academic emails: 2 of 18 committers (11.1%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata: published in the Journal of Open Source Software

Keywords

binarized-neural-networks binder deep-learning keras larq machine-learning python quantized-neural-networks tensorflow

Keywords from Contributors

simulations cryptocurrencies graph-generation mesh physics energy-system mathematics

Scientific Fields

Engineering / Computer Science (40% confidence)
Last synced: 4 months ago

Repository

An Open-Source Library for Training Binarized Neural Networks

Basic Info
  • Host: GitHub
  • Owner: larq
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage: https://larq.dev
  • Size: 1020 KB
Statistics
  • Stars: 720
  • Watchers: 29
  • Forks: 85
  • Open Issues: 30
  • Releases: 38
Topics
binarized-neural-networks binder deep-learning keras larq machine-learning python quantized-neural-networks tensorflow
Created almost 7 years ago · Last pushed over 1 year ago
Metadata Files
Readme · Contributing · License · Code of conduct

README.md


Larq is an open-source deep learning library for training neural networks with extremely low precision weights and activations, such as Binarized Neural Networks (BNNs).

Existing deep neural networks use 32, 16, or 8 bits to encode each weight and activation, making them large, slow, and power-hungry. This rules out many applications in resource-constrained environments. Larq is the first step towards solving this. It is designed to provide an easy-to-use, composable way to train BNNs (1 bit) and other types of Quantized Neural Networks (QNNs), and it is based on the tf.keras interface. Note that efficient inference with a trained BNN requires an optimized inference engine; we provide these for several platforms in Larq Compute Engine.

Larq is part of a family of libraries for BNN development; you can also check out Larq Zoo for pretrained models and Larq Compute Engine for deployment on mobile and edge devices.

Getting Started

To build a QNN, Larq introduces the concepts of quantized layers and quantizers. A quantizer defines how a full-precision input is transformed into a quantized output, as well as the pseudo-gradient used in the backward pass. Each quantized layer accepts an input_quantizer and a kernel_quantizer that describe how the layer's incoming activations and weights are quantized, respectively. If both input_quantizer and kernel_quantizer are None, the layer is equivalent to a full-precision layer.
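
For intuition, here is a minimal illustrative sketch (not from the README; it assumes larq and TensorFlow are installed) that applies the built-in ste_sign quantizer, larq.quantizers.SteSign, directly to a tensor:

```python
import tensorflow as tf
import larq

# Illustrative only: apply the SteSign quantizer to a tensor.
# Forward pass: values are binarized to -1 or +1.
# Backward pass: the straight-through estimator passes gradients through for
# inputs within the clip range and zeroes them outside it.
x = tf.constant([-1.3, -0.4, 0.6, 2.1])
binarized = larq.quantizers.SteSign(clip_value=1.0)(x)
print(binarized.numpy())  # e.g. [-1. -1.  1.  1.]
```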

You can define a simple binarized, fully connected Keras model that uses the Straight-Through Estimator as follows:

```python
model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(),
        larq.layers.QuantDense(
            512, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
        larq.layers.QuantDense(
            10,
            input_quantizer="ste_sign",
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
            activation="softmax",
        ),
    ]
)
```

These quantized layers can be used inside any Keras model or with a custom training loop.
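
The following is a minimal training sketch, not taken from the README: it re-creates the model above and trains it with the standard Keras workflow (MNIST, the optimizer, and the epoch count are illustrative choices):

```python
import tensorflow as tf
import larq

# Load and normalize MNIST (illustrative dataset choice).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential(
    [
        tf.keras.layers.Flatten(),
        larq.layers.QuantDense(
            512, kernel_quantizer="ste_sign", kernel_constraint="weight_clip"
        ),
        larq.layers.QuantDense(
            10,
            input_quantizer="ste_sign",
            kernel_quantizer="ste_sign",
            kernel_constraint="weight_clip",
            activation="softmax",
        ),
    ]
)

# Standard Keras training loop; the quantizers handle the binarization.
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=1, validation_data=(x_test, y_test))

# Print a model summary with quantization details (see the Larq docs).
larq.models.summary(model)
```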

Examples

Check out our examples to see how to train a Binarized Neural Network in just a few lines of code.

Installation

Before installing Larq, please install:

  • Python version 3.7, 3.8, 3.9, or 3.10
  • TensorFlow version 1.14, 1.15, or 2.0 to 2.10:

```shell
pip install tensorflow  # or tensorflow-gpu
```

You can install Larq with Python's pip package manager:

```shell
pip install larq
```
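
As a quick post-install sanity check (a minimal sketch, not part of the README), you can import the package and query the installed version; importlib.metadata requires Python 3.8 or newer:

```python
import importlib.metadata

import larq  # verifies that larq and its TensorFlow dependency import cleanly

# Query the installed version from package metadata (Python 3.8+).
print(importlib.metadata.version("larq"))
```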

About

Larq is being developed by a team of deep learning researchers and engineers at Plumerai to help accelerate both our own research and the general adoption of Binarized Neural Networks.

Owner

  • Name: larq
  • Login: larq
  • Kind: organization
  • Location: London - Amsterdam

An Open-Source Deep Learning Library for Training Binarized Neural Networks

JOSS Publication

Larq: An Open-Source Library for Training Binarized Neural Networks
Published
January 16, 2020
Volume 5, Issue 45, Page 1746
Authors
  • Lukas Geiger (Plumerai Research)
  • Plumerai Team (Plumerai Research)
Editor
  • Yuan Tang
Tags
python tensorflow keras deep-learning machine-learning binarized-neural-networks quantized-neural-networks efficient-deep-learning

Papers & Mentions

Total mentions: 1

FPGA-Based Acceleration on Additive Manufacturing Defects Inspection
Last synced: 2 months ago

GitHub Events

Total
  • Issues event: 9
  • Watch event: 21
  • Issue comment event: 27
  • Fork event: 2
Last Year
  • Issues event: 9
  • Watch event: 21
  • Issue comment event: 27
  • Fork event: 2

Committers

Last synced: 5 months ago

All Time
  • Total Commits: 735
  • Total Committers: 18
  • Avg Commits per committer: 40.833
  • Development Distribution Score (DDS): 0.507
Past Year
  • Commits: 1
  • Committers: 1
  • Avg Commits per committer: 1.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Lukas Geiger l****r 362
dependabot[bot] 4****] 212
dependabot-preview[bot] 2****] 72
Leon Overweel l****l@g****m 20
Adam Hillier 7****r 14
Koen Helwegen k****n@p****m 12
MariaHeuss 5****s 10
Jelmer Neeven 1****n 8
James Widdicombe j****6@g****m 8
Koen Helwegen k****n@g****m 6
Arash Bakhtiari b****r@i****e 3
Simon Brugman s****n 2
Peter Fackeldey p****y@r****e 1
Joschua-Conrad 3****d 1
Tim de Bruin t****m@p****m 1
Tom Bannink t****k@g****m 1
imgbot[bot] 3****] 1
rnusselder 4****r 1

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 45
  • Total pull requests: 157
  • Average time to close issues: 22 days
  • Average time to close pull requests: 10 days
  • Total issue authors: 11
  • Total pull request authors: 6
  • Average comments per issue: 1.36
  • Average comments per pull request: 0.32
  • Merged pull requests: 130
  • Bot issues: 0
  • Bot pull requests: 73
Past Year
  • Issues: 6
  • Pull requests: 0
  • Average time to close issues: 6 days
  • Average time to close pull requests: N/A
  • Issue authors: 3
  • Pull request authors: 0
  • Average comments per issue: 5.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • lgeiger (28)
  • koenhelwegen (5)
  • Oslomayor (2)
  • x7779 (2)
  • Sameeksha0709 (2)
  • jamescook106 (1)
  • vfdev-5 (1)
  • 17387990341 (1)
  • dependabot[bot] (1)
  • Junshuai1024 (1)
  • Nannigalaxy (1)
Pull Request Authors
  • dependabot[bot] (97)
  • lgeiger (74)
  • jamescook106 (4)
  • koenhelwegen (3)
  • RanHomri (1)
  • yotamazriel (1)
Top Labels
Issue Labels
documentation (12) feature (7) API (4) question (2) dependencies (1) python (1) good first issue (1)
Pull Request Labels
dependencies (97) python (87) documentation (12) github_actions (10) internal-improvement (7) skip-changelog (4) feature (4) bug (1) breaking-change (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi: 1,315 last month
  • Total dependent packages: 1
  • Total dependent repositories: 8
  • Total versions: 37
  • Total maintainers: 1
pypi.org: larq

An Open Source Machine Learning Library for Training Binarized Neural Networks

  • Versions: 37
  • Dependent Packages: 1
  • Dependent Repositories: 8
  • Downloads: 1,315 last month
Rankings
Dependent packages count: 4.8%
Dependent repos count: 5.2%
Average: 5.5%
Downloads: 6.7%
Maintainers (1)
Last synced: 4 months ago

Dependencies

setup.py pypi
  • dataclasses *
  • importlib-metadata *
  • numpy *
  • terminaltables >=3.1.0
.github/workflows/lint.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/publish.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • pypa/gh-action-pypi-publish master composite
.github/workflows/release-notes.yml actions
  • toolmantim/release-drafter v5.22.0 composite
.github/workflows/unittest.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite