lwtnn

light NN client

https://github.com/lwtnn/lwtnn

Science Score: 77.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Committers with academic emails
    9 of 22 committers (40.9%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (16.7%) to scientific vocabulary
Last synced: 6 months ago

Repository

light NN client

Basic Info
  • Host: GitHub
  • Owner: lwtnn
  • License: MIT
  • Language: C++
  • Default Branch: master
  • Size: 594 KB
Statistics
  • Stars: 109
  • Watchers: 4
  • Forks: 54
  • Open Issues: 26
  • Releases: 20
Created over 10 years ago · Last pushed over 1 year ago
Metadata Files
Readme Contributing Citation

README.md

Lightweight Trained Neural Network


What is this?

The code comes in two parts:

  1. A set of scripts to convert saved neural networks to a standard JSON format
  2. A set of classes which reconstruct the neural network for application in a C++ production environment

The main design principles are:

  • Minimal dependencies: The C++ code depends on C++11, Eigen, and Boost PropertyTree. The converters have additional requirements (Python 3 and h5py), but these can be run outside the C++ production environment.

  • Easy to extend: Should cover 95% of deep network architectures we would realistically consider.

  • Hard to break: The NN constructor checks the input NN for consistency and fails loudly if anything goes wrong.

We also include converters from several popular formats to the lwtnn JSON format. Currently the following formats are supported:

  • Scikit Learn
  • Keras (most popular, see below)

Why are we doing this?

Our underlying assumption is that training and inference happen in very different environments: we assume that the training environment is flexible enough to support modern and frequently-changing libraries, and that the inference environment is much less flexible.

If you have the flexibility to run any framework in your production environment, this package is not for you. If you want to apply a network you've trained with Keras in a 6M line C++ production framework that's only updated twice a year, you'll find this package very useful.

Getting the code

Clone the project from GitHub:

```bash
git clone git@github.com:lwtnn/lwtnn.git
```

Then compile with make. If you have access to a relatively new version of Eigen and Boost, everything should work without errors.

If you have CMake, you can build with no other dependencies:

```bash
mkdir build
cd build
cmake -DBUILTIN_BOOST=true -DBUILTIN_EIGEN=true ..
make -j 4
```

Running a full-chain test

To run the tests, first install h5py in a Python 3 environment, e.g. using pip:

```bash
python -m pip install -r tests/requirements.txt
```

Starting from the directory where you built the project, run

```bash
./tests/test-GRU.sh
```

(note that if you ran cmake this is ../tests/test-GRU.sh)

You should see some printouts that end with *** Success! ***.

Quick Start With Keras Functional API

The following instructions apply to the model/functional API in Keras. To see the instructions relevant to the sequential API, go to Quick Start With sequential API.

After building, there are some required steps:

1) Save your network output file

Make sure you have saved your architecture and weights files from Keras and created your input variable file. See the lwtnn Keras Converter wiki page for the correct procedure for all of this.

Then run:

```bash
lwtnn/converters/kerasfunc2json.py architecture.json weights.h5 inputs.json > neural_net.json
```

Helpful hint: if you run lwtnn/converters/kerasfunc2json.py architecture.json weights.h5 (without a third argument), it creates a skeleton of an input file for you, which you can fill in and use in the command above.
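For orientation, the variable-specification file is a small JSON document. The exact schema is defined on the lwtnn Keras Converter wiki page, so treat the field and node names below ("input_node", "offset", "scale", etc.) as an illustrative sketch rather than a reference:

```python
import json

# Hypothetical sketch of an lwtnn variable-specification file.
# Field names approximate the format described on the wiki; consult
# the skeleton produced by kerasfunc2json.py for the real schema.
spec = {
    "input_sequences": [],         # sequence (time-distributed) input nodes
    "inputs": [
        {
            "name": "input_node",  # should match a Keras input layer
            "variables": [
                # each variable carries a linear preprocessing step
                {"name": "value", "offset": 0.0, "scale": 1.0},
                {"name": "value_2", "offset": 0.0, "scale": 1.0},
            ],
        }
    ],
    "outputs": [
        {"name": "output_node", "labels": ["out_1"]}
    ],
}

print(json.dumps(spec, indent=2))
```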

2) Test your saved output file

A good idea is to test your converted network:

```bash
./lwtnn-test-lightweight-graph neural_net.json
```

A basic regression test is performed with a bunch of random numbers; it only ensures that lwtnn can in fact read your NN.

3) Apply your saved neural network within C++ code

```C++
// Include several headers. See the files for more documentation.
// First include the class that does the computation
#include "lwtnn/LightweightGraph.hh"
// Then include the JSON parsing functions
#include "lwtnn/parse_json.hh"

...

// get your saved JSON file as an std::istream object
std::ifstream input("path-to-file.json");
// build the graph
LightweightGraph graph(parse_json_graph(input));

...

// fill a map of input nodes
std::map<std::string, std::map<std::string, double> > inputs;
inputs["input_node"] = {{"value", value}, {"value_2", value_2}};
inputs["another_input_node"] = {{"another_value", another_value}};
// compute the output values
std::map<std::string, double> outputs = graph.compute(inputs);
```

Once an instance of the class LightweightNeuralNetwork is constructed, it has one method, compute, which takes a map<string, double> as input and returns a map of named outputs (of the same type). It's fine to give compute a map with more arguments than the NN requires, but if some argument is missing it will throw an NNEvaluationException.

All inputs and outputs are stored in std::maps to prevent bugs with incorrectly ordered inputs and outputs. The strings used as keys in the map are specified by the network configuration.
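To sketch why name-keyed maps prevent ordering bugs (toy Python, not the lwtnn API): the network configuration fixes the input ordering internally, so the caller's map can be in any order and may carry extra keys, while a missing key fails loudly, much as lwtnn throws NNEvaluationException:

```python
# Toy model of name-keyed input lookup; names and weights are invented.
class ToyGraph:
    def __init__(self, input_names, weights):
        self.input_names = input_names  # fixed ordering from the config
        self.weights = weights

    def compute(self, inputs):
        try:
            # look each value up by name, ignoring caller-side ordering
            vec = [inputs[name] for name in self.input_names]
        except KeyError as err:
            # analogous to lwtnn's NNEvaluationException
            raise RuntimeError(f"missing input {err}")
        return {"out": sum(w * x for w, x in zip(self.weights, vec))}

graph = ToyGraph(["pt", "eta"], [2.0, 3.0])
# same result regardless of how the caller orders the map;
# extra keys ("unused") are silently ignored
a = graph.compute({"pt": 1.0, "eta": 2.0})
b = graph.compute({"eta": 2.0, "pt": 1.0, "unused": 99.0})
assert a == b == {"out": 8.0}
```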

Supported Layers

In particular, the following layers are supported as implemented in the Keras sequential and functional models:

|                 | K sequential | K functional |
|-----------------|--------------|--------------|
| Dense           | yes          | yes          |
| Normalization   | See Note 1   | See Note 1   |
| Maxout          | yes          | yes          |
| Highway         | yes          | yes          |
| LSTM            | yes          | yes          |
| GRU             | yes          | yes          |
| Embedding       | sorta        | issue        |
| Concatenate     | no           | yes          |
| TimeDistributed | no           | yes          |
| Sum             | no           | yes          |

Note 1: Normalization layers (i.e. Batch Normalization) are only supported for Keras 1.0.8 and higher.
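Normalization layers are tractable at inference time because, once the batch statistics are frozen, batch normalization collapses to a per-feature affine transform y = scale*x + offset. A sketch of that standard folding (not lwtnn's converter code):

```python
import math

def fold_batchnorm(gamma, beta, mean, var, eps=1e-3):
    """Collapse inference-time batch normalization into y = scale*x + offset.

    Uses the standard identity y = gamma * (x - mean) / sqrt(var + eps) + beta.
    """
    scale = [g / math.sqrt(v + eps) for g, v in zip(gamma, var)]
    offset = [b - m * s for b, m, s in zip(beta, mean, scale)]
    return scale, offset

# one feature with gamma=1, beta=0.5, mean=2, var=4 (eps=0 for clarity)
scale, offset = fold_batchnorm([1.0], [0.5], [2.0], [4.0], eps=0.0)

# folded form agrees with the direct batch-norm formula
x = 3.0
direct = 1.0 * (x - 2.0) / math.sqrt(4.0) + 0.5
folded = scale[0] * x + offset[0]
assert abs(direct - folded) < 1e-12
```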

Supported Activation Functions

| Function     | Implemented? |
|--------------|--------------|
| ReLU         | Yes          |
| Sigmoid      | Yes          |
| Hard Sigmoid | Yes          |
| Tanh         | Yes          |
| Softmax      | Yes          |
| ELU          | Yes          |
| LeakyReLU    | Yes          |
| Swish        | Yes          |
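The less common entries in the table have simple closed forms. The Python sketch below uses the usual Keras conventions (in particular the piecewise-linear hard sigmoid, clip(0.2x + 0.5, 0, 1)); treating these as what lwtnn implements is an assumption, so check the C++ sources for the exact definitions:

```python
import math

# Reference definitions of a few listed activations (Keras conventions);
# plain-Python sketches, not lwtnn's C++ implementations.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def hard_sigmoid(x):
    # piecewise-linear approximation of the sigmoid
    return min(max(0.2 * x + 0.5, 0.0), 1.0)

def elu(x, alpha=1.0):
    # smooth for x < 0, identity for x >= 0
    return x if x > 0 else alpha * (math.exp(x) - 1.0)

def swish(x):
    # x gated by its own sigmoid
    return x * sigmoid(x)

assert hard_sigmoid(10.0) == 1.0 and hard_sigmoid(-10.0) == 0.0
assert elu(1.0) == 1.0 and elu(0.0) == 0.0
assert swish(0.0) == 0.0
```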

The converter scripts can be found in converters/. Run them with -h for more information.

Have problems?

For more in-depth documentation please see the lwtnn wiki.

If you find a bug in this code, or have any ideas, criticisms, etc., please email me at dguest@cern.ch.

Owner

  • Name: lwtnn
  • Login: lwtnn
  • Kind: organization

Citation (CITATION.cff)

cff-version: 1.2.0
message: "Please cite the following works when using this software."
type: software
authors:
- family-names: "Guest"
  given-names: "Daniel"
  orcid: "https://orcid.org/0000-0002-4305-2295"
  affiliation: "Humboldt-Universität zu Berlin"

- family-names: "Smith"
  given-names: "Joshua"
  orcid: "https://orcid.org/0000-0003-1819-4985"
  affiliation: "Wyatt AI"

- family-names: "Paganini"
  given-names: "Michela"
  orcid: "https://orcid.org/0000-0003-4102-8002"
  affiliation: "Google DeepMind"

- family-names: "Kagan"
  given-names: "Michael"
  orcid: "https://orcid.org/0000-0002-3386-6869"
  affiliation: "SLAC"

- family-names: "Lanfermann"
  given-names: "Marie"
  orcid: "https://orcid.org/0000-0002-2938-2757"
  affiliation: "Université de Genève"

- family-names: "Krasznahorkay"
  given-names: "Attila"
  orcid: "https://orcid.org/0000-0002-6468-1381"
  affiliation: "CERN"

- family-names: "Marley"
  given-names: "Daniel"
  orcid: "https://orcid.org/0000-0002-6290-078X"
  affiliation: "Intelinair"

- family-names: "Ghosh"
  given-names: "Aishik"
  orcid: "https://orcid.org/0000-0003-0819-1553"
  affiliation: "University of California, Irvine"

- family-names: "Huth"
  given-names: "Benjamin"
  orcid: "https://orcid.org/0000-0002-3163-1062"
  affiliation: "Universität Regensburg"

- family-names: "Feickert"
  given-names: "Matthew"
  orcid: "https://orcid.org/0000-0003-4124-7862"
  affiliation: "University of Wisconsin-Madison"
title: "lwtnn: v2.13"
version: 2.13
doi: 10.5281/zenodo.597221
repository-code: "https://github.com/lwtnn/lwtnn/releases/tag/v2.13"
url: "https://github.com/lwtnn/lwtnn/"
keywords:
  - physics
  - machine learning
  - neural networks
license: "MIT"

GitHub Events

Total
  • Issues event: 3
  • Watch event: 1
  • Issue comment event: 13
  • Push event: 1
  • Pull request review event: 5
  • Pull request review comment event: 5
  • Pull request event: 3
  • Fork event: 2
Last Year
  • Issues event: 3
  • Watch event: 1
  • Issue comment event: 13
  • Push event: 1
  • Pull request review event: 5
  • Pull request review comment event: 5
  • Pull request event: 3
  • Fork event: 2

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 414
  • Total Committers: 22
  • Avg Commits per committer: 18.818
  • Development Distribution Score (DDS): 0.3
Top Committers
Name Email Commits
Dan Guest d****t@c****h 290
Joshua Wyatt Smith j****h@c****h 47
Matthew Feickert m****t@c****h 16
Daniel Hay Guest d****t@g****m 10
Michela Paganini m****1@h****m 8
jwsmithers j****e@h****m 8
makagan m****n@g****m 7
Attila Krasznahorkay a****y@c****h 7
Daniel Edison Marley d****y@c****h 6
laurilaatu l****u@u****m 2
Marie89 m****n@c****h 2
Stefano Fanchellucci 9****l@u****m 1
Benjamin b****h@u****e 1
Benjamin Rottler b****r@c****h 1
VukanJ v****c@u****u 1
benjaminhuth 3****h@u****m 1
aighosh g****h@l****r 1
aghoshpub 3****b@u****m 1
ductng d****g@g****t 1
tprocter46 5****6@u****m 1
TJKhoo k****o@c****h 1
scott snyder s****s@k****a 1

Issues and Pull Requests

Last synced: over 2 years ago

All Time
  • Total issues: 33
  • Total pull requests: 68
  • Average time to close issues: 3 months
  • Average time to close pull requests: 7 days
  • Total issue authors: 15
  • Total pull request authors: 15
  • Average comments per issue: 3.33
  • Average comments per pull request: 1.88
  • Merged pull requests: 66
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 5
  • Average time to close issues: N/A
  • Average time to close pull requests: 4 days
  • Issue authors: 1
  • Pull request authors: 4
  • Average comments per issue: 1.0
  • Average comments per pull request: 1.4
  • Merged pull requests: 5
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • dguest (13)
  • matthewfeickert (7)
  • JCZeng1 (2)
  • benjaminhuth (1)
  • AhmedMarkhoos (1)
  • frederikfaye (1)
  • ntadej (1)
  • iarspider (1)
  • MicheleFG (1)
  • karavdin (1)
  • philippwindischhofer (1)
  • callum-mccracken (1)
  • rdisipio (1)
  • cyrilbecot (1)
  • tprocter46 (1)
Pull Request Authors
  • dguest (31)
  • matthewfeickert (17)
  • jwsmithers (6)
  • benjaminhuth (2)
  • laurilaatu (2)
  • demarley (2)
  • aghoshpub (2)
  • jcvoigt (1)
  • QuantumDancer (1)
  • VukanJ (1)
  • fwinkl (1)
  • ductng (1)
  • sfranchel (1)
  • tprocter46 (1)
  • TJKhoo (1)
Top Labels
Issue Labels
bug (4) regression test (3) documentation (1) help wanted (1) new layer (1)
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads: unknown
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 3
  • Total maintainers: 1
spack.io: lwtnn

Lightweight Trained Neural Network.

  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent repos count: 0.0%
Forks count: 14.2%
Stargazers count: 17.7%
Average: 22.3%
Dependent packages count: 57.3%
Maintainers (1)
Last synced: 6 months ago

Dependencies

tests/requirements.txt pypi
  • h5py >=2.9.0 test
.github/workflows/ci.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/lint.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite