Recent Releases of lwtnn

lwtnn - v2.14.1

This adds a few new features since the last release:

  • Various updates to unit tests, thanks @matthewfeickert
  • Add support for keras "add" layer, thanks @tprocter46
  • Updates to support tensorflow 2.11.0
  • Add support for 1d CNNs, thanks to @jcvoigt
  • Stop overwriting CMAKE_CXX_FLAGS, which was breaking larger projects

- C++
Published by dguest over 1 year ago

lwtnn - Version 2.13

This adds a few new features and bug fixes:

  • Add SimpleRNN layer (thanks @laurilaatu)
  • Add Python 3.10 to the CI testing matrix, and some pre-commit hooks (@matthewfeickert)
  • Allow more operations to be wrapped in keras TimeDistributed, specifically ones that are only used in training (thanks @sfranchel)
  • Update SKLearn converter: port to python 3, add support for logistic output activation, add sanity checks (thanks @TJKhoo)
  • Fix compilation error with Eigen 3.4.0, which appeared in lwtnn version 2.10

- C++
Published by dguest almost 4 years ago

lwtnn - Version 2.12.1

This is a bugfix release. It resolves an issue some users reported where the sigmoid activation function would throw floating point overflow warnings.

To avoid the FPEs we return 0 when the sigmoid input is less than -30, and 1 when the input is greater than 30. This is the same behavior we had prior to e3622dd. The outputs should be unchanged to O(1e-6), if they change at all.
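
The clamp described above can be sketched as a small standalone function (a minimal illustration of the behavior, not the actual lwtnn source):

```cpp
#include <cassert>
#include <cmath>

// Sigmoid with the cutoffs described in the release notes: return the
// saturated value directly for large |x| instead of evaluating exp there,
// which avoids floating point overflow warnings.
double safe_sigmoid(double x) {
  if (x < -30.0) return 0.0;  // sigmoid(-30) ~ 9.4e-14, indistinguishable from 0 at O(1e-6)
  if (x > 30.0) return 1.0;
  return 1.0 / (1.0 + std::exp(-x));
}
```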

- C++
Published by dguest over 4 years ago

lwtnn - Version 2.12

This release fixes a number of bugs:

  • Properly handle InputLayer in Keras Sequential models (thanks @QuantumDancer)
  • Fix bug supporting TimeDistributed BatchNormalization layers
  • Fix bugs in lwtnn-split-keras-network.py
  • Fix some annoying warnings in compilation
  • Fixes for compilation errors in gcc11 (thanks @matthewfeickert)
  • Replace broken link for boost in the minimal install (now getting boost from a public CERN URL)

There were some tweaks to reduce element-wise calls through std::function in activation functions. This should mean a lot less pointer dereferencing.

There were also some improvements to overall code health, all from @matthewfeickert:

  • Add GitHub Actions based CI
  • Add code linting and pre-commit hooks to run it, with some of this enforced in CI
  • Expand the test matrix to include C++17, C++20, and gcc11
  • Specify dependency versions more carefully, and increase the range of allowed CMake versions

- C++
Published by dguest over 4 years ago

lwtnn - Version 2.11.1

This release fixes bugs in the CMake code.

CMake would build the project fine, but the project would be installed incorrectly. Thanks to @krasznaa for pointing out the problem.

- C++
Published by dguest about 5 years ago

lwtnn - Version 2.11

Warning: Installation with CMake doesn't work properly with this version! A fix is on the way!

Major Change: Templated Classes

The biggest change in this release is that all the core matrix classes are now templated. Thanks to @benjaminhuth, who implemented it as a way to make networks differentiable (with autodiff: https://github.com/autodiff/autodiff/). It might also be useful to lower the memory footprint of NNs by using float rather than double as the elements of Eigen matrices.

The new templated classes can be found in include/lwtnn/generic and in the lwt::generic:: namespace. To avoid breaking backward compatibility, all the old files still exist in their original locations; where possible they contain wrappers around the new templated versions. In most cases, building against these wrapper classes won't force you to include Eigen headers (as was the case before).
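
A minimal sketch of why the element type matters, using plain std::vector rather than Eigen and hypothetical names (not lwtnn's actual classes): the same dense-layer code instantiates with double or float, and float halves the storage per matrix element.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical templated dense layer: out = weights * input + bias.
// Instantiating with T = float instead of T = double halves the memory
// footprint of the stored weights, as described above.
template <typename T>
std::vector<T> dense(const std::vector<std::vector<T>>& weights,
                     const std::vector<T>& bias,
                     const std::vector<T>& input) {
  std::vector<T> out(bias);  // start from the bias vector
  for (std::size_t r = 0; r < weights.size(); ++r)
    for (std::size_t c = 0; c < input.size(); ++c)
      out[r] += weights[r][c] * input[c];
  return out;
}
```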

Major Addition: FastGraph

It turns out that looking up every input in a std::map<std::string,double> is really slow in some cases! This release adds a new interface, FastGraph, which takes its inputs as std::vector<Eigen::VectorX<T>> (for scalar inputs) or std::vector<Eigen::MatrixX<T>> (for sequences) and returns an Eigen::VectorX<T>. The FastGraph interface is templated, so you can use any element type supported by Eigen.
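
A toy comparison of the two lookup styles (hypothetical input names and functions, not the actual lwtnn API): the map-based interface resolves every input by string, while a FastGraph-style call indexes pre-ordered vectors directly.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Map-based style: every access pays for string comparisons and a tree walk.
double sum_by_name(const std::map<std::string, double>& in) {
  return in.at("pt") + in.at("eta");
}

// FastGraph-style: inputs arrive in a fixed order, so access is O(1) indexing.
double sum_by_index(const std::vector<double>& in) {
  return in[0] + in[1];
}
```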

Minor Fixes

There are quite a few fixes, mostly in the python converter code:

  • The keras converter now supports activation layers in sequences (#99)
  • For those using BUILTIN_EIGEN with CMake, bumped the version of Eigen from 3.2.9 to 3.3.7 (thanks @ductng). If you're not using BUILTIN_EIGEN this should have no effect on you.
  • Various compatibility fixes for newer versions of Keras, deprecated Travis settings, etc
  • lwtnn-split-keras-network.py no longer depends on keras

- C++
Published by dguest over 5 years ago

lwtnn - Version 2.10

This release adds a few minor things to python code. Nothing changes any C++ code but there are some fixes and extensions in the python:

  • Made the "swish" activation function more similar to the one in keras-contrib (thanks @aghoshpub)
  • Fixed keras2json.py, which was breaking on some sequential models from more recent versions of keras
  • Added a slightly more robust check for consistency between the CMake version and the tag in git

- C++
Published by dguest over 6 years ago

lwtnn - Version 2.9

This release adds several new features:

  • Inputs are read into LightweightGraph lazily. In some cases this makes it possible to evaluate parts of a multi-output graph without specifying all the inputs.
  • Sum layer to support deep sets.
  • Added an abs activation function.
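
The Sum layer for deep sets pools a variable-length sequence into a fixed-size vector; a minimal sketch of that operation (element-wise summation over time steps, not the lwtnn implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sum-pool a sequence of feature vectors: the output has one entry per
// feature, independent of the sequence length, which is what lets deep
// sets handle variable-length inputs.
std::vector<double> sum_pool(const std::vector<std::vector<double>>& sequence,
                             std::size_t n_features) {
  std::vector<double> out(n_features, 0.0);
  for (const auto& step : sequence)
    for (std::size_t i = 0; i < n_features; ++i)
      out[i] += step[i];
  return out;
}
```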

We've also fixed a number of C++17 compiler warnings (thanks to @VukanJ) and added C++17 builds to the Travis build matrix.

- C++
Published by dguest over 6 years ago

lwtnn - Version 2.8.1

This is a bugfix release which only affects networks that used ELU activation functions.

Since version 2.8 the JSON files produced with kerasfunc2json.py were unreadable on the C++ side. These JSON files are now readable.

In addition, JSON files produced after this release should be readable on the C++ side in version 2.8.

- C++
Published by dguest almost 7 years ago

lwtnn - Version 2.8

This release introduces several parameterized activation functions:

  • The old ELU function had a hardcoded alpha = 1.0; other values of alpha are now allowed
  • Added LeakyReLU (thanks @aghosh)
  • Added Swish (https://arxiv.org/abs/1710.05941, thanks again @aghosh)
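
These activations follow standard formulas; a minimal sketch (textbook definitions, not copied from the lwtnn source):

```cpp
#include <cassert>
#include <cmath>

// ELU with a configurable alpha (previously hardcoded to 1.0).
double elu(double x, double alpha) {
  return x > 0.0 ? x : alpha * (std::exp(x) - 1.0);
}

// LeakyReLU: a small configurable slope for negative inputs.
double leaky_relu(double x, double alpha) {
  return x > 0.0 ? x : alpha * x;
}

// Swish from arXiv:1710.05941: x * sigmoid(beta * x).
double swish(double x, double beta) {
  return x / (1.0 + std::exp(-beta * x));
}
```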

- C++
Published by dguest over 7 years ago

lwtnn - Version 2.7.1

This release adds one bug fix to v2.7.

Previously the CMake build would include the system default version of boost when building executables. This caused problems on older operating systems where we override the default boost version with a newer one.

- C++
Published by dguest over 7 years ago

lwtnn - Version 2.7

Changes since the last release:

  • Various tweaks to the test executables
  • Fix bug where Softmax could only be used as the activation function in a dense layer (rather than as a standalone layer)
  • Fix a bug in the CMake installation process where some converter scripts weren't marked as executable
  • Remove some old converters: agile2json.py, julian2json.py, keras2json-old.py

- C++
Published by dguest over 7 years ago

lwtnn - Version 2.6

Changes since the last release:

  • Add (some) support for ELUs (thanks @demarley)
  • Sequence inputs now allow sequences with zero elements (fixes #80)
  • Add sklearn2json.py converter to cover some networks trained with scikit-learn (thanks again to @demarley)
  • Various fixes for unit tests (thanks @jwsmithers)

- C++
Published by dguest about 8 years ago

lwtnn - Version 2.5

Main changes since the previous release:

  • Added the ability for Graph and LightweightGraph to return sequences
  • Add the sequential2graph.py script to convert the JSON files used to initialize LightweightNeuralNetwork into inputs to initialize LightweightGraph
  • Added a few regression tests
  • Fixed a bug which prevented compilation in some versions of clang (thanks Attila Krasznahorkay)

- C++
Published by dguest about 8 years ago

lwtnn - Version 2.4

This release is a major improvement when building with CMake:

  • CMake can automatically install Eigen and Boost, see README for details
  • Added CMake builds to the TravisCI build matrix
  • Make better use of CMake's ability to find existing boost and eigen libraries

We also made a few other minor improvements:

  • Added a utility lwtnn-split-keras-network.py to convert keras models saved with model.save(...) to something lwtnn can use
  • Add support for Activation layers in the Keras functional API
  • Fix several defects reported by coverity

- C++
Published by dguest over 8 years ago

lwtnn - Version 2.3

This is a minor update from version 2.2

Improvements:

  • Added support for time distributed dense layers
  • Added the option to build using CMake
  • Clean up documentation (still a work in progress)

- C++
Published by dguest over 8 years ago

lwtnn - Version 2.2

Version 2.2 adds support for Keras 2.

Other minor improvements to the Keras converters:

  • The converters now work with networks trained with Keras and the TensorFlow backend
  • Unit tests cover merge layers when working with the keras functional API
  • Layers that aren't needed for inference (e.g. dropout) are properly ignored

- C++
Published by dguest over 8 years ago

lwtnn - Version 2.1 Prerelease

The main improvement in version 2.1 is that we now support the Keras functional API in the Graph and LightweightGraph classes. This means that you can run arbitrary network graphs with multiple inputs and outputs, merging, and shared layers.

We also made a number of small changes and improvements:

  • The code is now distributed under the MIT Licence
  • Added lwtnn-count-parameters.py script, to count the number of parameters in a lwtnn json formatted model
  • All layers, stacks, and graphs, along with the high-level APIs, are now stateless to improve thread safety
  • The Keras converter prints a warning if you try to convert a file from a Keras version before 1.2
  • The configuration headers have been further factorized into "low-level" LayerConfig and NodeConfig classes (which are used to initialize Graph and Stack) in NNLayerConfig.hh and the "high-level" configuration objects in lightweight_network_config.hh
  • Improved the documentation: we'll try to keep a list of supported layers and Keras versions in the README from now on
  • Fixed an issue with numerical stability in the softmax activation function
  • Cleaned up unit tests
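
The usual softmax stability fix is to subtract the maximum input before exponentiating, which avoids overflow without changing the result; a minimal sketch (a standard technique, not the lwtnn source):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Numerically stable softmax: exp(x_i - max) never overflows, and the
// shift cancels in the normalization, so the probabilities are unchanged.
std::vector<double> softmax(const std::vector<double>& x) {
  double max_val = x[0];
  for (double v : x) max_val = std::max(max_val, v);
  std::vector<double> out(x.size());
  double sum = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    out[i] = std::exp(x[i] - max_val);
    sum += out[i];
  }
  for (double& v : out) v /= sum;
  return out;
}
```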

- C++
Published by dguest almost 9 years ago

lwtnn - Release for Athena v21

We've added lots of new features since release v1.0:

  • New layers: we now support batch normalization, highway layers, and GRUs
  • Regression tests with Travis CI
  • Test scripts to compare Keras NN evaluation with lwtnn

Also quite a lot of refactoring:

  • The classes have been split into "low-level" code in Stack and "high-level" interfaces in LightweightNeuralNetwork
  • The recurrent classes have been merged into Stack and LightweightNeuralNetwork

- C++
Published by dguest about 9 years ago

lwtnn - Athena Release 21

- C++
Published by dguest about 9 years ago