rosenna

Fast, minimally-invasive neural network inference library

https://github.com/comp-physics/rosenna

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 4 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.5%) to scientific vocabulary

Keywords

cfd fortran inference library model neural-network
Last synced: 6 months ago

Repository

Fast, minimally-invasive neural network inference library

Basic Info
  • Host: GitHub
  • Owner: comp-physics
  • License: MIT
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 1.03 MB
Statistics
  • Stars: 20
  • Watchers: 2
  • Forks: 2
  • Open Issues: 0
  • Releases: 3
Topics
cfd fortran inference library model neural-network
Created almost 4 years ago · Last pushed about 1 year ago
Metadata Files
  • Readme
  • License
  • Citation

README.md

roseNNa banner

RoseNNa is a fast, portable, and minimally-intrusive library for neural network inference. It runs inference on neural networks in the ONNX format, a universal format that PyTorch, TensorFlow, Keras, and other frameworks can export to. RoseNNa's intended use case is embedding neural networks in Fortran- and C-based HPC codebases: one compiles RoseNNa and links it to an existing PDE (e.g., CFD) solver written in C or Fortran, and can then evaluate the neural network from the PDE solver at Fortran/C speeds.

RoseNNa currently supports RNNs, CNNs, and MLPs. The library is written in optimized Fortran and outperforms PyTorch by a factor of 2 to 5 for the relatively small neural networks used in physics applications, like computational fluid dynamics. RoseNNa is described in detail in A. Bati, S. H. Bryngelson (2024) Comp. Phys. Comm., 296, 109052.

Hello RoseNNa

```fortran
program hello_roseNNa

  use rosenna
  implicit none

  real, dimension(1,1,28,28) :: input  ! model inputs
  real, dimension(1,5)       :: output ! model outputs

  call initialize()             ! reads weights
  call use_model(input, output) ! run inference

end program
```

This example program links to the roseNNa library, loads the trained model, and runs inference on it. Only a few lines are required to use the library: use rosenna, call initialize(), and call use_model(args).

Dependencies

We have minimal dependencies. For example, on macOS you can get away with just

```shell
brew install wget make cmake coreutils gcc
pip install torch onnx numpy fypp onnxruntime pandas
```

Basic Example

Here is a quick example of how roseNNa works. With just a few steps, you can see how to convert a basic feed-forward neural network originally built with PyTorch into usable, accurate code in Fortran.

First, cd into the fLibrary/ directory.

Then, create a PyTorch model and convert it to ONNX:

```bash
python ../goldenFiles/gemm_small/gemm_small.py
```
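For orientation, a script like gemm_small.py plausibly builds a tiny PyTorch MLP and exports the two ONNX files the next step consumes. Here is a simplified, hypothetical sketch, not the actual file in goldenFiles/gemm_small/:

```python
import torch
import torch.nn as nn

# tiny stand-in feed-forward (GEMM) model
model = nn.Sequential(nn.Linear(2, 3), nn.Sigmoid(), nn.Linear(3, 3))
model.eval()

inp = torch.ones(1, 2)
# structure file (with constant folding) and weights file (without)
torch.onnx.export(model, inp, "gemm_small.onnx", do_constant_folding=True)
torch.onnx.export(model, inp, "gemm_small_weights.onnx", do_constant_folding=False)
```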

Read and interpret the corresponding output files from the last step:

```bash
python modelParserONNX.py -f ../goldenFiles/gemm_small/gemm_small.onnx -w ../goldenFiles/gemm_small/gemm_small_weights.onnx
```

Then compile the library:

```bash
make library
```

Compile the "source file" (capiTester.f90) and link it to the library file created:

```bash
gfortran -c ../examples/capiTester.f90 -IobjFiles/
gfortran -o flibrary libcorelib.a capiTester.o
./flibrary
```

Finally, check that the output of the PyTorch model matches roseNNa's output:

```bash
python ../test/testChecker.py gemm_small
```
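Conceptually, that last check just compares the two outputs numerically. A hypothetical sketch of such a comparison (not the actual test/testChecker.py; the file names are illustrative only):

```python
import numpy as np

pytorch_out = np.loadtxt("pytorch_output.txt")   # output from the PyTorch run
rosenna_out = np.loadtxt("rosenna_output.txt")   # output from the Fortran run
assert np.allclose(pytorch_out, rosenna_out, rtol=1e-5), "outputs diverge"
print("outputs match")
```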

Compiling roseNNa

  1. Save the neural network model that needs to be converted

    Make sure to refer to the specific library's documentation for how to save the model; a PyTorch sketch is shown below.
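For instance, with PyTorch, saving the trained weights might look like the following minimal sketch (the tiny model and file name are stand-ins for your trained network):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for your trained network
torch.save(model.state_dict(), "model_weights.pt")  # save the learned weights
```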

  2. Convert the saved model to an ONNX format

    Details on converting a saved model to the ONNX format can be found on the ONNX website; a minimal PyTorch export is sketched below.
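As a reference point, a minimal PyTorch-to-ONNX export might look like this sketch (the model, file names, and shapes are illustrative assumptions, not part of roseNNa):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for your trained network
model.load_state_dict(torch.load("model_weights.pt"))
model.eval()

dummy = torch.randn(1, 4)  # example input that fixes the input shape
torch.onnx.export(model, dummy, "model.onnx",
                  export_params=True,   # store the trained weights in the file
                  input_names=["input"],
                  output_names=["output"])
```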

    Converting an LSTM?

    Note that ONNX sometimes enables optimizations that change how the weights are stored internally (this happens specifically for LSTMs). When converting from any library to ONNX, one should then export two files: one with optimizations and one without. This may not apply to every library-to-ONNX conversion, but here is an example using PyTorch: one export with do_constant_folding=True and another with do_constant_folding=False.

```python
# MODEL STRUCTURE FILE
torch.onnx.export(model,                        # model being run
                  (inp, hidden),                # model input (or a tuple for multiple inputs)
                  filePath + "lstm_gemm.onnx",  # where to save the model (can be a file or file-like object)
                  export_params=True,           # store the trained parameter weights inside the model file
                  opset_version=12,             # the ONNX version to export the model to
                  do_constant_folding=True,     # whether to execute constant folding for optimization
                  input_names=['input', 'hidden_state', 'cell_state'],  # the model's input names
                  output_names=['output'])      # the model's output names

# MODEL WEIGHTS FILE
torch.onnx.export(model,                        # model being run
                  (inp, hidden),                # model input (or a tuple for multiple inputs)
                  filePath + "lstm_gemm_weights.onnx",  # where to save the model
                  export_params=True,           # store the trained parameter weights inside the model file
                  opset_version=12,             # the ONNX version to export the model to
                  do_constant_folding=False,    # no constant folding for the weights file
                  input_names=['input', 'hidden_state', 'cell_state'],  # the model's input names
                  output_names=['output'])      # the model's output names
```

  3. Preprocess the model

fLibrary/ holds the library files that recreate and run inference on the model. To reconstruct the model, run

```bash
python modelParserONNX.py -f path/to/model/structure -w path/to/weights/file
```

  4. Compile the library

Then, in the same fLibrary/ directory, run make library. This compiles the library into libcorelib.a, which other *.o files are linked against. This library file is now ready to be integrated into any Fortran/C workflow.

Fortran use

One can compile a Fortran example (like the Hello RoseNNa example above) by specifying the location of the module files and linking the library to the other program files. In practice, this looks like

```shell
gfortran -c *.f90 -Ipath/to/objFiles
gfortran -o flibrary path/to/libcorelib.a *.o
./flibrary
```

C use

One can readily call roseNNa from C. Compile roseNNa, then use the following C program as an example:

```c
#include <stdio.h>

void use_model(double *i0, double *o0);
void initialize(char *model_file, char *weights_file);

int main(void) {
    double input[1][2] = {{1, 1}};  /* model inputs */
    double out[1][3];               /* model outputs */

    initialize("onnxModel.txt", "onnxWeights.txt");
    use_model((double *)input, (double *)out);

    for (int i = 0; i < 3; i++) {
        printf("%f ", out[0][i]);
    }

    return 0;
}
```

and compile it as

```shell
gcc -c *.c
gfortran -o capi path/to/libcorelib.a *.o
./capi
```

Further documentation

Please see this document on how to extend roseNNa to new network models and this document on the details of the roseNNa pipeline.

Citation

You can cite this work as

```bibtex
@article{bati24,
  author  = {Bati, A. and Bryngelson, S. H.},
  title   = {{RoseNNa: A} performant, portable library for neural network inference with application to computational fluid dynamics},
  journal = {Computer Physics Communications},
  volume  = {296},
  pages   = {109052},
  year    = {2024},
  doi     = {10.1016/j.cpc.2023.109052},
}
```

Owner

  • Name: Computational Physics @ GT CSE
  • Login: comp-physics
  • Kind: organization
  • Email: shb@gatech.edu

A computational physics research group with PI Spencer Bryngelson

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Bati
    given-names: Ajay
  - family-names: Bryngelson
    given-names: Spencer
    orcid: https://orcid.org/0000-0003-1750-7265
title: "roseNNa"
version: 0.1
doi: 10.5281/zenodo.7334627
date-released: 2022-11-18

GitHub Events

Total
  • Release event: 1
  • Watch event: 3
  • Push event: 5
  • Create event: 1
Last Year
  • Release event: 1
  • Watch event: 3
  • Push event: 5
  • Create event: 1

Dependencies

.github/workflows/CI.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v2 composite