nif

A library for dimensionality reduction on spatial-temporal PDE

https://github.com/pswpswpsw/nif

Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.2%) to scientific vocabulary

Keywords

deep-learning dimensionality-reduction pde reduced-order-modeling tensorflow
Last synced: 6 months ago

Repository

A library for dimensionality reduction on spatial-temporal PDE

Basic Info
Statistics
  • Stars: 65
  • Watchers: 3
  • Forks: 13
  • Open Issues: 1
  • Releases: 2
Topics
deep-learning dimensionality-reduction pde reduced-order-modeling tensorflow
Created about 4 years ago · Last pushed almost 2 years ago
Metadata Files
Readme License Citation

README.md

Neural Implicit Flow (NIF): mesh-agnostic dimensionality reduction

  • NIF is a mesh-agnostic dimensionality reduction paradigm for parametric spatial-temporal fields. For decades, dimensionality reduction (e.g., proper orthogonal decomposition, convolutional autoencoders) has been the very first step in reduced-order modeling of any large-scale spatial-temporal dynamics.

  • Unfortunately, these frameworks are either not extendable to realistic industrial scenarios, e.g., adaptive mesh refinement, or cannot precede nonlinear operations without resorting to lossy interpolation on a uniform grid. Details can be found in our paper.

  • NIF is built on top of Keras to minimize the user's effort in using the code and to make the most of existing Keras functionality.

Highlights

  • Built on top of TensorFlow 2.x with Keras model subclassing; hassle-free access to many up-to-date advanced concepts and features

    ```python
    from nif import NIF

    # set up the configurations, load the dataset, etc.
    model_ori = NIF(...)
    model_opt = model_ori.build()

    model_opt.compile(optimizer, loss='mse')
    model_opt.fit(...)

    model_opt.predict(...)
    ```

  • Distributed learning: data parallelism across multiple GPUs on a single node

    ```python
    import contextlib
    import tensorflow as tf

    enable_multi_gpu = True
    cm = tf.distribute.MirroredStrategy().scope() if enable_multi_gpu else contextlib.nullcontext()
    with cm:
        # ...
        model.fit(...)
        # ...
    ```

  • Flexible training schedule: e.g., start with a standard optimizer (e.g., Adam), then fine-tune with L-BFGS

    ```python
    from nif.optimizers import TFPLBFGS

    # load previous model
    new_model_ori = NIF(cfg_shape_net, cfg_parameter_net, mixed_policy)
    new_model = new_model_ori.model()
    new_model.load_weights(...)

    # prepare the dataset
    data_feature = ...
    data_label = ...

    # fine-tune with L-BFGS
    loss_fun = tf.keras.losses.MeanSquaredError()
    fine_tuner = TFPLBFGS(new_model, loss_fun, data_feature, data_label, display_epoch=10)
    fine_tuner.minimize(rounds=200, max_iter=1000)
    new_model.save_weights("./fine-tuned/ckpt")
    ```

  • Templates for many useful custom callbacks

```python
# setting up the model
# ...

# - tensorboard
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir="./tb-logs", update_freq='epoch')

# - printing, model save checkpoints, etc.
class LossAndErrorPrintingCallback(tf.keras.callbacks.Callback):
    ...  # custom printing / checkpointing logic

# - learning rate schedule
def scheduler(epoch, lr):
    if epoch < 1000:
        return lr
    else:
        return 1e-4
scheduler_callback = tf.keras.callbacks.LearningRateScheduler(scheduler)

# - collecting callbacks into model.fit(...)
callbacks = [tensorboard_callback, LossAndErrorPrintingCallback(), scheduler_callback]
model_opt.fit(train_dataset, epochs=n_epoch, batch_size=batch_size,
              shuffle=False, verbose=0, callbacks=callbacks)
```

  • Simple extraction of subnetworks

    ```python
    model_ori = NIF(...)
    # ...

    # extract latent space encoder network
    model_p_to_lr = model_ori.model_p_to_lr()
    lr_pred = model_p_to_lr.predict(...)

    # extract latent-to-weight network: from latent representation to weights and biases of shapenet
    model_lr_to_w = model_ori.model_lr_to_w()
    w_pred = model_lr_to_w.predict(...)

    # extract shapenet: inputs are weights and spatial coordinates, output is the field of interest
    model_x_to_u_given_w = model_ori.model_x_to_u_given_w()
    u_pred = model_x_to_u_given_w.predict(...)
    ```

  • Get input-output Jacobian or Hessian

    ```python
    model = ...  # your keras.Model
    x = ...      # your dataset

    # define both the indices of target and source
    x_index = [0, 1, 2, 3]
    y_index = [0, 1, 2, 3, 4]

    # wrap up keras.Model using JacobianLayer
    from nif.layers import JacobianLayer
    y_and_dydx_layer = JacobianLayer(model, y_index, x_index)
    y, dydx = y_and_dydx_layer(x)
    model_with_jacobian = Model([x], [y, dydx])

    # wrap up keras.Model using HessianLayer
    from nif.layers import HessianLayer
    y_and_dydx_and_dy2dx2_layer = HessianLayer(model, y_index, x_index)
    y, dydx, dy2dx2 = y_and_dydx_and_dy2dx2_layer(x)
    model_with_jacobian_and_hessian = Model([x], [y, dydx, dy2dx2])
    ```

  • Data normalization for multi-scale problems

    • simply feed n_para (number of parameters), n_x (input dimension of shapenet), n_target (output dimension of shapenet), and raw_data (a numpy array with shape = (number of pointwise data points, number of features), where the features are parameters, coordinates, targets, etc.)

    ```python
    from nif.data import PointWiseData
    data_n, mean, std = PointWiseData.minmax_normalize(raw_data=data, n_para=1, n_x=3, n_target=1)
    ```
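
    For illustration, here is a minimal sketch of the expected data layout; the random array and the column ordering [parameter, x, y, z, target] are assumptions made purely for demonstration, not a documented convention:

    ```python
    import numpy as np
    from nif.data import PointWiseData

    # hypothetical point-wise dataset: 10,000 points, 5 columns,
    # assumed ordering [parameter, x, y, z, target]
    # (n_para=1, n_x=3, n_target=1)
    data = np.random.rand(10000, 5)

    data_n, mean, std = PointWiseData.minmax_normalize(
        raw_data=data, n_para=1, n_x=3, n_target=1)
    print(data_n.shape)  # (10000, 5), normalized column-wise
    ```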

  • Large-scale training with tfrecord converter

    • all you need is to prepare one big npz file that contains all the point-wise data
    • .get_tfr_meta_dataset will read all files ending with .tfrecord under the searched directory

    ```python
    from nif.data.tfr_dataset import TFRDataset
    fh = TFRDataset(n_feature=4, n_target=3)

    # generating tfrecord files from a single big npz file (say gigabytes)
    fh.create_from_npz(...)

    # prepare some model
    model = ...
    model.compile(...)

    # obtaining a meta dataset
    meta_dataset = fh.get_tfr_meta_dataset(...)

    # start sub-dataset-batching
    for batch_file in meta_dataset:
        batch_dataset = fh.gen_dataset_from_batch_file(batch_file, batch_size)
        model.fit(...)
    ```

  • Save and load models (via checkpoints only)

    ```python
    import json

    # save the config
    model.save_config("config.json")

    # save the weights
    model.save_weights("./saved_weights/ckpt-{}/ckpt".format(epoch))

    # load the config
    with open("config.json", "r") as f:
        config = json.load(f)
    model_ori = nif.NIF(**config)
    model = model_ori.model()

    # load the weights
    model.load_weights("./saved_weights/ckpt-999/ckpt")
    ```

  • Network pruning and quantization (a minimal sketch of the idea follows below)
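
    NIF's tutorials cover this workflow (see tutorial 7 below); the snippet here is only a generic, hedged sketch using the tensorflow_model_optimization package from the requirements, applied to a small stand-in Sequential network rather than NIF's actual ParameterNet:

    ```python
    import tensorflow as tf
    import tensorflow_model_optimization as tfmot

    # stand-in for the network to compress (illustrative only)
    net = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation='swish', input_shape=(4,)),
        tf.keras.layers.Dense(64, activation='swish'),
        tf.keras.layers.Dense(3),
    ])

    # low-magnitude pruning: gradually zero out 80% of the weights
    schedule = tfmot.sparsity.keras.PolynomialDecay(
        initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=1000)
    pruned = tfmot.sparsity.keras.prune_low_magnitude(net, pruning_schedule=schedule)
    pruned.compile(optimizer='adam', loss='mse')
    # pruned.fit(..., callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])
    final_net = tfmot.sparsity.keras.strip_pruning(pruned)

    # post-training quantization via TFLite conversion
    converter = tf.lite.TFLiteConverter.from_keras_model(final_net)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    quantized_bytes = converter.convert()
    ```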

Google Colab Tutorial

  1. Hello world! A simple fitting on 1D travelling wave Open In Colab

    • learn how to use class nif.NIF
    • model checkpoints/restoration
    • mixed precision training
    • L-BFGS fine tuning
  2. Tackling multi-scale data Open In Colab

    • learn how to use class nif.NIFMultiScale
    • demonstrate the effectiveness of learning high-frequency data
  3. Learning linear representation Open In Colab

    • learn how to use class nif.NIFMultiScaleLastLayerParameterized
    • demonstrate on a (shortened) flow over a cylinder case from an AMR solver
  4. Getting input-output derivatives is super easy Open In Colab

    • learn how to use nif.layers.JacobianLayer, nif.layers.HessianLayer
  5. Scaling to hundreds of GB data Open In Colab

    • learn how to use nif.data.tfr_dataset.TFRDataset to create tfrecord from npz
    • learn how to perform sub-dataset-batch training with model.fit
  6. Revisit NIF on multi-scale data with regularization Open In Colab

    • learn how to use L1 or L2 regularization for the weights and biases in ParameterNet
    • a demonstration of how NIF-MultiScale fails at increased spatial interpolation resolution when dealing with high-frequency signals
      • this means you need to be cautious about increasing the spatial sampling resolution when dealing with high-frequency signals
    • learn that L2 or L1 regularization does not seem to help resolve the above issue
  7. NIF Compression Open In Colab

    • learn how to use low-magnitude pruning and quantization to compress ParameterNet
  8. Revisit NIF on multi-scale data: Sobolev training helps remove spurious signals Open In Colab

    • learn how to use nif.layers.JacobianLayer to perform Sobolev training (a sketch of the idea follows this list)
    • learn how to monitor different loss terms using customized Keras metrics
    • learn that feeding derivative information to the system helps resolve the super-resolution issue
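
As referenced in tutorial 8, here is a minimal, hedged sketch of the Sobolev-training idea: it reuses the JacobianLayer wrapping shown in the Highlights, but the input dimension, the index choices, and the 0.1 derivative-loss weight are illustrative assumptions, not the tutorial's exact code:

```python
import tensorflow as tf
from tensorflow.keras import Input, Model
from nif.layers import JacobianLayer

model = ...        # your keras.Model mapping (t, x) -> u
x_index = [1]      # hypothetical: differentiate w.r.t. the spatial coordinate
y_index = [0]      # hypothetical: the field u

inputs = Input(shape=(2,))
u, dudx = JacobianLayer(model, y_index, x_index)(inputs)
sobolev_model = Model(inputs, [u, dudx])

# Sobolev training: match both the field and its spatial derivative;
# the derivative term's 0.1 weight is an arbitrary choice here
sobolev_model.compile(optimizer='adam', loss=['mse', 'mse'],
                      loss_weights=[1.0, 0.1])
# sobolev_model.fit(x_train, [u_train, dudx_train], ...)
```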

Requirements

  • python
  • matplotlib
  • numpy
  • tensorflow-gpu
  • tensorflow_probability==0.18.0
  • tensorflow_model_optimization==0.7.3

Issues, bugs, requests, ideas

Use the issue tracker to report bugs.

How to cite

If you find NIF helpful, you can cite our JMLR paper in the following BibTeX format:

@article{JMLR:v24:22-0365,
  author  = {Shaowu Pan and Steven L. Brunton and J. Nathan Kutz},
  title   = {Neural Implicit Flow: a mesh-agnostic dimensionality reduction paradigm of spatio-temporal data},
  journal = {Journal of Machine Learning Research},
  year    = {2023},
  volume  = {24},
  number  = {41},
  pages   = {1--60},
  url     = {http://jmlr.org/papers/v24/22-0365.html}
}

Contributors

License

LGPL-2.1 License

Owner

  • Name: Shaowu Pan
  • Login: pswpswpsw
  • Kind: user
  • Location: Troy, NY
  • Company: Rensselaer Polytechnic Institute

Assistant Professor of Aerospace Engineering

GitHub Events

Total
  • Issues event: 2
  • Watch event: 8
  • Pull request event: 1
  • Fork event: 2
Last Year
  • Issues event: 2
  • Watch event: 8
  • Pull request event: 1
  • Fork event: 2

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 3
  • Total pull requests: 4
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 35 minutes
  • Total issue authors: 3
  • Total pull request authors: 2
  • Average comments per issue: 2.33
  • Average comments per pull request: 0.5
  • Merged pull requests: 3
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 1
  • Pull requests: 1
  • Average time to close issues: 35 minutes
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • moyulyy0117 (1)
  • Joao-L-S-Almeida (1)
  • uribarri (1)
Pull Request Authors
  • yafshar (3)
  • washcycle (1)
Top Labels
Issue Labels
question (1)
Pull Request Labels
bug (1)

Dependencies

requirements.txt pypi
  • matplotlib *
  • numpy *
  • tensorflow_model_optimization *
  • tensorflow_probability *
.github/workflows/release.yaml actions
  • actions/checkout v1 composite
  • actions/setup-python v1 composite
  • pypa/gh-action-pypi-publish master composite
.github/workflows/run-tests.yml actions
  • actions/checkout v1 composite
  • actions/setup-python v1 composite
docs/requirements.txt pypi
  • matplotlib *
  • numpy *
  • tensorflow_model_optimization ==0.7.3
  • tensorflow_probability ==0.18.0