koopmanlab
A library for Koopman Neural Operator with Pytorch.
Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to sciencedirect.com
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (17.3%) to scientific vocabulary
Keywords
Repository
Basic Info
Statistics
- Stars: 302
- Watchers: 7
- Forks: 24
- Open Issues: 9
- Releases: 3
Topics
Metadata Files
README.md
KoopmanLab is a package for Koopman Neural Operator with Pytorch.
For more information, please refer to the following papers, where we provide detailed mathematical derivations, computational designs, and code explanations.
- "Koopman neural operator as a mesh-free solver of non-linear partial differential equations." Journal of Computational Physics (2024). See also the arXiv preprint arXiv:2301.10022 (2023).
- "KoopmanLab: machine learning for solving complex physics equations." APL Machine Learning (2023).
Installation
KoopmanLab requires the following dependencies to be installed:
- PyTorch >= 1.10
- NumPy >= 1.23.2
- Matplotlib >= 3.3.2
You can install KoopmanLab package via the following approaches:
- Install the stable version with pip:

``` shell
$ pip install koopmanlab
```

- Install the current version from source with pip:

``` shell
$ git clone https://github.com/Koopman-Laboratory/KoopmanLab.git
$ cd KoopmanLab
$ pip install -e .
```

Quick Start
If you install KoopmanLab successfully, you can use our model directly by:
``` python
import koopmanlab as kp

encoder = kp.models.encoder_mlp(t_in, operator_size)
decoder = kp.models.decoder_mlp(t_in, operator_size)

KNO1d_model = kp.models.KNO1d(encoder, decoder, operator_size, modes_x = 16, decompose = 6)
# Input size [batch, x, t_in], output size [batch, x, t_in] for one iteration

KNO2d_model = kp.models.KNO2d(encoder, decoder, operator_size, modes_x = 10, modes_y = 10, decompose = 6)
# Input size [batch, x, y, t_in], output size [batch, x, y, t_in] for one iteration
```

If you do not want to customize the algorithms for training, testing, and plotting, we highly recommend using our basic APIs to build a Koopman model.
Usage
You can read demo_ns.py to learn the basic APIs and workflow of KoopmanLab. If you want to run demo_ns.py, the following data needs to be prepared on your computing resource.
- Dataset
If you want to generate Navier-Stokes Equation data by yourself, the data generation configuration file can be found in the following link.
Our package provides an easy way to create a Koopman neural operator model.

``` python
import koopmanlab as kp

MLP_KNO_2D = kp.model.koopman(backbone = "KNO2d", autoencoder = "MLP", device = device)
MLP_KNO_2D = kp.model.koopman(backbone = "KNO2d", autoencoder = "MLP", o = o, m = m, r = r, t_in = 10, device = device)
MLP_KNO_2D.compile()
```

Parameter definitions:
- o: the dimension of the learned Koopman operator
- m: the number of frequency modes below the frequency truncation threshold
- r: the power of the Koopman operator
- t_in: the duration length of the input data
- device: whether CPU or GPU is used for computation

``` python
ViT_KNO = kp.model.koopman_vit(decoder = "MLP", resolution=(64, 64), patch_size=(2, 2), in_chans=1, out_chans=1, head_num=16, embed_dim=768, depth = 16, parallel = True, high_freq = True, device=device)
ViT_KNO.compile()
```

Parameter definitions:
- depth: the depth of each head
- head_num: the number of heads
- resolution: the spatial resolution of the input data
- patch_size: the size of each patch (i.e., token)
- in_chans: the number of target variables in the data set
- out_chans: the number of variables predicted by ViT-KNO, usually the same as in_chans
- embed_dim: the embedding dimension
- parallel: whether data parallelism is applied
- high_freq: whether high-frequency information complement is applied
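The parameters above reflect the core idea behind KNO: observations are encoded into an o-dimensional latent space, the dynamics are advanced there by repeatedly applying a learned linear (Koopman) operator r times, and the result is decoded back. A minimal pure-Python sketch of that principle, with toy identity encoder/decoder and a hand-picked 2×2 operator (all names and numbers here are illustrative, not the koopmanlab API):

``` python
def matvec(K, v):
    """Multiply a small matrix (list of rows) by a vector."""
    return [sum(k * x for k, x in zip(row, v)) for row in K]

def predict(state, K, r, encode, decode):
    """Advance `state` by applying the latent operator K `r` times."""
    z = encode(state)
    for _ in range(r):
        z = matvec(K, z)
    return decode(z)

# Identity encoder/decoder and a diagonal operator that halves each mode.
K = [[0.5, 0.0], [0.0, 0.5]]
out = predict([2.0, 4.0], K, r=2, encode=lambda s: s, decode=lambda z: z)
print(out)  # [0.5, 1.0]
```

In the real model the encoder/decoder are learned MLPs and the operator acts on truncated frequency modes; the sketch only shows the encode / iterate / decode loop that `r` controls.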
Once the model is compiled, an optimizer setting is required to run your own experiments. If you want a more customized optimizer and scheduler, you can create them with any PyTorch method and assign them to the Koopman neural operator object, e.g. `MLP_KNO_2D.optimizer` and `MLP_KNO_2D.scheduler`.
``` python
MLP_KNO_2D.opt_init("Adam", lr = 0.005, step_size = 100, gamma = 0.5)
```
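The `step_size` and `gamma` arguments follow PyTorch's `StepLR` convention: the learning rate is multiplied by `gamma` every `step_size` epochs. A small framework-free illustration of the resulting schedule (the helper name is hypothetical):

``` python
def step_lr(lr0, gamma, step_size, epoch):
    """Learning rate after `epoch` epochs under a StepLR-style schedule."""
    return lr0 * gamma ** (epoch // step_size)

# With lr = 0.005, gamma = 0.5, step_size = 100 (the values shown above):
print(step_lr(0.005, 0.5, 100, 0))    # 0.005
print(step_lr(0.005, 0.5, 100, 100))  # 0.0025
print(step_lr(0.005, 0.5, 100, 250))  # 0.00125
```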
If you use the Burgers equation and Navier-Stokes equation data or the shallow water data provided by PDEBench, there are three specific data interfaces you can consider.
``` python
train_loader, test_loader = kp.data.burgers(path, batch_size = 64, sub = 32)
train_loader, test_loader = kp.data.shallow_water(path, batch_size = 5, T_in = 10, T_out = 40, sub = 1)
train_loader, test_loader = kp.data.navier_stokes(path, batch_size = 10, T_in = 10, T_out = 40, type = "1e-3", sub = 1)
```

Parameter definitions:
- path: the file path of the downloaded data set
- T_in: the duration length of the input data
- T_out: the duration length to predict
- type: the viscosity coefficient of the Navier-Stokes equation data set
- sub: the down-sampling scaling factor. For instance, sub=2 acting on 2-dimensional data with spatial resolution 64×64 creates a down-sampled space of 32×32; the same factor acting on 1-dimensional data with spatial resolution 1×64 yields a down-sampled space of 1×32.
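The effect of `sub` is plain stride-based slicing along the spatial axes. A pure-Python sketch of that down-sampling (the helper is illustrative, not the package's internal implementation):

``` python
def downsample_2d(grid, sub):
    """Keep every `sub`-th point along both spatial axes."""
    return [row[::sub] for row in grid[::sub]]

grid = [[0.0] * 64 for _ in range(64)]   # toy 64 x 64 field
small = downsample_2d(grid, sub=2)
print(len(small), len(small[0]))         # 32 32
```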
We recommend processing your data with the PyTorch method `torch.utils.data.DataLoader`. In the KNO model, the shape of 2D input data is `[batchsize, x, y, t_len]` and the shape of the output data and labels is `[batchsize, x, y, T]`, where t_len is defined in `kp.model.koopman` and T is defined in the train module. In the Koopman-ViT model, the shape of 2D input data is `[batchsize, in_chans, x, y]` and the shape of the output data and labels is `[batchsize, out_chans, x, y]`.
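One way to produce tensors with those shapes is to slice a long trajectory into (input, label) windows of length t_len and T before handing them to `torch.utils.data.DataLoader`. A framework-free sketch of that windowing on a 1-D trajectory (names are illustrative, not koopmanlab's data pipeline):

``` python
def make_windows(series, t_len, T):
    """Split a 1-D trajectory into (input, label) pairs:
    inputs of length t_len followed by labels of length T."""
    pairs = []
    step = t_len + T
    for start in range(len(series) - step + 1):
        pairs.append((series[start:start + t_len],
                      series[start + t_len:start + step]))
    return pairs

traj = list(range(100))                    # toy trajectory of 100 snapshots
pairs = make_windows(traj, t_len=10, T=40)
print(len(pairs))                          # 51
print(len(pairs[0][0]), len(pairs[0][1]))  # 10 40
```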
KoopmanLab provides two training and two testing methods for the compact KNO sub-family. If your scenario is single-step prediction, consider using the `train_single` method or `train` with `T_out = 1`. Our package provides a method to save and visualize your prediction results in `test`.
``` python
MLP_KNO_2D.train_single(epochs=ep, trainloader = train_loader, evalloader = eval_loader)
MLP_KNO_2D.train(epochs=ep, trainloader = train_loader, evalloader = eval_loader, T_out = T)
MLP_KNO_2D.test_single(test_loader)
MLP_KNO_2D.test(test_loader, T_out = T, path = "./fig/ns_time_error_1e-4/", is_save = True, is_plot = True)
```
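The difference between the single-step and multi-step variants is whether the model's own output is fed back as the next input. A toy rollout loop, pure Python, illustrating the T_out iteration (the real methods operate on batched tensors and also accumulate losses):

``` python
def rollout(model, state, T_out):
    """Iterate a one-step model T_out times, collecting each prediction."""
    preds = []
    for _ in range(T_out):
        state = model(state)   # feed the prediction back in
        preds.append(state)
    return preds

# Toy one-step "model": advance the state by a fixed increment.
preds = rollout(lambda x: x + 1.0, 0.0, T_out=4)
print(preds)  # [1.0, 2.0, 3.0, 4.0]
```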
As for the ViT-KNO sub-family, the `train` and `test` methods are set up for a single-step prediction scenario, while the `train_multi` and `test_multi` methods provide multi-step iterative prediction, where the model iterates T_out times in training and testing.
``` python
ViT_KNO.train_single(epochs=ep, trainloader = train_loader, evalloader = eval_loader)
ViT_KNO.test_single(test_loader)
ViT_KNO.train_multi(epochs=ep, trainloader = train_loader, evalloader = eval_loader, T_out = T_out)
ViT_KNO.test_multi(test_loader)
```

Parameter definitions:
- epochs: the number of training epochs
- trainloader: the training dataloader, a return value of torch.utils.data.DataLoader
- evalloader: the evaluation dataloader, a return value of torch.utils.data.DataLoader
- test_loader: the testing dataloader, a return value of torch.utils.data.DataLoader
- T_out: the duration length to predict

Once your model has been trained, you can use the saving module provided in KoopmanLab to save it. The saved variable has three attributes: `koopman` is the model class variable (i.e., the saved `kno_model` variable), `model` is the trained model variable (i.e., the saved `kno_model.kernel` variable), and `model_params` is the parameter dictionary of the trained model (i.e., the saved `kno_model.kernel.state_dict()` variable).

``` python
MLP_KNO_2D.save(save_path)
```

Parameter definitions:
- save_path: the file path for saving the result
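The three saved attributes mirror a common PyTorch checkpointing pattern: bundle the wrapper object, the trained kernel, and its `state_dict()` into one dictionary before serializing. A framework-free sketch of that layout using `pickle` in place of `torch.save` (the dictionary contents are stand-ins, shown for orientation only):

``` python
import io
import pickle

# Stand-ins for the wrapper, the trained kernel, and its parameters.
checkpoint = {
    "koopman": {"backbone": "KNO2d"},    # model class variable
    "model": {"layer": "kernel"},        # trained model variable
    "model_params": {"w": [0.1, 0.2]},   # state_dict()-like parameters
}

buf = io.BytesIO()
pickle.dump(checkpoint, buf)             # in place of torch.save(...)
buf.seek(0)
restored = pickle.load(buf)
print(sorted(restored))  # ['koopman', 'model', 'model_params']
```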
Citation
If you use the KoopmanLab package for academic research, you are encouraged to cite the following papers:

``` bibtex
@article{xiong2024koopman,
  title={Koopman neural operator as a mesh-free solver of non-linear partial differential equations},
  author={Xiong, Wei and Huang, Xiaomeng and Zhang, Ziyang and Deng, Ruixuan and Sun, Pei and Tian, Yang},
  journal={Journal of Computational Physics},
  pages={113194},
  year={2024},
  publisher={Elsevier}
}

@article{xiong2023koopmanlab,
  title={Koopmanlab: machine learning for solving complex physics equations},
  author={Xiong, Wei and Ma, Muyuan and Huang, Xiaomeng and Zhang, Ziyang and Sun, Pei and Tian, Yang},
  journal={APL Machine Learning},
  volume={1},
  number={3},
  year={2023},
  publisher={AIP Publishing}
}
```
Acknowledgement
The authors thank Abby, a talented artist, for designing the logo of KoopmanLab.
License
Citation (CITATION.cff)
cff-version: 1.0.0
message: "If you use KoopmanLab package for academic research, you are encouraged to cite as below."
authors:
  - family-names: "Xiong"
    given-names: "Wei"
    orcid: "https://orcid.org/0000-0002-0099-6050"
  - family-names: "Tian"
    given-names: "Yang"
    orcid: "https://orcid.org/0000-0003-1970-0413"
title: "KoopmanLab: A PyTorch module of Koopman neural operator family for solving partial differential equations"
version: 1.0.1
doi: 10.48550/arXiv.2301.01104
date-released: 2023-01-03
url: "https://github.com/Koopman-Laboratory/KoopmanLab"
preferred-citation:
  type: article
  authors:
    - family-names: "Xiong"
      given-names: "Wei"
      orcid: "https://orcid.org/0000-0002-0099-6050"
    - family-names: "Ma"
      given-names: "Muyuan"
    - family-names: "Sun"
      given-names: "Pei"
    - family-names: "Tian"
      given-names: "Yang"
      orcid: "https://orcid.org/0000-0003-1970-0413"
  doi: "10.48550/arXiv.2301.01104"
  journal: "arXiv preprint arXiv:2301.01104"
  month: 1
  title: "KoopmanLab: A PyTorch module of Koopman neural operator family for solving partial differential equations"
  year: 2023
GitHub Events
Total
- Issues event: 1
- Watch event: 74
- Issue comment event: 1
- Pull request event: 2
- Fork event: 10
Last Year
- Issues event: 1
- Watch event: 74
- Issue comment event: 1
- Pull request event: 2
- Fork event: 10
Committers
Last synced: almost 3 years ago
All Time
- Total Commits: 125
- Total Committers: 2
- Avg Commits per committer: 62.5
- Development Distribution Score (DDS): 0.056
Top Committers
| Name | Email | Commits |
|---|---|---|
| Xiong Wei | 1****U@u****m | 118 |
| Yang Tyan (Yang Tian) | 3****g@u****m | 7 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 9
- Total pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: about 1 hour
- Total issue authors: 9
- Total pull request authors: 1
- Average comments per issue: 0.22
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 3
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: about 1 hour
- Issue authors: 3
- Pull request authors: 1
- Average comments per issue: 0.33
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- metezhang (1)
- Dionysus7777777 (1)
- qiyang77 (1)
- OWENDUNG (1)
- yuguanfeng (1)
- Williamlliw (1)
- linanzhang (1)
- AlderMen (1)
- Youtsing (1)
Pull Request Authors
- ZhangKex1n (2)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 585 last-month (pypi)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 4
- Total maintainers: 1
pypi.org: koopmanlab
A library for Koopman Neural Operator with Pytorch
- Homepage: https://github.com/Koopman-Laboratory/KoopmanLab
- Documentation: https://koopmanlab.readthedocs.io/
- License: GNU General Public License v3 (GPLv3)
- Latest release: 1.0.3 (published about 3 years ago)
Rankings
Maintainers (1)
Dependencies
- einops ==0.5.0
- h5py ==3.7.0
- matplotlib >=3.3.2
- numpy >=1.14.5
- scipy ==1.7.3
- timm ==0.6.11
- torch >=1.10
- torchvision >=0.13.1