NeuroDiffEq

NeuroDiffEq: A Python package for solving differential equations with neural networks - Published in JOSS (2020)

https://github.com/NeuroDiffGym/neurodiffeq

Science Score: 59.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: joss.theoj.org
  • Committers with academic emails
    6 of 22 committers (27.3%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.3%) to scientific vocabulary

Keywords

artificial-intelligence boundary-value-problem deep-learning differential-equations initial-value-problem mathematical-modelling neural-networks ode odes pde-solver physics-informed-neural-networks pinn pypi pytorch scientific-computing time-series

Keywords from Contributors

graph-generation mesh

Scientific Fields

Mathematics, Computer Science - 32% confidence
Last synced: 6 months ago

Repository

A library for solving differential equations using neural networks based on PyTorch, used by multiple research groups around the world, including at Harvard IACS.

Basic Info
Statistics
  • Stars: 758
  • Watchers: 23
  • Forks: 97
  • Open Issues: 34
  • Releases: 20
Topics
artificial-intelligence boundary-value-problem deep-learning differential-equations initial-value-problem mathematical-modelling neural-networks ode odes pde-solver physics-informed-neural-networks pinn pypi pytorch scientific-computing time-series
Created almost 7 years ago · Last pushed 7 months ago
Metadata Files
Readme Contributing License

README.md

neurodiffeq


Citation

A second NeuroDiffEq paper has been published. Please make sure to cite both if you have been using features that became available after the 2020 version.

```
@article{chen2020neurodiffeq,
  title={NeuroDiffEq: A Python package for solving differential equations with neural networks},
  author={Chen, Feiyu and Sondak, David and Protopapas, Pavlos and Mattheakis, Marios and Liu, Shuheng and Agarwal, Devansh and Di Giovanni, Marco},
  journal={Journal of Open Source Software},
  volume={5},
  number={46},
  pages={1931},
  year={2020}
}

@article{liu2025recent,
  title={Recent Advances of NeuroDiffEq -- An Open-Source Library for Physics-Informed Neural Networks},
  author={Liu, Shuheng and Protopapas, Pavlos and Sondak, David and Chen, Feiyu},
  journal={arXiv preprint arXiv:2502.12177},
  year={2025}
}
```


🔥🔥🔥 Did you know that neurodiffeq supports solution bundles and can be used to solve inverse problems? See here!

:mortarboard: Already familiar with neurodiffeq? :point_down: Jump to FAQs.


Introduction

neurodiffeq is a package for solving differential equations with neural networks. Differential equations are equations that relate some function to its derivatives. They emerge in various scientific and engineering domains. Traditionally these problems can be solved by numerical methods (e.g. finite difference, finite element). While these methods are effective and adequate, their expressibility is limited by their function representation. It would be interesting if we could compute solutions to differential equations that are continuous and differentiable.

As universal function approximators, artificial neural networks have been shown to have the potential to solve ordinary differential equations (ODEs) and partial differential equations (PDEs) with certain initial/boundary conditions. The aim of neurodiffeq is to implement these existing techniques of using ANNs to solve differential equations in a way that allows the software to be flexible enough to work on a wide range of user-defined problems.

Installation

Using pip

Like most standard libraries, neurodiffeq is hosted on PyPI. To install the latest stable release,

```bash
pip install -U neurodiffeq  # '-U' means update to latest version
```

Manually

Alternatively, you can install the library manually to get early access to our new features. This is the recommended way for developers who want to contribute to the library.

```bash
git clone https://github.com/NeuroDiffGym/neurodiffeq.git
cd neurodiffeq && pip install -r requirements.txt
pip install .   # To make changes to the library, use `pip install -e .`
pytest tests/   # Run tests. Optional.
```

Getting Started

We are happy to help you with any questions. In the meantime, you can check out the FAQs.

To view complete tutorials and documentation of neurodiffeq, please check Official Documentation.

In addition to the documentation, we have recently made a quick walkthrough Demo Video with slides.

Example Usages

Imports

```python
from neurodiffeq import diff
from neurodiffeq.solvers import Solver1D, Solver2D
from neurodiffeq.conditions import IVP, DirichletBVP2D
from neurodiffeq.networks import FCNN, SinActv
```

ODE System Example

Here we solve a non-linear system of two ODEs, known as the Lotka–Volterra equations. There are two unknown functions (u and v) and a single independent variable (t).

```python
def ode_system(u, v, t):
    return [diff(u, t) - (u - u*v), diff(v, t) - (u*v - v)]

conditions = [IVP(t_0=0.0, u_0=1.5), IVP(t_0=0.0, u_0=1.0)]
nets = [FCNN(actv=SinActv), FCNN(actv=SinActv)]

solver = Solver1D(ode_system, conditions, t_min=0.1, t_max=12.0, nets=nets)
solver.fit(max_epochs=3000)
solution = solver.get_solution()
```

solution is a callable object; you can pass in numpy arrays or torch tensors:

```python
u, v = solution(t, to_numpy=True)  # t can be np.ndarray or torch.Tensor
```

Plotting u and v against their analytical solutions yields something like:

[figure: Lotka–Volterra solutions plotted against their analytical counterparts]

PDE System Example

Here we solve a Laplace equation with Dirichlet boundary conditions on a rectangle. Note that we choose the Laplace equation for the simplicity of computing its analytical solution. In practice, you can attempt any nonlinear, chaotic PDE, provided you tune the solver well enough.

Solving a 2-D PDE system is quite similar to solving ODEs, except there are two variables x and y for boundary value problems or x and t for initial boundary value problems, both of which are supported.

```python
# requires `import torch` and `import numpy as np` in addition to the imports above
def pde_system(u, x, y):
    return [diff(u, x, order=2) + diff(u, y, order=2)]

conditions = [
    DirichletBVP2D(
        x_min=0, x_min_val=lambda y: torch.sin(np.pi * y),
        x_max=1, x_max_val=lambda y: 0,
        y_min=0, y_min_val=lambda x: 0,
        y_max=1, y_max_val=lambda x: 0,
    )
]
nets = [FCNN(n_input_units=2, n_output_units=1, hidden_units=(512,))]

solver = Solver2D(pde_system, conditions, xy_min=(0, 0), xy_max=(1, 1), nets=nets)
solver.fit(max_epochs=2000)
solution = solver.get_solution()
```

The signature of solution for a 2D PDE is slightly different from that of an ODE. Again, it takes in either numpy arrays or torch tensors.

```python
u = solution(x, y, to_numpy=True)
```

Evaluating u on [0,1] × [0,1] yields the following plots:

| ANN-Based Solution | Residual of PDE |
| :----------------: | :-------------: |
|  laplace-solution  |  laplace-error  |

Using a Monitor

A monitor is a tool for visualizing PDE/ODE solutions as well as the history of loss and custom metrics during training. Jupyter Notebook users need to run the %matplotlib notebook magic. For Jupyter Lab users, try %matplotlib widget.

```python
from neurodiffeq.monitors import Monitor1D
...
monitor = Monitor1D(t_min=0.0, t_max=12.0, check_every=100)
solver.fit(..., callbacks=[monitor.to_callback()])
```

You should see the plots update every 100 epochs as well as on the last epoch, showing two plots: one for solution visualization on the interval [0,12] and the other for loss history (training and validation).

[figure: monitor animation of solution and loss history during training]

Custom Networks

For convenience, we have implemented an FCNN (fully-connected neural network) whose hidden units and activation functions can be customized.

```python
from neurodiffeq.networks import FCNN

# Default: n_input_units=1, n_output_units=1, hidden_units=[32, 32], activation=torch.nn.Tanh
net1 = FCNN(n_input_units=..., n_output_units=..., hidden_units=[..., ..., ...], activation=...)
...
nets = [net1, net2, ...]
```

FCNN is usually a good starting point. For advanced users, solvers are compatible with any custom torch.nn.Module. The only constraints are (a minimal sketch follows the list):

  1. The module takes in a tensor of shape (None, n_coords) and outputs a tensor of shape (None, 1).

  2. There must be a total of n_funcs modules in nets to be passed to solver = Solver(..., nets=nets).
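Below is a minimal sketch of a custom module satisfying both constraints; the class name MyNet and the layer sizes are illustrative choices, not part of the library:

```python
import torch.nn as nn

class MyNet(nn.Module):
    """Maps a batch of coordinates of shape (None, n_coords) to outputs of shape (None, 1)."""
    def __init__(self, n_coords=1, n_hidden=32):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_coords, n_hidden),
            nn.Tanh(),  # avoid ReLU; see the rules of thumb in the FAQ below
            nn.Linear(n_hidden, 1),
        )

    def forward(self, coords):
        return self.layers(coords)

# one module per unknown function, e.g. two for the Lotka–Volterra system above
nets = [MyNet(n_coords=1), MyNet(n_coords=1)]
```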


Actually, neurodiffeq has a single_net feature that doesn't obey the above rules, which won't be covered here.

Read the PyTorch tutorial on building your own network (a.k.a module) architecture.

Transfer Learning

Transfer learning is easily done by serializing old_solver.nets (a list of torch modules) to disk and then loading them and passing to a new solver:

```python
old_solver.fit(max_epochs=...)

# ... dump old_solver.nets to disk

# ... load the networks from disk, store them in some loaded_nets variable

new_solver = Solver(..., nets=loaded_nets)
new_solver.fit(max_epochs=...)
```

We are currently working on wrapper functions to save/load networks and other internal variables of Solvers. In the meantime, you can read the PyTorch tutorial on saving and loading your networks.
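Until those wrappers are available, here is a minimal sketch of the save/load step using standard PyTorch state dicts (the file names net_0.pt, net_1.pt are placeholders of our choosing, and the architectures must match between saving and loading):

```python
import torch

# save the weights of each trained network
for i, net in enumerate(old_solver.nets):
    torch.save(net.state_dict(), f"net_{i}.pt")

# later: rebuild networks with the same architecture, then load the weights
loaded_nets = [FCNN(actv=SinActv), FCNN(actv=SinActv)]
for i, net in enumerate(loaded_nets):
    net.load_state_dict(torch.load(f"net_{i}.pt"))

new_solver = Solver1D(..., nets=loaded_nets)
```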

Sampling Strategies

In neurodiffeq, the networks are trained by minimizing the loss (ODE/PDE residuals) evaluated on a set of points in the domain. The points are randomly resampled every time. To control the number, distribution, and bounding domain of sampled points, you can specify your own training/validation generators.

```python
from neurodiffeq.generators import Generator1D

# Default: t_min=0.0, t_max=1.0, method='uniform', noise_std=None
g1 = Generator1D(size=..., t_min=..., t_max=..., method=..., noise_std=...)
g2 = Generator1D(size=..., t_min=..., t_max=..., method=..., noise_std=...)

solver = Solver1D(..., train_generator=g1, valid_generator=g2)
```

Here are some sample distributions of a Generator1D.

| Generator1D(8192, 0.0, 1.0, method='uniform') | Generator1D(8192, -1.0, 0.0, method='log-spaced-noisy', noise_std=1e-3) |
| :-------------------------------------------: | :---------------------------------------------------------------------: |
|              generator1d-uniform              |                       generator1d-log-spaced-noisy                      |

Note that when both train_generator and valid_generator are specified, t_min and t_max can be omitted in Solver1D(...). In fact, even if you pass t_min, t_max, train_generator, and valid_generator together, t_min and t_max will still be ignored.

Combining Generators

Another nice feature of the generators is that you can concatenate them, for example

```python
g1 = Generator2D((16, 16), xy_min=(0, 0), xy_max=(1, 1))
g2 = Generator2D((16, 16), xy_min=(1, 1), xy_max=(2, 2))
g = g1 + g2
```

Here, g will be a generator that outputs the combined samples of g1 and g2.

|       g1      |       g2      |       g1 + g2      |
| :-----------: | :-----------: | :----------------: |
| generator2d-1 | generator2d-2 | generator2d-concat |

Sampling Higher Dimensions

You can use Generator2D, Generator3D, etc. for sampling points in higher dimensions. But there is also another way:

```python
g1 = Generator1D(1024, 2.0, 3.0, method='uniform')
g2 = Generator1D(1024, 0.1, 1.0, method='log-spaced-noisy', noise_std=0.001)
g = g1 * g2
```

Here, g will be a generator which yields 1024 points in the 2-D rectangle (2,3) × (0.1,1) every time. Their x-coordinates are drawn from (2,3) using the uniform strategy, and their y-coordinates from (0.1,1) using the log-spaced-noisy strategy.

|       g1      |       g2      |       g1 * g2      |
| :-----------: | :-----------: | :----------------: |
| generator2d-1 | generator2d-2 | generator2d-concat |

Solution Bundle and Inverse Problems

Sometimes, it is interesting to solve a bundle of equations at once. For example, you may want to solve differential equations of the form du/dt + λu = 0 under the initial condition u(0) = U0. You may want to solve this for all λ and U0 at once, by treating them as inputs to the neural networks.

One such application is chemical reactions, where the reaction rate is unknown. Different reaction rates correspond to different solutions, and only one solution matches observed data points. You may be interested in first solving for a bundle of solutions and then determining the reaction rates (a.k.a. equation parameters) that best match the observations. This second step is known as the inverse problem.

Here's an example of how to do this using neurodiffeq:

  1. Let's say we have an equation du/dt + λu = 0 and initial condition u(0) = U0, where λ and U0 are unknown constants. We also have a set of observations t_obs and u_obs. We first import BundleSolver1D and BundleIVP, which are necessary for obtaining a solution bundle:

```python
from neurodiffeq.conditions import BundleIVP
from neurodiffeq.solvers import BundleSolver1D

import matplotlib.pyplot as plt
import numpy as np
import torch
from neurodiffeq import diff
```

  2. We determine the domain of the input t, as well as the domains of the parameters λ and U0. We also need to decide the order of the parameters: which should be first and which second. For the purpose of this demo, we choose λ to be the first parameter (index 0) and U0 to be the second (index 1). It is very important to keep track of the indices of the parameters.

```python
T_MIN, T_MAX = 0, 1
LAMBDA_MIN, LAMBDA_MAX = 3, 5  # first parameter, index = 0
U0_MIN, U0_MAX = 0.2, 0.6      # second parameter, index = 1
```

  3. We then define the conditions and solver as usual, except that we use BundleIVP and BundleSolver1D instead of IVP and Solver1D. Their interfaces are very similar to those of IVP and Solver1D. You can find out more in the API reference.

```python
# equation parameters come after inputs (usually temporal and spatial coordinates)
diff_eq = lambda u, t, lmd: [diff(u, t) + lmd * u]

# The keyword argument must be named "u_0" in BundleIVP.
# If you use anything else, e.g. `y0` or `u0`, it won't work.
conditions = [
    BundleIVP(t_0=0, u_0=None, bundle_param_lookup={'u_0': 1})  # u_0 has index 1
]

solver = BundleSolver1D(
    ode_system=diff_eq,
    conditions=conditions,
    t_min=T_MIN, t_max=T_MAX,
    theta_min=[LAMBDA_MIN, U0_MIN],  # λ has index 0; u_0 has index 1
    theta_max=[LAMBDA_MAX, U0_MAX],  # λ has index 0; u_0 has index 1
    eq_param_index=(0,),             # λ is the only equation parameter, which has index 0
    n_batches_valid=1,
)
```

Since λ is a parameter in the equation and U0 is a parameter in the initial condition, we must include λ in diff_eq and U0 in the condition. If a parameter is present in both the equation and the condition, it must be included in both places. All elements of conditions passed to BundleSolver1D must be Bundle* conditions, even if they don't have parameters; see the snippet below.
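For example (our reading of the rule above, not an excerpt from the docs), a condition with no unknown parameters would still be wrapped in BundleIVP rather than IVP:

```python
# a fixed initial condition u(0) = 1.0, still expressed as a Bundle* condition
conditions = [BundleIVP(t_0=0, u_0=1.0)]
```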

  4. Now, we can train the solver and obtain the solution as we normally would.

```python
solver.fit(max_epochs=1000)
solution = solver.get_solution(best=True)
```

The solution expects three inputs: t, λ, and U0. All inputs must have the same shape. For example, if you are interested in fixing λ=4 and U0=0.4 and plotting the solution u against t ∈ [0,1], you can do the following:

```python
t = np.linspace(0, 1)
lmd = 4 * np.ones_like(t)
u0 = 0.4 * np.ones_like(t)

u = solution(t, lmd, u0, to_numpy=True)

import matplotlib.pyplot as plt
plt.plot(t, u)
```

  5. Once you have a bundled solution, you can find the set of parameters (λ, U0) that matches the observed data points (t_i, u_i) most closely. This is achieved using simple gradient descent. In the following toy example, we assume there are only three data points: u(0.2) = 0.273, u(0.5) = 0.129, and u(0.8) = 0.0609. The following is a classical PyTorch workflow.

```python
# observed data points
t_obs = torch.tensor([0.2, 0.5, 0.8]).reshape(-1, 1)
u_obs = torch.tensor([0.273, 0.129, 0.0609]).reshape(-1, 1)

# random initialization of λ and U0; keep track of their gradients
lmd_tensor = torch.rand(1) * (LAMBDA_MAX - LAMBDA_MIN) + LAMBDA_MIN
u0_tensor = torch.rand(1) * (U0_MAX - U0_MIN) + U0_MIN

adam = torch.optim.Adam([lmd_tensor.requires_grad_(True), u0_tensor.requires_grad_(True)], lr=1e-2)

# run gradient descent for 10000 epochs
for _ in range(10000):
    output = solution(t_obs, lmd_tensor * torch.ones_like(t_obs), u0_tensor * torch.ones_like(t_obs))
    loss = ((output - u_obs) ** 2).mean()
    loss.backward()
    adam.step()
    adam.zero_grad()

print(f"λ = {lmd_tensor.item()}, U0 = {u0_tensor.item()}, loss = {loss.item()}")
```

FAQ

Q: How to use GPU for training?

Simple. When you import neurodiffeq, the library automatically detects whether CUDA is available on your machine. Since the library is based on PyTorch, it will set the default tensor type to torch.cuda.DoubleTensor if a compatible GPU device is found.
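For illustration, the detection roughly amounts to the following plain PyTorch logic (a sketch of the mechanism, not the library's exact code):

```python
import torch

if torch.cuda.is_available():
    # tensors created from here on default to double precision on the GPU
    torch.set_default_tensor_type(torch.cuda.DoubleTensor)

print(torch.ones(3).device)  # cuda:0 if a compatible GPU was found, otherwise cpu
```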

Q: How to use pretrained nets?

Refer to Sections Custom Networks and Transfer Learning.

Q: How to change the learning rate?

The standard PyTorch way.

  1. Build your networks as explained in Custom Networks: nets = [FCNN(), FCNN(), ...]

  2. Instantiate a custom optimizer and pass all parameters of these networks to it

```python
parameters = [p for net in nets for p in net.parameters()]  # list of parameters of all networks
MY_LEARNING_RATE = 5e-3
optimizer = torch.optim.Adam(parameters, lr=MY_LEARNING_RATE, ...)
```

  3. Pass BOTH your nets and your optimizer to the solver: solver = Solver1D(..., nets=nets, optimizer=optimizer)

Q: I got a bad solution.

Unlike traditional numerical methods (FEM, FVM, etc.), NN-based solutions require some hyperparameter tuning. The library offers the utmost flexibility to try any combination of hyperparameters (a combined sketch follows the list below).

  • To use a different network architecture, you can pass in your custom torch.nn.Modules.
  • To use a different optimizer, you can pass in your own optimizer to solver = Solver(..., optimizer=my_optim).
  • To use a different sampling distribution, you can use built-in generators or write your own generators from scratch.
  • To use a different sampling size, you can tweak the generators or change solver = Solver(..., n_batches_train).
  • To dynamically change hyperparameters during training, checkout our callbacks feature.
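As a combined sketch, assuming ode_system and conditions are defined as in the ODE example above (the specific sizes and learning rate here are arbitrary choices, not recommendations):

```python
import torch
from neurodiffeq.networks import FCNN, SinActv
from neurodiffeq.generators import Generator1D
from neurodiffeq.solvers import Solver1D

# custom architectures, one network per unknown function
nets = [FCNN(hidden_units=(64, 64), actv=SinActv), FCNN(hidden_units=(64, 64), actv=SinActv)]

# custom optimizer over the parameters of all networks
optimizer = torch.optim.Adam([p for net in nets for p in net.parameters()], lr=1e-3)

solver = Solver1D(
    ode_system, conditions, nets=nets, optimizer=optimizer,
    # custom sampling distribution and size
    train_generator=Generator1D(size=64, t_min=0.0, t_max=12.0, method='uniform'),
    valid_generator=Generator1D(size=128, t_min=0.0, t_max=12.0, method='uniform'),
    n_batches_train=2,  # number of batches sampled per epoch
)
```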

Q: Any rules of thumb?

  • Don't use ReLU for activation, because its second-order derivative is identically 0.
  • Re-scale your PDE/ODE into dimensionless form, preferably making everything range in [0,1]. Working with a domain like [0,1000000] is prone to failure because (a) PyTorch initializes module weights to be relatively small and (b) most activation functions (like Sigmoid, Tanh, Swish) are most nonlinear near 0.
  • If your PDE/ODE is too complicated, consider trying curriculum learning: start training your networks on a smaller domain, and then gradually expand until the whole domain is covered (see the sketch below).
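For instance, here is a minimal curriculum-learning sketch for the Lotka–Volterra example above. It reuses the same nets list across stages so the weights carry over; the stage boundaries 3, 6, 12 are arbitrary choices:

```python
from neurodiffeq.networks import FCNN, SinActv
from neurodiffeq.solvers import Solver1D

nets = [FCNN(actv=SinActv), FCNN(actv=SinActv)]

# train on progressively larger domains; each stage starts from the previous weights
for t_max in (3.0, 6.0, 12.0):
    solver = Solver1D(ode_system, conditions, t_min=0.0, t_max=t_max, nets=nets)
    solver.fit(max_epochs=1000)

solution = solver.get_solution()
```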

Contributing

Everyone is welcome to contribute to this project.

When contributing to this repository, please follow this process:

  1. Open an issue to discuss the change you are planning to make.
  2. Go through Contribution Guidelines.
  3. Make the change on a forked repository and update the README.md if changes are made to the interface.
  4. Open a pull request.

Owner

  • Name: NeuroDiffGym
  • Login: NeuroDiffGym
  • Kind: organization

NeuroDiffGym, Harvard IACS

GitHub Events

Total
  • Release event: 1
  • Watch event: 67
  • Member event: 1
  • Push event: 7
  • Pull request event: 2
  • Fork event: 10
  • Create event: 3
Last Year
  • Release event: 1
  • Watch event: 67
  • Member event: 1
  • Push event: 7
  • Pull request event: 2
  • Fork event: 10
  • Create event: 3

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 1,112
  • Total Committers: 22
  • Avg Commits per committer: 50.545
  • Development Distribution Score (DDS): 0.423
Past Year
  • Commits: 7
  • Committers: 1
  • Avg Commits per committer: 7.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Shuheng Liu w****4@i****m 642
Feiyu f****n@g****u 301
Shivas Jayaram s****j@g****m 61
Joy Parikh f****6@p****n 27
David Sondak d****k@s****u 15
Ashley l****4@n****u 12
Augusto Chantada a****a@h****m 11
Sathvik Bhagavan s****n@g****m 9
Sakzsee s****v@g****m 7
dependabot[bot] 4****] 6
Elaine m****a@g****u 5
ashleyWithMarvin a****i@m****m 4
Devansh Agarwal d****v@g****m 3
Elaine e****a@E****l 1
João Esteves j****7@i****k 1
Lakshay Chawla 5****3 1
Marco m****i@p****t 1
Ravin Kumar 1****n 1
The Codacy Badger b****r@c****m 1
jdelpiano j****o@g****m 1
matinmoezzi m****i@o****m 1
pasquale c 3****k@o****t 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 51
  • Total pull requests: 72
  • Average time to close issues: 5 months
  • Average time to close pull requests: 25 days
  • Total issue authors: 28
  • Total pull request authors: 16
  • Average comments per issue: 3.18
  • Average comments per pull request: 0.63
  • Merged pull requests: 48
  • Bot issues: 0
  • Bot pull requests: 15
Past Year
  • Issues: 1
  • Pull requests: 1
  • Average time to close issues: N/A
  • Average time to close pull requests: less than a minute
  • Issue authors: 1
  • Pull request authors: 1
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • shuheng-liu (12)
  • troyrock (3)
  • ma-sadeghi (3)
  • sathvikbhagavan (2)
  • udemirezen (2)
  • feiyu-chen96 (2)
  • Arup-nit (2)
  • HuynhTran0301 (2)
  • ptflores1 (2)
  • katayooneshkofti (2)
  • devanshkv (2)
  • LeonShe (1)
  • pabloacera (1)
  • dreivmeister (1)
  • Zymrael (1)
Pull Request Authors
  • shuheng-liu (24)
  • dependabot[bot] (15)
  • sathvikbhagavan (11)
  • Lakshay-13 (8)
  • mecunha (3)
  • AshleighBi (3)
  • at-chantada (3)
  • Rockdeldiablo (2)
  • mr-ravin (2)
  • P-Kshitij (2)
  • jdelpiano (2)
  • samuelperezdi (2)
  • feiyu-chen96 (1)
  • shivasj (1)
  • devanshkv (1)
Top Labels
Issue Labels
enhancement (13) bug (4) code quality (4) question (2) usage tips (2) good first issue (1) docs (1)
Pull Request Labels
dependencies (15) enhancement (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 293 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 1
  • Total versions: 20
  • Total maintainers: 1
pypi.org: neurodiffeq

A light-weight & flexible library for solving differential equations using neural networks based on PyTorch.

  • Versions: 20
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 293 Last month
Rankings
Stargazers count: 2.7%
Forks count: 5.3%
Dependent packages count: 10.1%
Average: 10.1%
Downloads: 10.7%
Dependent repos count: 21.6%
Maintainers (1)
Last synced: 6 months ago

Dependencies

Pipfile pypi
  • dill *
  • jupyter *
  • matplotlib *
  • numpy *
  • packaging *
  • pandas *
  • pytest *
  • requests *
  • scipy *
  • seaborn *
  • tensorboard *
  • torch *
  • wheel *
Pipfile.lock pypi
  • 109 dependencies
docs/requirements.txt pypi
  • doc2dash *
  • ipython *
  • mock *
  • nbconvert *
  • nbsphinx *
  • pandoc *
  • recommonmark *
  • sphinx *
  • sphinx_rtd_theme *
requirements.txt pypi
  • dill *
  • matplotlib *
  • numpy *
  • packaging *
  • pandas *
  • pytest *
  • requests *
  • scipy *
  • seaborn *
  • tensorboard *
  • torch *
  • tqdm *
  • wheel *
docker/Dockerfile docker
  • python 3.8-slim-buster build
setup.py pypi
.github/workflows/CI.yml actions
  • actions/checkout v4 composite
  • actions/setup-python v4 composite
  • codecov/codecov-action v3 composite