Φ-ML
Φ-ML: Intuitive Scientific Computing with Dimension Types for Jax, PyTorch, TensorFlow & NumPy - Published in JOSS (2024)
Science Score: 98.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file (not found)
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 9 DOI reference(s) in README and JOSS metadata
- ✓ Academic publication links: links to joss.theoj.org
- ✓ Committers with academic emails: 2 of 6 committers (33.3%) from academic institutions
- ✓ Institutional organization owner: organization tum-pbs has institutional domain (ge.in.tum.de)
- ✓ JOSS paper metadata: published in Journal of Open Source Software
Repository
Intuitive scientific computing with dimension types for Jax, PyTorch, TensorFlow & NumPy
Basic Info
- Host: GitHub
- Owner: tum-pbs
- License: MIT
- Language: Python
- Default Branch: main
- Homepage: https://tum-pbs.github.io/PhiML/
- Size: 28.6 MB
Statistics
- Stars: 90
- Watchers: 5
- Forks: 12
- Open Issues: 4
- Releases: 28
Metadata Files
README.md

ΦML
ΦML is a math and neural network library designed for science applications. It enables you to quickly evaluate many network architectures on your data sets, perform linear and non-linear optimization, and write differentiable simulations. ΦML is compatible with Jax, PyTorch, TensorFlow and NumPy and your code can be executed on all of these backends.
📖 Documentation • 🔗 API • ▶ Videos • Introduction • Examples
Installation
Installation with pip on Python 3.6 and later:
```bash
$ pip install phiml
```
Install PyTorch, TensorFlow or Jax to enable machine learning capabilities and GPU execution.
For optimal GPU performance, you may compile the custom CUDA operators, see the detailed installation instructions.
You can verify your installation by running
```bash
$ python3 -c "import phiml; phiml.verify()"
```
This will check for compatible PyTorch, Jax and TensorFlow installations as well.
Why should I use ΦML?
Unique features
- Preconditioned (sparse) linear solves: ΦML can build sparse matrices from your Python functions and run linear solvers with preconditioners.
- n-dimensional operations: With ΦML, you can write code that automatically works in 1D, 2D and 3D, choosing the appropriate operations based on the input dimensions (see the sketch after this list).
- Flexible neural network architectures: ΦML provides various configurable neural network architectures, from MLPs to U-Nets.
- Non-uniform tensors: ΦML allows you to stack tensors of different sizes and keeps track of the resulting shapes.
Compatibility
- Writing code that works with PyTorch, Jax, and TensorFlow makes it easier to share code with other people and collaborate.
- Your published research code will reach a broader audience.
- When you run into a bug / roadblock with one library, you can simply switch to another.
- ΦML can efficiently convert tensors between ML libraries on-the-fly, so you can even mix the different ecosystems (sketched below).
Fewer mistakes
- No more data type troubles: ΦML automatically converts data types where needed and lets you specify the FP precision globally or by context (see the sketch after this list)!
- No more reshaping troubles: ΦML performs reshaping under-the-hood.
- Is `neighbor_idx.at[jnp.reshape(idx, (-1,))].set(jnp.reshape(cell_idx, (-1,) + cell_idx.shape[-2:]))` correct? ΦML provides a custom Tensor class that lets you write easy-to-read, more concise, more explicit, less error-prone code.
What parts of my code are library-agnostic?
With ΦML, you can write a full neural network training script that can run with Jax, PyTorch and TensorFlow. In particular, ΦML provides abstractions for the following functionality:
- Neural network creation and optimization
- Math functions and tensor operations
- Sparse tensors / matrices
- Just-in-time (JIT) compilation
- Computing gradients of functions via automatic differentiation (sketched below)
However, ΦML does not currently abstract the following use cases:
- Custom or non-standard network architectures or optimizers require backend-specific code.
- ΦML abstracts compute devices but does not currently allow mapping operations onto multiple GPUs.
- ΦML has no data loading module. However, it can convert data, once loaded, to any other backend.
- Some less-used math functions have not been wrapped yet. If you come across one you need, feel free to open an issue.
- Higher-order derivatives are not supported for all backends.
ΦML's Tensor class
Many of ΦML's functions can be called on native tensors, i.e. Jax/PyTorch/TensorFlow tensors and NumPy arrays. In these cases, the function maps to the corresponding one from the matching backend.
However, we have noticed that code written this way is often hard to read, verbose, and error-prone. One main reason is that dimensions are typically referred to by index, and the meaning of that dimension might not be obvious (for examples, see here, here or here).
ΦML includes a Tensor class designed to remedy these shortcomings.
A ΦML Tensor wraps one of the native tensors, such as ndarray, torch.Tensor or tf.Tensor, but extends them by two features:
- Names: All dimensions are named. Referring to a specific dimension can be done as `tensor.<dimension name>`. Elements along dimensions can also be named.
- Types: Every dimension is assigned a type flag, such as channel, batch or spatial.
For a full explanation of why these changes make your code not only easier to read but also shorter, see here. Here's the gist:
- With dimension names, the dimension order becomes irrelevant and you don't need to worry about it.
- Missing dimensions are automatically added when and where needed.
- Tensors are automatically transposed to match.
- Slicing by name is a lot more readable, e.g. `image.channels['red']` vs `image[:, :, :, 0]`.
- Functions will automatically use the right dimensions, e.g. convolutions and FFTs act on spatial dimensions by default.
- You can have arbitrarily many batch dimensions (or none) and your code will work the same.
- The number of spatial dimensions determines the dimensionality of not only your data but also your code. Your 2D code also runs in 3D!
Examples
The following examples are taken from the examples notebook, where you can also find examples of automatic differentiation, JIT compilation, and more.
You can change the `math.use(...)` statements to any of the supported ML libraries.
Training an MLP
The following script trains an MLP with three hidden layers to learn a noisy 1D sine function in the range [-2, 2].
```python
from phiml import math, nn

math.use('torch')

net = nn.mlp(1, 1, layers=[128, 128, 128], activation='ReLU')
optimizer = nn.adam(net, learning_rate=1e-3)

data_x = math.random_uniform(math.batch(batch=128), low=-2, high=2)
data_y = math.sin(data_x) + math.random_normal(math.batch(batch=128)) * .2

def loss_function(x, y):
    return math.l2_loss(y - math.native_call(net, x))

for i in range(100):
    loss = nn.update_weights(net, optimizer, loss_function, data_x, data_y)
    print(loss)
```
We didn't even have to import `torch` in this example, since all calls were routed through ΦML.
Solving a sparse linear system with preconditioners
ΦML supports solving dense as well as sparse linear systems and can build an explicit matrix representation from linear Python functions in order to compute preconditioners.
We recommend using ΦML's tensors, but you can pass native tensors to `solve_linear()` as well.
The following example solves the 1D Poisson problem Δx = b with b = 1, using CG with an incomplete LU preconditioner.
```python
from phiml import math
import numpy as np

def laplace_1d(x):
    return math.pad(x[1:], (0, 1)) + math.pad(x[:-1], (1, 0)) - 2 * x

b = np.ones((6,))
solve = math.Solve('scipy-CG', rel_tol=1e-5, x0=0 * b, preconditioner='ilu')
sol = math.solve_linear(math.jit_compile_linear(laplace_1d), b, solve)
```
Decorating the linear function with `math.jit_compile_linear` lets ΦML compute the sparse matrix inside `solve_linear()`. In this example, the matrix is a tridiagonal band matrix.
Note that if you JIT-compile the `math.solve_linear()` call, the sparsity pattern and incomplete LU preconditioner are computed at JIT time.
The L and U matrices then enter the computational graph as constants and are not recomputed every time the function is called.
Contributions
Contributions are welcome!
If you find a bug, feel free to open a GitHub issue or get in touch with the developers. If you have changes to be merged, check out our style guide before opening a pull request.
📄 Citation
Please use the following citation:

```
@article{Holl2024,
    doi = {10.21105/joss.06171},
    url = {https://doi.org/10.21105/joss.06171},
    year = {2024},
    publisher = {The Open Journal},
    volume = {9},
    number = {95},
    pages = {6171},
    author = {Philipp Holl and Nils Thuerey},
    title = {Φ-ML: Intuitive Scientific Computing with Dimension Types for Jax, PyTorch, TensorFlow & NumPy},
    journal = {Journal of Open Source Software}
}
```
Also see the corresponding journal article and software archive of version 1.4.0.
Projects using ΦML
ΦML is used by the simulation framework ΦFlow to integrate differentiable simulations with machine learning.
Owner
- Name: TUM Physics-based Simulation
- Login: tum-pbs
- Kind: organization
- Location: Munich!
- Website: https://ge.in.tum.de
- Repositories: 16
- Profile: https://github.com/tum-pbs
JOSS Publication
Φ-ML: Intuitive Scientific Computing with Dimension Types for Jax, PyTorch, TensorFlow & NumPy
Authors: Philipp Holl, Nils Thuerey
Tags: Machine Learning, Jax, TensorFlow, PyTorch, NumPy, Differentiable simulations, Sparse linear systems, Preconditioners
GitHub Events
Total
- Create event: 8
- Issues event: 3
- Release event: 7
- Watch event: 22
- Issue comment event: 2
- Push event: 229
- Pull request event: 2
- Fork event: 4
Last Year
- Create event: 8
- Issues event: 3
- Release event: 7
- Watch event: 22
- Issue comment event: 2
- Push event: 229
- Pull request event: 2
- Fork event: 4
Committers
Last synced: 5 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Philipp Holl | p****l@t****e | 649 |
| Philipp Holl | p****l@e****m | 266 |
| eliasdjo | e****u@g****m | 2 |
| Felix Köhler | 2****n | 1 |
| Dxyk | d****7@g****m | 1 |
| Elias Djossou | e****u@t****e | 1 |
Issues and Pull Requests
Last synced: 4 months ago
All Time
- Total issues: 5
- Total pull requests: 8
- Average time to close issues: 4 months
- Average time to close pull requests: about 1 month
- Total issue authors: 4
- Total pull request authors: 6
- Average comments per issue: 0.8
- Average comments per pull request: 0.88
- Merged pull requests: 7
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 3
- Pull requests: 2
- Average time to close issues: N/A
- Average time to close pull requests: 3 months
- Issue authors: 2
- Pull request authors: 1
- Average comments per issue: 0.67
- Average comments per pull request: 0.0
- Merged pull requests: 2
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- sambaPython24 (2)
- mcleantom (1)
- dgleich (1)
- kmario23 (1)
Pull Request Authors
- holl- (3)
- andreinechaev (2)
- Ceyron (2)
- Dxyk (2)
- eliasdjo (2)
Packages
- Total packages: 1
- Total downloads: 941 last month (PyPI)
- Total dependent packages: 1
- Total dependent repositories: 1
- Total versions: 38
- Total maintainers: 1
pypi.org: phiml
Unified API for machine learning
- Homepage: https://github.com/tum-pbs/PhiML
- Documentation: https://phiml.readthedocs.io/
- License: MIT
- Latest release: 1.14.1 (published 5 months ago)
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite
- JamesIves/github-pages-deploy-action 4.1.4 composite
- actions/checkout v2.3.1 composite
- actions/setup-python v2 composite
- numpy *
- packaging *
- scipy >=1.5.4
