https://github.com/arunoruto/lumafit
A Numba-accelerated Levenberg-Marquardt fitting library
Science Score: 26.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.9%) to scientific vocabulary
Keywords
Repository
A Numba-accelerated Levenberg-Marquardt fitting library
Basic Info
- Host: GitHub
- Owner: arunoruto
- License: MIT
- Language: Python
- Default Branch: main
- Homepage: https://arunoruto.github.io/lumafit/
- Size: 272 KB
Statistics
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 5
Topics
Metadata Files
README.md
lumafit
A Numba-accelerated Levenberg-Marquardt fitting library for Python.
Optimized for pixel-wise fitting on 3D image data.
Report Bug · Request Feature
About The Project
This library provides a high-performance implementation of the Levenberg-Marquardt (LM) algorithm for non-linear least squares fitting, accelerated using Numba.
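For reference, each LM iteration solves the damped normal equations below for the parameter step δ. This is the standard Marquardt formulation; the exact damping and update schedule used by lumafit may differ in detail.

```math
\left(J^\top W J + \lambda\,\operatorname{diag}(J^\top W J)\right)\delta = J^\top W\,\bigl(y - f(p)\bigr)
```

If the step lowers the (weighted) chi-squared, it is accepted and λ is decreased; otherwise λ is increased and the step is retried.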
The primary motivation for lumafit is to efficiently perform fitting tasks on large multi-dimensional datasets, such as fitting a curve along the third dimension for every pixel in a 3D image stack. Numba's Just-In-Time (JIT) compilation and parallel processing capabilities (numba.prange) are leveraged to drastically reduce computation time compared to pure Python implementations.
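Concretely, pixel-wise fitting amounts to the standard numba.prange pattern: every (row, col) curve is an independent fit, so rows parallelize cleanly. The sketch below only illustrates that layout; it is not lumafit's actual internals, and `fit_one_curve` is a hypothetical placeholder.

```python
import numpy as np
from numba import njit, prange

@njit
def fit_one_curve(t, y, p0):
    # Placeholder for the per-pixel LM fit; returns the initial guess unchanged.
    return p0.copy()

@njit(parallel=True)
def fit_all_pixels(t, cube, p0):
    """Fit every (row, col) curve of a 3D stack independently, rows in parallel."""
    rows, cols, _ = cube.shape
    out = np.empty((rows, cols, p0.shape[0]), dtype=np.float64)
    for r in prange(rows):  # prange distributes row iterations across threads
        for c in range(cols):
            out[r, c, :] = fit_one_curve(t, cube[r, c, :], p0)
    return out
```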
Key features:
- Core Levenberg-Marquardt algorithm (`levenberg_marquardt_core`).
- Specialized function for fitting curves pixel-wise on 3D data (`levenberg_marquardt_pixelwise`) with parallel execution.
- Support for weighted least squares.
- Numerical Jacobian calculation via finite differences (Numba-accelerated; a minimal sketch of the idea follows this list).
- Implementation based on standard LM algorithm formulations.
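The finite-difference Jacobian follows the usual forward-difference recipe. Below is a minimal sketch of that idea under stated assumptions (parameter-scaled step size, column convention J[:, j] = ∂f/∂p_j); it is not the library's actual routine.

```python
import numpy as np
from numba import jit

@jit(nopython=True, cache=True)
def numerical_jacobian(model, t, p, dp=1e-8):
    """Forward differences: J[i, j] ~ (f(t_i, p + h*e_j) - f(t_i, p)) / h."""
    y0 = model(t, p)
    jac = np.empty((t.shape[0], p.shape[0]), dtype=np.float64)
    for j in range(p.shape[0]):
        h = dp * max(1.0, abs(p[j]))  # scale the step with the parameter magnitude
        p_step = p.copy()
        p_step[j] += h
        jac[:, j] = (model(t, p_step) - y0) / h
    return jac
```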
Built With
Getting Started
To get a local copy of lumafit up and running, follow these simple steps.
Prerequisites
You need Python 3.9+ installed. Using a virtual environment is recommended.
You will also need standard Python build tools, typically included with pip.
```sh
python -m venv venv
source venv/bin/activate  # On Linux/macOS
venv\Scripts\activate     # On Windows
```
Installation
- Clone the repository:
  ```sh
  git clone https://github.com/arunoruto/lumafit.git
  cd lumafit
  ```
- Install the package using pip, which will use the `pyproject.toml` file:
  ```sh
  pip install .
  ```
  If you want to install the dependencies required for running tests, use the `[test]` extra:
  ```sh
  pip install .[test]
  ```
Usage
The library provides two main functions: levenberg_marquardt_core for fitting a single curve and levenberg_marquardt_pixelwise for fitting a 3D data stack.
First, you need to define your model function. Your model function must be compatible with Numba's nopython=True mode. This means it should primarily use NumPy functions and basic Python constructs supported by Numba.
```python
# Example model function (exponential decay + another exponential decay)
import numpy as np
from numba import jit

@jit(nopython=True, cache=True)
def my_model(t, p):
    """My example non-linear model.

    Args:
        t (np.ndarray): Independent variable (1D array).
        p (np.ndarray): Parameters (1D array), e.g., [A1, tau1, A2, tau2].

    Returns:
        np.ndarray: Model output evaluated at t.
    """
    # Add checks for potential zero divisors if parameters are in denominators
    term1 = np.zeros_like(t, dtype=np.float64)
    if np.abs(p[1]) > 1e-12:  # Avoid division by zero or near-zero
        term1 = p[0] * np.exp(-t / p[1])
    term2 = np.zeros_like(t, dtype=np.float64)
    if np.abs(p[3]) > 1e-12:
        term2 = p[2] * np.exp(-t / p[3])
    return term1 + term2

# Or the polarization model:
@jit(nopython=True, cache=True)
def polarization_model(t, p):
    # ... (your polarization model code from tests/lmba.py)
    t_rad = t * np.pi / 180.0
    p3_rad = p[3] * np.pi / 180.0
    sin_t_rad = np.sin(t_rad)
    term1_base = sin_t_rad.copy()
    # Replace near-zero bases with a small epsilon so negative powers stay finite
    mask_problematic_base = (np.abs(term1_base) < 1e-15) & (p[1] < 0.0)
    term1_base[mask_problematic_base] = 1e-15
    term1 = np.power(term1_base, p[1])
    term2 = np.power(np.cos(t_rad / 2.0), p[2])
    term3 = np.sin(t_rad - p3_rad)
    return p[0] * term1 * term2 * term3
```
Fitting a single curve
If you have a single 1D array of data (y_data) corresponding to independent variable values (t_data), you can use levenberg_marquardt_core. This function is the core engine for minimizing the difference between your model and the data.
```python
import numpy as np
from lmba import levenberg_marquardt_core

# Assume my_model is defined and JIT-compiled above

# Generate some synthetic data (replace with your actual data)
t_data = np.linspace(0.1, 25, 100, dtype=np.float64)
p_true = np.array([5.0, 2.0, 2.0, 10.0], dtype=np.float64)
y_clean = my_model(t_data, p_true)
noise = np.random.default_rng(42).normal(0, 0.1, size=t_data.shape).astype(np.float64)
y_data = (y_clean + noise).astype(np.float64)

# Initial guess for parameters
p_initial = np.array([4.0, 1.5, 1.5, 8.0], dtype=np.float64)

# Optional: weights (e.g., inverse variance if noise std is known)
weights = 1.0 / (0.1**2 + np.finfo(float).eps)  # Assuming noise_std = 0.1

# Run the fit
p_fit, cov, chi2, iters, conv = levenberg_marquardt_core(
    my_model,         # Your Numba-compiled model function
    t_data,           # Independent variable data (1D array)
    y_data,           # Dependent variable data (1D array)
    p_initial,        # Initial guess (1D array)
    weights=weights,  # Optional weights (1D array or None)
    max_iter=1000,    # Max iterations
    tol_g=1e-7,       # Gradient tolerance
    tol_p=1e-7,       # Parameter change tolerance
    tol_c=1e-7,       # Chi-squared change tolerance
    # ... other optional parameters
)

print(f"Fit converged: {conv}")
print(f"Iterations: {iters}")
print(f"Final Chi-squared: {chi2}")
print(f"Fitted parameters: {p_fit}")
print(f"Covariance matrix: {cov}")  # Covariance can be large
```
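Since the test suite validates against scipy.optimize.least_squares, you can run the same sanity check on your own fits. A short sketch, reusing the arrays from the example above (the residual signature is chosen to match SciPy's API, not anything lumafit requires):

```python
from scipy.optimize import least_squares

def residuals(p, t, y):
    # SciPy minimizes the sum of squared residuals of this function
    return my_model(t, p) - y

res = least_squares(residuals, p_initial, args=(t_data, y_data), method="lm")
print(f"SciPy parameters: {res.x}")  # should closely agree with p_fit
```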
Fitting pixel-wise on 3D data
For a 3D NumPy array where each (row, col) location has a curve along the third dimension (data_cube[row, col, :]), use levenberg_marquardt_pixelwise. This function parallelizes the fitting process across the row and col dimensions using numba.prange.
```python
import numpy as np
from lmba import levenberg_marquardt_pixelwise

# Assume my_model is defined and JIT-compiled above

# Generate some synthetic 3D data (replace with your actual data)
rows, cols, depth = 100, 100, 50  # Example dimensions
t_data = np.linspace(0.1, 25, depth, dtype=np.float64)
data_cube = np.empty((rows, cols, depth), dtype=np.float64)

p_true_base = np.array([5.0, 2.0, 2.0, 10.0], dtype=np.float64)
rng = np.random.default_rng(42)

for r_idx in range(rows):
    for c_idx in range(cols):
        # Vary true params slightly per pixel
        p_pixel_true = p_true_base * (1 + rng.uniform(-0.05, 0.05, size=p_true_base.shape))
        y_clean_pixel = my_model(t_data, p_pixel_true)
        noise_pixel = rng.normal(0, 0.1, size=depth).astype(np.float64)
        data_cube[r_idx, c_idx, :] = (y_clean_pixel + noise_pixel).astype(np.float64)

# Global initial guess for all pixels
p0_global = np.array([4.0, 1.5, 1.5, 8.0], dtype=np.float64)

# Optional weights (applied to each pixel identically)
weights_1d = 1.0 / (0.1**2 + np.finfo(float).eps)  # Assuming noise_std = 0.1

# Run the pixel-wise fit (this is parallelized)
p_results, cov_results, chi2_results, n_iter_results, conv_results = levenberg_marquardt_pixelwise(
    my_model,    # Your Numba-compiled model function
    t_data,      # Independent variable (1D array, common for all pixels)
    data_cube,   # 3D data array (rows x cols x depth)
    p0_global,   # Global initial guess (1D array)
    # Optional parameters for the core LM algorithm, passed to each pixel fit
    # weights_1d=weights_1d,
    max_iter=500,
    tol_g=1e-6,
    tol_p=1e-6,
    tol_c=1e-6,
    # ... other optional parameters
)

print("Pixel-wise fitting finished.")
print(f"Shape of fitted parameters: {p_results.shape}")    # (rows x cols x n_params)
print(f"Shape of convergence flags: {conv_results.shape}")  # (rows x cols)
print(f"Percentage converged: {np.sum(conv_results) / (rows * cols) * 100.0:.2f}%")
```
Tests
The library includes a test suite using pytest to verify the correctness of the core LM algorithm and the pixel-wise function against known solutions (for noiseless data) and against scipy.optimize.least_squares (for noisy data).
To run the tests:
- Ensure you have installed the test dependencies: `pip install .[test]`
- Navigate to the project root directory in your terminal.
- Run pytest:
  ```sh
  pytest
  ```
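A minimal test in the same spirit as the suite (an illustrative sketch, not a file from the repository; it assumes the `my_model` function from the Usage section is importable):

```python
import numpy as np
from lmba import levenberg_marquardt_core

def test_noiseless_fit_recovers_true_parameters():
    t = np.linspace(0.1, 25, 100, dtype=np.float64)
    p_true = np.array([5.0, 2.0, 2.0, 10.0], dtype=np.float64)
    y = my_model(t, p_true)  # noiseless data: the fit should recover p_true
    p0 = np.array([4.0, 1.5, 1.5, 8.0], dtype=np.float64)
    p_fit, _, _, _, conv = levenberg_marquardt_core(my_model, t, y, p0)
    assert conv
    np.testing.assert_allclose(p_fit, p_true, rtol=1e-4)
```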
Roadmap
- [ ] Add support for analytical Jacobian functions instead of only finite differences (a sketch of one appears after this list).
- [ ] Implement parameter bounds.
- [ ] Investigate alternative damping strategies (e.g., Nielsen's method).
- [ ] Improve robustness for ill-conditioned problems.
- [ ] Add more detailed documentation and examples.
- [ ] Potentially publish on PyPI.
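For the double-exponential `my_model` above, the analytical Jacobian the first roadmap item refers to would simply stack the four partial derivatives. A sketch, assuming the column convention J[:, j] = ∂f/∂p_j (the eventual lumafit interface may differ):

```python
import numpy as np
from numba import jit

@jit(nopython=True, cache=True)
def my_model_jacobian(t, p):
    """Partials of A1*exp(-t/tau1) + A2*exp(-t/tau2) w.r.t. [A1, tau1, A2, tau2]."""
    jac = np.empty((t.shape[0], 4), dtype=np.float64)
    e1 = np.exp(-t / p[1])
    e2 = np.exp(-t / p[3])
    jac[:, 0] = e1                          # d f / d A1
    jac[:, 1] = p[0] * t / p[1] ** 2 * e1   # d f / d tau1
    jac[:, 2] = e2                          # d f / d A2
    jac[:, 3] = p[2] * t / p[3] ** 2 * e2   # d f / d tau2
    return jac
```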
Contributing
Contributions are welcome! If you have suggestions or find bugs, please open an issue or submit a pull request.
- Fork the Project
- Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
- Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the Branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
License
Distributed under the MIT License. See LICENSE for more information.
Contact
Mirza Arnaut - mirza.arnaut@tu-dortmund.de
Project Link: https://github.com/arunoruto/lumafit
Acknowledgments
- The original Levenberg-Marquardt algorithm (see references in `lmba/__init__.py`).
- Numba for providing the acceleration capabilities.
- NumPy and SciPy for fundamental numerical computing tools.
- pytest for the testing framework.
- othneildrew/Best-README-Template for the README structure template.
Owner
- Name: Mirza Arnaut
- Login: arunoruto
- Kind: user
- Location: Dortmund
- Company: Image Analysis Group TU Dortmund
- Repositories: 1
- Profile: https://github.com/arunoruto
GitHub Events
Total
- Release event: 5
- Push event: 35
- Create event: 5
Last Year
- Release event: 5
- Push event: 35
- Create event: 5
Packages
- Total packages: 1
- Total downloads: 29 last month (PyPI)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 5
- Total maintainers: 1
pypi.org: lumafit
A Numba-accelerated Levenberg-Marquardt fitting library
- Documentation: https://lumafit.readthedocs.io/
- License: MIT
- Latest release: 0.2.3 (published 9 months ago)
Rankings
Maintainers (1)
Dependencies
- actions/download-artifact v4 composite
- codecov/codecov-action v5 composite
- codecov/test-results-action v1 composite
- deepsourcelabs/test-coverage-action master composite
- DeterminateSystems/flake-checker-action v4 composite
- DeterminateSystems/nix-installer-action main composite
- actions/checkout v4 composite
- DeterminateSystems/flake-checker-action v4 composite
- DeterminateSystems/nix-installer-action main composite
- actions/checkout v4 composite
- stefanzweifel/git-auto-commit-action v5 composite
- actions/checkout v4 composite
- astral-sh/setup-uv v4 composite
- python-semantic-release/publish-action v9.21.0 composite
- python-semantic-release/python-semantic-release v9.21.0 composite
- actions/cache v4 composite
- actions/checkout v4 composite
- actions/upload-artifact v4 composite
- astral-sh/setup-uv v4 composite
- numba >=0.61.0
- numpy >=2.2.5
- scipy >=1.15.3
- colorama 0.4.6
- exceptiongroup 1.3.0
- iniconfig 2.1.0
- llvmlite 0.44.0
- lmba 0.1.0
- numba 0.61.2
- numpy 2.2.6
- packaging 25.0
- pluggy 1.6.0
- pytest 8.3.5
- scipy 1.15.3
- tomli 2.2.1
- typing-extensions 4.13.2
- actions/checkout v4 composite
- actions/deploy-pages v4 composite
- actions/upload-pages-artifact v3 composite
- astral-sh/setup-uv v4 composite