ml-optimized-orthogonal-basis-1d-pp
Experimental Python code developed for research on: H. Waclawek and S. Huber, “Machine Learning Optimized Orthogonal Basis Piecewise Polynomial Approximation,” in Learning and Intelligent Optimization, Cham: Springer Nature Switzerland, 2025, pp. 427–441.
https://github.com/jrc-isia/ml-optimized-orthogonal-basis-1d-pp
Science Score: 44.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ○ .zenodo.json file: not found
- ✓ DOI references: found 6 DOI reference(s) in README
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (9.7%) to scientific vocabulary
Basic Info
- Host: GitHub
- Owner: JRC-ISIA
- Language: Python
- Default Branch: main
- Homepage: https://doi.org/10.1007/978-3-031-75623-8_33
- Size: 6.65 MB
Statistics
- Stars: 1
- Watchers: 3
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
readme.md
Machine Learning Optimized Univariate Piecewise Polynomial Approximation for Use in Cam Approximation
Experimental Python code developed for research on:
H. Waclawek and S. Huber, “Machine Learning Optimized Orthogonal Basis Piecewise Polynomial Approximation,” in Learning and Intelligent Optimization, Cham: Springer Nature Switzerland, 2025, pp. 427–441. DOI: 10.1007/978-3-031-75623-8_33
See citation.bib for citation details.
This project is licensed under the terms of the MIT license.
<!--
- https://doi.org/10.1007/978-3-031-75623-8_33
- https://doi.org/10.48550/arXiv.2403.08579
- https://doi.org/10.1007/978-3-031-25312-6_68
-->
Allows fitting of univariate piecewise polynomials of arbitrary degree.
$C^k$-continuity at the segment boundaries requires polynomial degree $2k+1$.
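A quick degrees-of-freedom count makes this plausible (a sketch of the standard argument, not a quote from the paper): a segment of degree $2k+1$ has $2k+2$ coefficients, which is exactly enough to prescribe the value and the first $k$ derivatives at both of its endpoints,

$$2k+2 = \underbrace{(k+1)}_{\text{value and } k \text{ derivatives}} \cdot \underbrace{2}_{\text{endpoints}}.$$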
Supports two bases:
- Power Basis (non-orthogonal)
- Chebyshev Basis (orthogonal)
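To illustrate the difference between the two representations, here is a small NumPy-based sketch (illustrative only; the package keeps its own coefficient handling) expressing the same quadratic in the power and in the Chebyshev basis:

```py
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial

t = np.linspace(-1, 1, 5)
p_power = Polynomial([0.5, -1.0, 0.25])    # 0.5 - t + 0.25 t^2 in the power basis
p_cheb = Chebyshev([0.625, -1.0, 0.125])   # same function in T_0, T_1, T_2
assert np.allclose(p_power(t), p_cheb(t))  # identical values, different coefficients
```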

Usage
Basics
- Fitting using the `model` module:

```py
import numpy as np
from tensorflow import keras

import model

x = np.linspace(0, 2*np.pi, 50)
y = np.sin(x)

pp = model.PP(polydegree=7, polynum=3, ck=3, basis='chebyshev')
opt = keras.optimizers.Adam(amsgrad=True, learning_rate=0.1)

# optimization: how much emphasis do we want to put on continuity?
alpha = 0.01
pp.fit(x, y, optimizer=opt, n_epochs=600,
       factor_approximation_quality=1-alpha, factor_ck_pressure=alpha,
       early_stopping=True, patience=100)
```
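The two factors weight a sum of the fitting objective and the continuity objective. As a minimal sketch of this weighting (the function and argument names below are illustrative, not the package's internals):

```py
import tensorflow as tf

def combined_loss(y_true, y_pred, boundary_jumps, alpha=0.01):
    """Illustrative loss: (1 - alpha) * approximation + alpha * C^k pressure.

    `boundary_jumps` is assumed to hold the mismatches of the first k
    derivatives of neighboring segments at their shared breakpoints.
    """
    l_approx = tf.reduce_mean(tf.square(y_true - y_pred))  # fit quality (MSE)
    l_ck = tf.reduce_mean(tf.square(boundary_jumps))       # continuity penalty
    return (1 - alpha) * l_approx + alpha * l_ck
```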
Note: the x-data is rescaled so that every polynomial segment spans a range of $2$ (the width of the natural Chebyshev domain $[-1, 1]$).
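What such a rescaling could look like, as a standalone sketch (`rescale_to_segments` is a hypothetical helper, not part of the package):

```py
import numpy as np

def rescale_to_segments(x, polynum):
    """Map x linearly so that each of `polynum` segments spans a width of 2."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min()) * 2 * polynum

x_scaled = rescale_to_segments(np.linspace(0, 2*np.pi, 50), polynum=3)
# segment i now covers the interval [2*i, 2*(i + 1)]
```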
We can evaluate generated PPs for specific x-ranges or individual x-values:

```py
import matplotlib.pyplot as plt

xrange = np.linspace(2, 4, 50)
y = pp.evaluate_pp_at_x(xrange, deriv=0)
plt.plot(xrange, y)
```
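Since the project targets cam approximation, the first two derivatives (velocity and acceleration) are typically of interest as well. A sketch, assuming higher `deriv` values are supported the same way as `deriv=0` above:

```py
for d, label in enumerate(['position', 'velocity', 'acceleration']):
    plt.plot(xrange, pp.evaluate_pp_at_x(xrange, deriv=d), label=label)
plt.legend()
plt.show()
```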
- Plotting using the `plot` module:

```py
import plot

plot.plot_pp(pp)
plot.plot_loss(pp)
```
We can plot specific derivatives or losses of specific optimization targets:
```py
plot.plot_pp(pp, deriv=1)
plot.plot_loss(pp, type='continuity-total')
plot.plot_loss(pp, type='continuity-derivatives')
plot.plot_loss(pp, type='approximation')
```
- Initialization from coefficients:

```py
pp_new = model.get_pp_from_coeffs(pp.coeffs, x, y, basis='chebyshev', ck=3)
plot.plot_pp(pp_new)
```
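This also allows persisting a fitted model. A sketch, assuming `pp.coeffs` is array-like as used above:

```py
import numpy as np

np.save('pp_coeffs.npy', np.asarray(pp.coeffs))   # store fitted coefficients
coeffs = np.load('pp_coeffs.npy')                 # ...and restore them later
pp_restored = model.get_pp_from_coeffs(coeffs, x, y, basis='chebyshev', ck=3)
```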
Useful stuff
- Parallel execution
The `parallel` module offers functions for parallel experiment execution.
All return values are pickleable, so the module also works with the process-spawning multiprocessing used on Windows.
For example, comparing the performance of different optimizers:
```py
import multiprocessing as mp
from itertools import repeat

import matplotlib.pyplot as plt

import parallel

optimizers = ['sgd', 'sgd-momentum', 'sgd-momentum-nesterov', 'adagrad',
              'adadelta', 'rmsprop', 'adam', 'adamax', 'nadam',
              'adam-amsgrad', 'adafactor', 'adamw', 'ftrl', 'lion']

kwargs = {'data_x': x, 'data_y': y, 'polynum': 3, 'ck': 3, 'degree': 7,
          'n_epochs': 600, 'learning_rate': 0.01, 'mode': 'optimizers',
          'factor_approximation_quality': 1 - alpha, 'factor_ck_pressure': alpha,
          'basis': 'chebyshev'}

pool = mp.Pool(mp.cpu_count())
results = pool.starmap(parallel.job, zip(optimizers, repeat(kwargs)))

# collect the loss history of each run
losses = [result[1] for result in results]

fig, axes = plt.subplots(4, (len(optimizers) + 2) // 4)
axes = axes.flatten()
fig.set_figwidth(len(optimizers) * 3)
fig.set_figheight(20)
fig.suptitle('Losses over epochs with different optimizers')

for i, opt in enumerate(optimizers):
    axes[i].set_title(opt)
    axes[i].semilogy(losses[i])
```
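A possible follow-up is to rank the runs by their final loss (a sketch; it assumes each entry of `losses` is a per-epoch loss sequence, as plotted above):

```py
final_losses = {opt: losses[i][-1] for i, opt in enumerate(optimizers)}
best = min(final_losses, key=final_losses.get)
print(f'best optimizer: {best} (final loss {final_losses[best]:.3g})')
```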
- Creating animations
```py
import animate

animate.create_animation(file_path='pp_animation', pp=pp, basis='chebyshev',
                         shift_polynomial_centers='mean', plot_loss=True)
```
Owner
- Name: JRC-ISIA
- Login: JRC-ISIA
- Kind: organization
- Repositories: 1
- Profile: https://github.com/JRC-ISIA
Citation (citation.bib)
@inproceedings{WH24,
keywords = {cdg},
title = {{Machine Learning Optimized Orthogonal Basis Piecewise Polynomial Approximation}},
author = {Waclawek, Hannes and Huber, Stefan},
year = {2025},
booktitle = {{Learning and Intelligent Optimization}},
publisher = {{Springer Nature Switzerland}},
address = {Cham},
pages = {427--441},
isbn = {978-3-031-75623-8},
doi = {10.1007/978-3-031-75623-8_33}
}
GitHub Events
Total
- Watch event: 1
- Member event: 1
- Push event: 1
- Create event: 2
Last Year
- Watch event: 1
- Member event: 1
- Push event: 1
- Create event: 2