Recent Releases of https://github.com/earthai-tech/fusionlab-learn
fusionlab v0.3.1
(released 2025-06-21)
Focus
- Backend dependency refactor – optional heavyweight packages are now handled centrally, eliminating build / import errors.
- Subsidence PINN Mini GUI – desktop app for end-to-end forecasting without writing code.
✨ New
Subsidence PINN Mini GUI
```bash
python -m fusionlab.tools.app.mini_forecaster_gui
```
Load a CSV, tune hyper-parameters, run the forecasting pipeline, and visualise results.
📈 Improvements
- Centralised config: `fusionlab/_configs.py` is now the single source of truth for all optional dependencies.
- Config-driven loaders in `fusionlab/compat/` now read that config.
- Smart dummy objects (`fusionlab/_dummies.py`) are auto-generated when a dependency is missing.
- Clean package initialisation: only one `KERAS_DEPS`/`KT_DEPS` per sub-package.
- Decorator split-up: new `adapt_sklearn_input` (reshape helper) and new utility `concatenate_fusionlab_inputs` (inverse operation).
🔄 API changes
- Internal dependency-import logic completely rewritten (public surface unchanged).
- `_scigofast_set_X_compat` ⇒ `adapt_sklearn_input`.
🐛 Fixes
- Circular-import failures (`ImportExceptionGroup`, `ExtensionError`) on RTD builds are gone.
🧪 Tests
- Suite extended to cover real vs. dummy dependency creation.
📚 Docs
- New how-to: user_guide/pinn_gui_guide (walk-through of the Mini GUI).
👥 Contributors
- Laurent Kouadio – Lead developer https://earthai-tech.github.io/
Published by earthai-tech 8 months ago
v0.3.0 (2025-06-17)
Focus: Advanced PINNs and Flexible Attentive Architectures
A major refactor of the core attentive model, introduction of a next-generation PINN foundation, plus a unified hyperparameter tuner (HydroTuner) for all hydrogeological models.
🚀 New Features
- `BaseAttentive`: a modular encoder–decoder + attention base class with a `mode` parameter (`'tft_like'` or `'pihal_like'`).
- `TransFlowSubsNet` (PINN): fully coupled groundwater flow + aquifer consolidation model.
- `HALNet`: Hybrid Attentive LSTM Network as a standalone forecasting model.
- `PiTGWFlow` (PINN): pure-physics solver for 2D transient groundwater flow.
- `HydroTuner` & `HALTuner`: model-agnostic hyperparameter tuners for PINNs and HALNet.
- PINN utilities (`prepare_pinn_data_sequences`) and new spatial utilities (`create_spatial_clusters`, `batch_spatial_sampling`).
- Time-feature utility: `ts_utils.create_time_features`.
- Tuning summary plots automatically generated for all tuners.
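The new time-feature utility derives calendar features from a datetime column. Below is a minimal pandas sketch of the idea; the real `ts_utils.create_time_features` signature and feature set may differ:

```python
import pandas as pd


def create_time_features(df, dt_col):
    # Illustrative re-implementation; fusionlab's actual utility may
    # expose more features and options.
    dt = pd.to_datetime(df[dt_col])
    out = df.copy()
    out["year"] = dt.dt.year
    out["month"] = dt.dt.month
    out["day_of_week"] = dt.dt.dayofweek   # Monday=0 ... Sunday=6
    out["day_of_year"] = dt.dt.dayofyear
    return out


df = pd.DataFrame({"date": ["2025-06-01", "2025-06-17"], "y": [1.0, 2.0]})
feats = create_time_features(df, "date")
```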
✨ Improvements
- Legacy `PIHALNet` now inherits from `BaseAttentive`.
- New visualization helpers: `fusionlab.plot.forecast.plot_forecast_by_step` and `fusionlab.plot.forecast.forecast_view`.
- Configurable via `architecture_config`:
  - Encoder types: `'hybrid'` (MultiScaleLSTM) or `'transformer'`.
  - Custom `decoder_attention_stack`.
- Tuners auto-infer dimensions from data.
- `.run(..., refit_best_model=False)` for faster tuning.
- Custom MLP correction via `correction_mlp_config`.
- 30% speed-up in `prepare_pinn_data_sequences`.
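As a rough illustration of what an `architecture_config` might look like: the key names follow the notes above, but the values shown are placeholders, so consult the `BaseAttentive` documentation before copying:

```python
# Hypothetical configuration dict; key names come from the notes above,
# values are placeholders rather than fusionlab's documented options.
architecture_config = {
    "encoder_type": "hybrid",  # 'hybrid' (MultiScaleLSTM) or 'transformer'
    "decoder_attention_stack": ["cross_attention", "hierarchical_attention"],
}
```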
🔄 API Changes
- `BaseAttentive` is the new standard base class; models are configured via `architecture_config`.
- `HydroTuner` replaces the legacy `PiHALTuner` (requires `model_name_or_cls`).
- `search_space` replaces `param_space` (the old name triggers a `FutureWarning`).
- `PIHALNet.compile()` now accepts a single `lambda_pde` weight.
- `MultiObjectiveLoss` accepts `anomaly_scores` in its constructor.
🐛 Fixes
- Refactored PINN internals to fix `ValueError`/`InvalidArgumentError` on mismatched shapes.
- Corrected residual-connection logic for `use_residuals=False`.
- Switched to sinusoidal positional encoding.
- Fixed PINN gradient calculations (single `GradientTape`, zero-weight handling).
- Edge-case handling in `prepare_pinn_data_sequences` (single-group series).
- `PINNTunerBase.search` now supports tensor `y`.
- `forecast_view` ignores missing years with a warning.
- `HydroTuner.create` now correctly handles `quantiles=None`.
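The switch to sinusoidal positional encoding refers to the standard fixed encoding from the Transformer literature (Vaswani et al., 2017). A NumPy sketch of that encoding, for readers unfamiliar with it (fusionlab's `PositionalEncoding` layer may differ in detail):

```python
import numpy as np


def sinusoidal_positional_encoding(seq_len, d_model):
    """Standard sinusoidal encoding: sin on even dims, cos on odd dims."""
    positions = np.arange(seq_len)[:, None]                      # (T, 1)
    div_terms = np.exp(
        -np.log(10000.0) * (2 * (np.arange(d_model) // 2)) / d_model
    )                                                            # (D,)
    angles = positions * div_terms                               # (T, D)
    pe = np.empty((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])
    pe[:, 1::2] = np.cos(angles[:, 1::2])
    return pe
```

Being deterministic, this encoding adds no trainable parameters and extrapolates smoothly to any sequence length.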
✅ Tests
- Pytest suites for `HALNet`, `TransFlowSubsNet`, `PiTGWFlow`, `PositionalEncoding`, `HydroTuner`, data utilities, spatial utilities, and regression tests for zero-weight PINNs.
📚 Documentation
- New user guides & gallery pages for `HALNet`, PINN models, `HydroTuner`, exercises, “Tips & Tricks”, and detailed docstrings with runnable examples.
👥 Contributors
- Laurent Kouadio (Lead Developer)
Published by earthai-tech 8 months ago
Version 0.2.3
Release Date: May 25, 2025
Focus: Object‑Oriented Hyperparameter Tuning
This release brings a major upgrade to hyperparameter tuning in
fusionlab‑learn. A new class‑based forecast_tuner API
delivers greater structure, reusability, and flexibility. The legacy
function‑based interface remains for backward compatibility, but the new
classes are now the recommended path for model optimization.
Enhancements & Improvements
New Class‑Based Tuners
- |New| `BaseTuner` – internal base class (`fusionlab.nn._forecast_tuner.BaseTuner`) that wraps Keras-Tuner logic (validation, model building, tuning loop, logging) into an extensible foundation.
- |New| `XTFTTuner` – specialized tuner for `fusionlab.nn.transformers.XTFT` and `SuperXTFT`, inheriting from `BaseTuner`.
- |New| `TFTTuner` – specialized tuner for strict `fusionlab.nn.transformers.TFT` and flexible `TemporalFusionTransformer` (`tft_flex`) variants.
Improved Tuning Workflow
- |Enhancement| Clear separation of configuration (`__init__`) and execution (`fit`), enabling a single tuner instance to fit multiple datasets or task setups (e.g. different `forecast_horizon`, `quantiles`).
- |Enhancement| `BaseTuner` retains the internal `_model_builder_factory` for robust default model construction, while still accepting a user-supplied `custom_model_builder`.
- |Enhancement| Smarter input-tensor handling, with automatic dummy tensors when `tft_flex` requires them.
Code Example – Class‑Based Approach
```python
import numpy as np
from fusionlab.nn.forecast_tuner import XTFTTuner

# 1 · Dummy data
B, T_past, H_out = 8, 12, 6
D_s, D_d, D_f = 3, 5, 2
T_future_total = T_past + H_out
X_s = np.random.rand(B, D_s).astype(np.float32)
X_d = np.random.rand(B, T_past, D_d).astype(np.float32)
X_f = np.random.rand(B, T_future_total, D_f).astype(np.float32)
y = np.random.rand(B, H_out, 1).astype(np.float32)
train_inputs = [X_s, X_d, X_f]

# 2 · Instantiate the tuner
tuner = XTFTTuner(
    model_name="xtft",
    max_trials=3,      # small for demo
    epochs=2,          # small for demo
    batch_sizes=[8],   # one batch size
    tuner_dir="./xtft_class_tuning_v023",
    verbose=0,         # silence Keras-Tuner logs
)

# 3 · Run tuning
print("Starting XTFT tuning with new class-based approach...")
best_hps, best_model, _ = tuner.fit(
    inputs=train_inputs,
    y=y,
    forecast_horizon=H_out,
)

# 4 · Inspect results
if best_hps:
    print("Tuning successful!")
    print(f"Best Batch Size: {best_hps.get('batch_size')}")
    print(f"Best Learning Rate: {best_hps.get('learning_rate')}")
else:
    print("Tuning did not find a best model.")
```
Published by earthai-tech 9 months ago
Version 0.2.2
Release Date: May 24, 2025
Focus: Usability Enhancements, Minor Fixes, and Documentation Polish
This patch builds on the utility standardization introduced in v0.2.1, bringing further usability improvements to plotting functions, addressing minor bugs, and refining the documentation for clarity and completeness.
Enhancements & Improvements
|Enhancement| `plot_forecasts` (`fusionlab.plot.forecast.plot_forecasts`, an alias for `visualize_forecasts`)
- Added `figsize_per_subplot` for direct control of individual subplot sizes when `kind="temporal"` is used with multiple samples or output dimensions. The overall figure size is now computed dynamically.
- More informative auto-generated subplot titles, especially for multi-output models.
- More flexible handling of `actual_data` for comparisons against external ground truth in temporal plots.

|Enhancement| `plot_metric_over_horizon` (`fusionlab.plot.evaluation.plot_metric_over_horizon`)
- Gracefully skips metric points that cannot be computed (e.g. all-NaNs, division by zero) and issues a warning instead of raising an error.

|Enhancement| `plot_metric_radar` (`fusionlab.plot.evaluation.plot_metric_radar`)
- Improved y-axis tick formatting for easier reading of metric values.
- New `max_segments_to_plot` parameter prevents overly cluttered radar charts (a warning is emitted if segments exceed the limit).

|Enhancement| Minor performance gains in `fusionlab.nn.utils.format_predictions_to_dataframe` for very large prediction arrays.

|Enhancement| Clearer error messages from `fusionlab.nn._tensor_validation.validate_model_inputs` when `model_name="tft_flex"` receives an unexpected number of inputs (soft-mode validation helper).
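The skip-and-warn behaviour of `plot_metric_over_horizon` can be pictured with a small sketch: compute the metric per horizon step, and skip a step with a warning when no valid points exist. This is illustrative only; the real function plots results rather than returning a dict:

```python
import warnings

import numpy as np


def metric_over_horizon(y_true, y_pred, metric):
    """Per-horizon-step metric with skip-and-warn on uncomputable steps.

    y_true, y_pred: arrays of shape (n_samples, horizon).
    """
    scores = {}
    for h in range(y_true.shape[1]):
        t, p = y_true[:, h], y_pred[:, h]
        mask = ~(np.isnan(t) | np.isnan(p))
        if not mask.any():
            # No valid points at this step: warn instead of raising.
            warnings.warn(f"Skipping horizon step {h}: no valid points.")
            continue
        scores[h] = metric(t[mask], p[mask])
    return scores
```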
Fixes
|Fix| `reshape_xtft_data` – ensures `spatial_cols` with non-string identifiers are handled consistently during grouping, avoiding mis-grouping issues.
|Fix| `plot_forecasts` – now respects `spatial_cols` if the `forecast_df` uses non-default coordinate names.
|Fix| `XTFT` – guarantees `anomaly_scores` is reset when `anomaly_detection_strategy` changes without recompilation.
|Fix| `plot_metric_over_horizon` – prevents a potential `KeyError` when using custom metrics with `output_dim > 1` and missing aggregation logic.
Tests
|Tests| Expanded pytest coverage for `fusionlab.plot.evaluation` functions (`plot_forecasts`, `plot_metric_over_horizon`, `plot_metric_radar`), including edge cases such as empty DataFrames and missing optional columns.
|Tests| Added tests verifying correct `verbose` logging behavior across utility functions.
Documentation
|Docs| New User Guide page `/user_guide/evaluation/evaluation_plotting` – showcases `plot_forecast_comparison` (renamed from `plot_forecasts` in v0.2.1), `plot_metric_over_horizon`, and `plot_metric_radar`.
|Docs| Reorganized `user_guide/index.rst` for a clearer structure with new “Utilities” and “Evaluation & Visualization” sections.
|Docs| Restructured the “Examples Gallery” (`gallery/index.rst`) to include a dedicated “Exercises” section (`exercises/index.rst`) and converted several examples into guided exercises: `anomaly_detection_exercise.rst`, `exercise_advanced_xtft.rst`, `exercise_basic_forecasting.rst`, `exercise_tft_required.rst`.
|Docs| Added the `forecasting_workflow_utils` guide (`/user_guide/utils/forecasting_workflow_utils`) illustrating combined use of `prepare_model_inputs`, `format_predictions_to_dataframe`, and `plot_forecasts`.
|Docs| Clarified parameter names in `format_predictions_to_dataframe` and `plot_forecasts` (e.g. `model_inputs` vs `inputs`, `y_true_sequences` vs `y`).
|Docs| New guide `/user_guide/visualizing_with_kdiagram` – shows how to integrate `fusionlab-learn` outputs with the k-diagram library for polar visualizations.
|Docs| Updated `installation.rst` with instructions for installing optional dependencies via extras (`pip install fusionlab-learn[kdiagram]`).
Contributors
- Laurent Kouadio – Lead Developer
Published by earthai-tech 9 months ago
Version 0.2.0
(Release Date: May 20, 2025)
Focus: Major Input Validation Overhaul, Enhanced Tuner, New Dataset Utilities, and API Refinements
This release introduces significant improvements to input validation robustness across all models, particularly for TensorFlow graph execution. The hyperparameter tuning framework has been substantially enhanced for better model compatibility and parameter handling. New dataset loading and generation utilities have been added. Key API refinements include the renaming of NTemporalFusionTransformer to DummyTFT for clarity.
New Features
- Feature Added the `load_processed_subsidence_data` utility. This function provides a comprehensive pipeline for loading the raw Zhongshan or Nansha datasets, applying a predefined preprocessing workflow (feature selection, NaN handling, encoding, scaling), and optionally reshaping data into sequences for TFT/XTFT models. Includes caching for processed data and sequences.
- Feature Introduced `n_samples` and `random_state` parameters to `fetch_zhongshan_data` and `fetch_nansha_data`, allowing loading of the full sampled dataset or a smaller, spatially stratified sub-sample.
- Feature Added new synthetic data generators to `fusionlab.datasets.make`:
  - `make_trend_seasonal_data`: generates univariate series with configurable trend and multiple seasonal components.
  - `make_multivariate_target_data`: generates multi-series data with static/dynamic/future features and multiple, potentially interdependent, target variables.
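A minimal NumPy analogue of what `make_trend_seasonal_data` produces — a linear trend plus several sinusoidal seasonal components with noise. The function name, signature, and defaults below are invented for illustration; the real generator differs:

```python
import numpy as np


def make_trend_seasonal(n=200, trend_slope=0.05, periods=(12, 7),
                        amplitudes=(1.0, 0.5), noise_std=0.1, seed=0):
    """Univariate series: linear trend + seasonal sinusoids + noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n, dtype=float)
    y = trend_slope * t                       # linear trend component
    for period, amp in zip(periods, amplitudes):
        y += amp * np.sin(2 * np.pi * t / period)  # seasonal component
    return y + rng.normal(0.0, noise_std, size=n)  # additive noise
```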
API Changes & Enhancements
- API Change Renamed `NTemporalFusionTransformer` to `DummyTFT` to better reflect its role as a simplified TFT variant (static and dynamic inputs only, primarily for point forecasts). The `future_input_dim` parameter is now accepted in `DummyTFT.__init__` for API consistency, but it is internally ignored and a warning is issued.
- Enhancement Major refactoring of input validation with the introduction of `validate_model_inputs`. This function provides:
  - Robust graph-compatible checks for tensor ranks, feature dimensions, batch sizes, and time-dimension consistency using TensorFlow operations.
  - A `mode` parameter (`'strict'` or `'soft'`) to control validation depth.
  - A specialized internal helper (`_validate_tft_flexible_inputs_soft_mode`) to intelligently infer input roles for the flexible `TemporalFusionTransformer` (when `model_name='tft_flex'` and `mode='soft'`).
  - A consistent return order of `(static, dynamic, future)` processed tensors, requiring updates in model `call` methods that use it.
- Enhancement Improved `forecast_tuner` (`xtft_tuner` and its internal `_model_builder_factory`):
  - Correctly handles `model_name` options: `"xtft"`, `"superxtft"`, `"tft"` (stricter), and `"tft_flex"` (flexible `TemporalFusionTransformer`).
  - Ensures the appropriate input-validation path is chosen based on `model_name` before calling `validate_model_inputs`.
  - Passes only relevant parameters to model constructors, especially for the flexible `TemporalFusionTransformer`.
  - Correctly derives and passes input dimensions to the model builder, respecting `None` for optional inputs in `tft_flex`.
  - Robustly handles boolean hyperparameters (e.g., `use_batch_norm`, `use_residuals`) and list-like hyperparameters (e.g., `scales`) for Keras Tuner, ensuring correct type casting before model instantiation.
- Enhancement Refined `XTFT.call` and `SuperXTFT.call` to use the `align_temporal_dimensions` helper, ensuring correct time alignment of inputs before they are passed to components like `MultiModalEmbedding` and `HierarchicalAttention`.
- Enhancement Removed redundant concatenation of `embeddings_with_pos` in the final feature-fusion stage of `XTFT.call`.
- Enhancement Refined `DummyTFT`:
  - `call`: now correctly uses `validate_model_inputs` for its two-input (static, dynamic) signature by passing appropriate parameters for `future_covariate_dim` (`None`) and `model_name`. Output-layer logic for quantiles with `output_dim > 1` now correctly stacks to `(B, H, Q, O)`.
  - `get_config`: includes `_future_input_dim_config` (what the user passed) and `output_dim`.
- Enhancement Made `get_versions` more resilient by attempting to import `importlib_metadata` as a fallback if `importlib.metadata` (Python 3.8+) is not found.
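The `(B, H, Q, O)` stacking described for `DummyTFT`'s quantile output amounts to stacking Q per-quantile tensors of shape `(B, H, O)` along a new quantile axis. A NumPy sketch of the shape manipulation (the model itself does this with TensorFlow ops):

```python
import numpy as np

# B = batch, H = forecast horizon, Q = number of quantiles, O = output dim.
B, H, Q, O = 4, 6, 3, 2
rng = np.random.default_rng(0)

# One decoder output per quantile, each of shape (B, H, O).
per_quantile = [rng.random((B, H, O)) for _ in range(Q)]

# Stack along a new axis 2 to obtain (B, H, Q, O).
stacked = np.stack(per_quantile, axis=2)
```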
Fixes
- Fix Resolved `AttributeError: 'Tensor' object has no attribute 'numpy'` in input-validation functions by replacing Python boolean conversions of symbolic tensors with TensorFlow graph-compatible assertions (e.g., `tf.debugging.assert_equal`).
- Fix Addressed `InvalidArgumentError: Static input must be 2D. Got rank X` and similar rank/dimension mismatch errors in `validate_model_inputs` by using `tf.rank` and `tf.shape` consistently with `tf.debugging.assert_equal`.
- Fix Corrected `ValueError: Dimension 1 in both shapes must be equal...` in `MultiModalEmbedding` and `InvalidArgumentError: Incompatible shapes... [Op:AddV2]` in `HierarchicalAttention` by ensuring time-aligned inputs are passed from model `call` methods (using `align_temporal_dimensions`).
- Fix Fixed `TypeError: A Choice can contain only int, float, str, or bool...` and `InvalidParameterError: ...must be an instance of 'bool'. Got 0/1...` in `_model_builder_factory` of `forecast_tuner.py`. Boolean hyperparameters are now defined using `hp.Choice` with `[True, False]` values, and `scales` are handled using string options mapped to actual values. Explicit casting to `bool` is applied before model instantiation.
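The `hp.Choice` constraint in the last fix (only int/float/str/bool values are allowed) is why list-valued hyperparameters like `scales` must be registered as strings and mapped back before model construction. A pure-Python sketch of the pattern, with hypothetical option names and values:

```python
# Hypothetical option-name -> value mapping; the real tuner's options differ.
SCALE_OPTIONS = {
    "no_scales": None,
    "scales_1_2": [1, 2],
    "scales_1_2_4": [1, 2, 4],
}


def resolve_scales(choice):
    # The tuner samples a string key; map it back to the list value.
    return SCALE_OPTIONS[choice]


def resolve_bool(value):
    # Some tuner backends return 0/1; cast explicitly before model init.
    return bool(value)
```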
Tests
- Tests Added a comprehensive pytest suite for the revised `validate_model_inputs`, covering different modes, input combinations, and error conditions.
- Tests Updated the pytest suite for `forecast_tuner` to test various `model_name` options and ensure correct parameter handling.
- Tests Added a pytest suite for `DummyTFT`.
- Tests Updated the pytest suite for `reshape_xtft_data` to fix minor issues and ensure save functionality with `tmp_path`.
Documentation
- Docs Updated the User Guide for `fusionlab.datasets` to include documentation for `load_processed_subsidence_data` and the new data-generation functions in `make.py`.
- Docs Revised the User Guide for `fusionlab.nn.forecast_tuner` with step-by-step examples.
- Docs Updated the API reference in `api.rst` to include new dataset functions.
- Docs Corrected license information in `license.rst` to BSD-3-Clause.
- Docs Updated `README.md` for the Code Ocean capsule to emphasize Python version requirements and clarify data usage.
Published by earthai-tech 9 months ago
Release v0.1.1 (April 25, 2025)
This patch focuses on critical bug fixes and improved stability around graph execution and custom layer interactions in FusionLab.
🐛 Bug Fixes & Stability
- GatedResidualNetwork & other components: converted activation strings to callables via `tf.keras.activations.get()` to eliminate `TypeError: 'str' object is not callable`.
- GRN context broadcasting: added robust broadcasting logic (using `tf.cond`, `tf.rank`, `tf.expand_dims`) and removed the problematic `@tf.autograph.experimental.do_not_convert` decorator to fix `ValueError: Incompatible shapes`.
- GRN build method: avoided iterating over a dynamic `TensorShape`, preventing `ValueError: Cannot iterate over a shape with unknown rank`.
- VariableSelectionNetwork (VSN): switched from Python loops & slicing to `tf.unstack`/`tf.stack` (or retained the decorator-based loop fix) to resolve `TypeError: list indices must be integers or slices, not SymbolicTensor`.
- Dense layer input shape: ensured internal GRNs are built with known shapes ahead of time to fix `ValueError: The last dimension of the inputs to a Dense layer should be defined`.
- TFT TimeDistributed output: corrected 3D tensor slicing for quantile outputs, addressing `ValueError: TimeDistributed Layer should be passed an input_shape with at least 3 dimensions`.
- Cleanup: removed the unused `use_time_distributed` parameter from `GatedResidualNetwork.__init__` and `get_config`.
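The first fix resolves activation identifiers to callables at construction time, so layers never try to call a raw string. The pattern is shown below with a plain stand-alone registry instead of `tf.keras.activations.get()` (a simplified sketch, not the Keras implementation):

```python
import math

# Minimal registry standing in for the framework's activation lookup.
_ACTIVATIONS = {
    "relu": lambda x: max(0.0, x),
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "linear": lambda x: x,
}


def get_activation(identifier):
    """Resolve a string name or callable to a callable activation."""
    if callable(identifier):
        return identifier  # already a callable: pass through unchanged
    return _ACTIVATIONS[identifier]
```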
✅ Tests Added/Updated
- Component tests for `GatedResidualNetwork`, `VariableSelectionNetwork`, `TemporalAttentionLayer`, `TFT`, and `XTFT`, covering context handling, modes, training, and serialization.
- Dataset tests for `fusionlab.datasets.make` functions.
🎉 Contributors
- earthai-tech
Published by earthai-tech 10 months ago