Recent Releases of https://github.com/earthai-tech/fusionlab-learn


fusionlab v0.3.1 (released 2025-06-21)

Focus

  • Backend dependency refactor – optional heavyweight packages are now handled centrally, eliminating build / import errors.
  • Subsidence PINN Mini GUI – desktop app for end-to-end forecasting without writing code.

✨ New

  • Subsidence PINN Mini GUI

    ```bash
    python -m fusionlab.tools.app.miniforecastergui
    ```

  • Load CSV data, tune hyper-parameters, run the forecasting pipeline, and visualise the results.


📈 Improvements

  • Centralised config fusionlab/_configs.py – single source of truth for all optional deps.
  • Config-driven loaders in fusionlab/compat/ now read that config.
  • Smart dummy objects (fusionlab/_dummies.py) auto-generated when a dep is missing.
  • Clean package initialisation – only one KERAS_DEPS / KT_DEPS per sub-package.
  • Decorator split-up

    • new adapt_sklearn_input (reshape helper)
    • new utility concatenate_fusionlab_inputs (inverse operation).
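The "smart dummy" mechanism described above can be pictured with a short sketch. The class below illustrates the pattern only (it is not the actual fusionlab/_dummies.py implementation): importing succeeds even when an optional dependency is absent, and a helpful ImportError surfaces only when the dependency is actually used.

```python
# Illustrative sketch of the dummy-dependency pattern; the class name
# and install hint are assumptions, not fusionlab internals.

class MissingDependency:
    """Placeholder that defers the ImportError until first use."""

    def __init__(self, name, extra):
        self._name, self._extra = name, extra

    def __getattr__(self, attr):
        # Any attribute access on the dummy raises a helpful error.
        raise ImportError(
            f"'{self._name}' is required for this feature. "
            f"Install it with: pip install fusionlab-learn[{self._extra}]"
        )

try:
    import keras_tuner as kt  # optional heavyweight dependency
except ImportError:
    kt = MissingDependency("keras-tuner", "dev")

# Package import now never fails; the error appears only if tuning is
# actually attempted, e.g. kt.RandomSearch(...) -> ImportError with hint.
```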

🔄 API changes

  • Internal dependency-import logic completely rewritten (public surface unchanged).
  • _scigofast_set_X_compat replaced by adapt_sklearn_input.

🐛 Fixes

  • Circular-import failures (ImportExceptionGroup, ExtensionError) on RTD builds are gone.

🧪 Tests

  • Suite extended to cover real vs. dummy dependency creation.

📚 Docs

  • New how-to: user_guide/pinn_gui_guide (walk-through of the Mini GUI).

👥 Contributors

- Python
Published by earthai-tech 8 months ago


v0.3.0 (2025-06-17)

Focus: Advanced PINNs and Flexible Attentive Architectures
A major refactor of the core attentive model, introduction of a next-generation PINN foundation, plus a unified hyperparameter tuner (HydroTuner) for all hydrogeological models.


🚀 New Features

  • BaseAttentive
    A modular encoder–decoder + attention base class with a mode parameter ('tft_like' or 'pihal_like').

  • TransFlowSubsNet (PINN)
    Fully-coupled groundwater flow + aquifer consolidation model.

  • HALNet
    Hybrid Attentive LSTM Network as a standalone forecasting model.

  • PiTGWFlow (PINN)
    Pure-physics solver for 2D transient groundwater flow.

  • HydroTuner & HALTuner
    Model-agnostic hyperparameter tuners for PINNs and HALNet.

  • PINN utilities (prepare_pinn_data_sequences) and new spatial utilities (create_spatial_clusters, batch_spatial_sampling).

  • Time-feature utility: ts_utils.create_time_features.

  • Tuning summary plots automatically generated for all tuners.
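As a rough illustration of the physics PiTGWFlow targets, the standard 2D transient groundwater-flow equation for a confined aquifer is S ∂h/∂t = T ∇²h, and a PINN minimises the residual of this PDE at sampled points. The numpy sketch below checks such a residual with finite differences on an analytic field; the symbols S and T and the test field are illustrative assumptions, not fusionlab internals.

```python
import numpy as np

def flow_residual(h, dx, dy, dt, S, T):
    """Finite-difference residual of S*h_t - T*(h_xx + h_yy).

    h has shape (nt, nx, ny); the residual is evaluated on interior
    grid points with central differences.
    """
    h_t = (h[2:, 1:-1, 1:-1] - h[:-2, 1:-1, 1:-1]) / (2 * dt)
    h_xx = (h[1:-1, 2:, 1:-1] - 2 * h[1:-1, 1:-1, 1:-1]
            + h[1:-1, :-2, 1:-1]) / dx**2
    h_yy = (h[1:-1, 1:-1, 2:] - 2 * h[1:-1, 1:-1, 1:-1]
            + h[1:-1, 1:-1, :-2]) / dy**2
    return S * h_t - T * (h_xx + h_yy)

# h(t, x, y) = exp(-t) sin(x) sin(y) solves the PDE exactly when S = 2T.
t = np.linspace(0.0, 0.5, 51)
x = np.linspace(0.0, np.pi, 41)
y = np.linspace(0.0, np.pi, 41)
tt, xx, yy = np.meshgrid(t, x, y, indexing="ij")
h = np.exp(-tt) * np.sin(xx) * np.sin(yy)

res = flow_residual(h, x[1] - x[0], y[1] - y[0], t[1] - t[0], S=2.0, T=1.0)
print(float(np.abs(res).max()))  # small discretisation error
```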


✨ Improvements

  • Legacy PIHALNet now inherits from BaseAttentive.
  • New visualization helpers:
    • fusionlab.plot.forecast.plot_forecast_by_step
    • fusionlab.plot.forecast.forecast_view
  • Configurable via architecture_config:
    • Encoder types: 'hybrid' (MultiScaleLSTM) or 'transformer'.
    • Custom decoder_attention_stack.
  • Tuners auto-infer dimensions from data.
  • .run(..., refit_best_model=False) for faster tuning.
  • Custom MLP correction via correction_mlp_config.
  • 30% speed-up in prepare_pinn_data_sequences.

🔄 API Changes

  • BaseAttentive is the new standard base class; uses architecture_config.
  • HydroTuner replaces legacy PiHALTuner (requires model_name_or_cls).
  • search_space replaces param_space (old name triggers FutureWarning).
  • PIHALNet.compile() now accepts a single lambda_pde weight.
  • MultiObjectiveLoss accepts anomaly_scores in its constructor.

🐛 Fixes

  • Refactored PINN internals to fix ValueError/InvalidArgumentError on mismatched shapes.
  • Corrected residual connections logic for use_residuals=False.
  • Switched to sinusoidal positional encoding.
  • Fixed PINN gradient calculations (single GradientTape, zero-weight handling).
  • Edge-case handling in prepare_pinn_data_sequences (single-group series).
  • PINNTunerBase.search now supports tensor y.
  • forecast_view ignores missing years with a warning.
  • HydroTuner.create now correctly handles quantiles=None.
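For reference, the sinusoidal positional encoding adopted in the fix above is the standard scheme from the Transformer literature. A minimal numpy sketch of that encoding (fusionlab's own layer signature may differ):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Standard sin/cos positional encoding, shape (seq_len, d_model)."""
    pos = np.arange(seq_len)[:, None]    # (T, 1)
    i = np.arange(d_model)[None, :]      # (1, D)
    # Pairs of dimensions share a frequency; even dims use sin, odd use cos.
    angle = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

pe = sinusoidal_positional_encoding(12, 8)
print(pe.shape)  # (12, 8)
```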

✅ Tests

  • Pytest suites for HALNet, TransFlowSubsNet, PiTGWFlow, PositionalEncoding, HydroTuner, data utilities, spatial utilities, and regression tests for zero-weight PINNs.

📚 Documentation

  • New user guides & gallery pages for HALNet, PINN models, HydroTuner, exercises, “Tips & Tricks”, and detailed docstrings with runnable examples.

👥 Contributors

  • Laurent Kouadio (Lead Developer)



Version 0.2.3

Release Date: May 25, 2025

Focus: Object‑Oriented Hyperparameter Tuning

This release brings a major upgrade to hyperparameter tuning in fusionlab‑learn. A new class‑based forecast_tuner API delivers greater structure, reusability, and flexibility. The legacy function‑based interface remains for backward compatibility, but the new classes are now the recommended path for model optimization.


Enhancements & Improvements

  • New Class‑Based Tuners

    • |New| BaseTuner – internal base class
      (fusionlab.nn._forecast_tuner.BaseTuner) that wraps Keras‑Tuner logic (validation, model building, tuning loop, logging) into an extensible foundation.
    • |New| XTFTTuner – specialized tuner for fusionlab.nn.transformers.XTFT and SuperXTFT, inheriting from BaseTuner.
    • |New| TFTTuner – specialized tuner for strict fusionlab.nn.transformers.TFT and flexible TemporalFusionTransformer (tft_flex) variants.
  • Improved Tuning Workflow

    • |Enhancement| Clear separation of configuration (__init__) and execution (fit), enabling a single tuner instance to fit multiple datasets or task setups (e.g. different forecast_horizon, quantiles).
    • |Enhancement| BaseTuner retains the internal _model_builder_factory for robust default model construction, while still accepting a user‑supplied custom_model_builder.
    • |Enhancement| Smarter input‑tensor handling with automatic dummy tensors when tft_flex requires them.

Code Example – Class‑Based Approach

```python
import numpy as np
from fusionlab.nn.forecast_tuner import XTFTTuner

# 1 · Dummy data
B, T_past, H_out = 8, 12, 6
D_s, D_d, D_f = 3, 5, 2
T_future_total = T_past + H_out
X_s = np.random.rand(B, D_s).astype(np.float32)
X_d = np.random.rand(B, T_past, D_d).astype(np.float32)
X_f = np.random.rand(B, T_future_total, D_f).astype(np.float32)
y = np.random.rand(B, H_out, 1).astype(np.float32)
train_inputs = [X_s, X_d, X_f]

# 2 · Instantiate the tuner
tuner = XTFTTuner(
    model_name="xtft",
    max_trials=3,                      # small for demo
    epochs=2,                          # small for demo
    batch_sizes=[8],                   # one batch size
    tuner_dir="./xtft_class_tuning_v023",
    verbose=0                          # silence Keras-Tuner logs
)

# 3 · Run tuning
print("Starting XTFT tuning with new class-based approach...")
best_hps, best_model, _ = tuner.fit(
    inputs=train_inputs,
    y=y,
    forecast_horizon=H_out
)

# 4 · Inspect results
if best_hps:
    print("Tuning successful!")
    print(f"Best Batch Size: {best_hps.get('batch_size')}")
    print(f"Best Learning Rate: {best_hps.get('learning_rate')}")
else:
    print("Tuning did not find a best model.")
```



Version 0.2.2

Release Date: May 24, 2025

Focus: Usability Enhancements, Minor Fixes, and Documentation Polish

This patch builds on the utility standardization introduced in v0.2.1, bringing further usability improvements to plotting functions, addressing minor bugs, and refining the documentation for clarity and completeness.


Enhancements & Improvements

  • |Enhancement| plot_forecasts (fusionlab.plot.forecast.plot_forecasts, an alias for visualize_forecasts)

    • Added figsize_per_subplot for direct control of individual subplot sizes when kind="temporal" with multiple samples or output dimensions. The overall figure size is now computed dynamically.
    • More informative auto‑generated subplot titles, especially for multi‑output models.
    • More flexible handling of actual_data for comparisons to external ground truth in temporal plots.
  • |Enhancement| plot_metric_over_horizon (fusionlab.plot.evaluation.plot_metric_over_horizon)

    • Gracefully skips metric points that cannot be computed (e.g. all‑NaNs, division by zero) and issues a warning instead of raising an error.
  • |Enhancement| plot_metric_radar (fusionlab.plot.evaluation.plot_metric_radar)

    • Improved y‑axis tick formatting for easier reading of metric values.
    • New max_segments_to_plot parameter prevents overly cluttered radar charts (warning emitted if segments exceed the limit).
  • |Enhancement| Minor performance gains in fusionlab.nn.utils.format_predictions_to_dataframe for very large prediction arrays.

  • |Enhancement| Clearer error messages from fusionlab.nn._tensor_validation.validate_model_inputs when model_name="tft_flex" receives an unexpected number of inputs (soft-mode validation helper).
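The graceful-skip behaviour described for plot_metric_over_horizon can be sketched as follows; the helper name and signature here are illustrative stand-ins, not the fusionlab API:

```python
import warnings
import numpy as np

def mae_over_horizon(y_true, y_pred):
    """Per-horizon-step MAE for arrays of shape (n_samples, horizon).

    Steps that cannot be scored (e.g. all-NaN) are skipped with a
    warning instead of raising, mirroring the behaviour described above.
    """
    scores = []
    for step in range(y_true.shape[1]):
        t, p = y_true[:, step], y_pred[:, step]
        mask = ~np.isnan(t) & ~np.isnan(p)
        if not mask.any():
            warnings.warn(f"Step {step}: no valid points, skipping.")
            scores.append(np.nan)
            continue
        scores.append(float(np.mean(np.abs(t[mask] - p[mask]))))
    return scores

y_true = np.array([[1.0, np.nan], [3.0, np.nan]])
y_pred = np.array([[1.5, 2.0], [2.0, 2.0]])
print(mae_over_horizon(y_true, y_pred))  # step 1 is all-NaN -> nan
```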


Fixes

  • |Fix| reshape_xtft_data – ensures spatial_cols with non‑string identifiers are handled consistently during grouping to avoid mis‑grouping issues.

  • |Fix| plot_forecasts – now respects spatial_cols if the forecast_df uses non‑default coordinate names.

  • |Fix| XTFT – guarantees anomaly_scores is reset when anomaly_detection_strategy changes without recompilation.

  • |Fix| plot_metric_over_horizon – prevents potential KeyError when using custom metrics with output_dim > 1 and missing aggregation logic.


Tests

  • |Tests| Expanded pytest coverage for fusionlab.plot.evaluation functions (plot_forecasts, plot_metric_over_horizon, plot_metric_radar) including edge cases such as empty DataFrames and missing optional columns.

  • |Tests| Added tests verifying correct verbose logging behavior across utility functions.


Documentation

  • |Docs| New User Guide page
    /user_guide/evaluation/evaluation_plotting – showcases plot_forecast_comparison (renamed from plot_forecasts in v0.2.1), plot_metric_over_horizon, and plot_metric_radar.

  • |Docs| Reorganized user_guide/index.rst for a clearer structure with new “Utilities” and “Evaluation & Visualization” sections.

  • |Docs| Restructured “Examples Gallery” (gallery/index.rst) to include a dedicated “Exercises” section (exercises/index.rst) and converted several examples into guided exercises: anomaly_detection_exercise.rst, exercise_advanced_xtft.rst, exercise_basic_forecasting.rst, exercise_tft_required.rst.

  • |Docs| Added forecasting_workflow_utils guide (/user_guide/utils/forecasting_workflow_utils) illustrating combined use of prepare_model_inputs, format_predictions_to_dataframe, and plot_forecasts.

  • |Docs| Clarified parameter names in format_predictions_to_dataframe and plot_forecasts (e.g. model_inputs vs inputs, y_true_sequences vs y).

  • |Docs| New guide /user_guide/visualizing_with_kdiagram – shows how to integrate fusionlab‑learn outputs with the k‑diagram library for polar visualizations.

  • |Docs| Updated installation.rst with instructions for installing optional dependencies via extras (pip install fusionlab-learn[kdiagram]).


Contributors



Version 0.2.0

(Release Date: May 20, 2025)

Focus: Major Input Validation Overhaul, Enhanced Tuner, New Dataset Utilities, and API Refinements

This release introduces significant improvements to input validation robustness across all models, particularly for TensorFlow graph execution. The hyperparameter tuning framework has been substantially enhanced for better model compatibility and parameter handling. New dataset loading and generation utilities have been added. Key API refinements include the renaming of NTemporalFusionTransformer to DummyTFT for clarity.

New Features

  • Feature Added load_processed_subsidence_data utility. This function provides a comprehensive pipeline for loading raw Zhongshan or Nansha datasets, applying a predefined preprocessing workflow (feature selection, NaN handling, encoding, scaling), and optionally reshaping data into sequences for TFT/XTFT models. Includes caching for processed data and sequences.
  • Feature Introduced n_samples and random_state parameters to fetch_zhongshan_data and fetch_nansha_data to allow loading the full sampled dataset or requesting a smaller, spatially stratified sub-sample.
  • Feature Added new synthetic data generators to fusionlab.datasets.make:
    • make_trend_seasonal_data: Generates univariate series with configurable trend and multiple seasonal components.
    • make_multivariate_target_data: Generates multi-series data with static/dynamic/future features and multiple, potentially interdependent, target variables.

API Changes & Enhancements

  • API Change Renamed NTemporalFusionTransformer to DummyTFT to better reflect its role as a simplified TFT variant (static and dynamic inputs only, primarily for point forecasts). The future_input_dim parameter is now accepted in DummyTFT.__init__ for API consistency but is internally ignored and a warning is issued.
  • Enhancement Major refactoring of input validation with the introduction of validate_model_inputs. This function provides:
    • Robust graph-compatible checks for tensor ranks, feature dimensions, batch sizes, and time dimension consistency using TensorFlow operations.
    • A mode parameter ('strict' or 'soft') to control validation depth.
    • Specialized internal helper (_validate_tft_flexible_inputs_soft_mode) to intelligently infer input roles for the flexible TemporalFusionTransformer (when model_name='tft_flex' and mode='soft').
    • Consistent return order of (static, dynamic, future) processed tensors, requiring updates in model call methods that use it.
  • Enhancement Improved forecast_tuner (xtft_tuner and its internal _model_builder_factory):
    • Correctly handles model_name options: "xtft", "superxtft", "tft" (stricter), and "tft_flex" (flexible TemporalFusionTransformer).
    • Ensures appropriate input validation path is chosen based on model_name before calling validate_model_inputs.
    • Passes only relevant parameters to model constructors, especially for the flexible TemporalFusionTransformer.
    • Correctly derives and passes input dimensions to the model builder, respecting None for optional inputs in tft_flex.
    • Robustly handles boolean hyperparameters (e.g., use_batch_norm, use_residuals) and list-like hyperparameters (e.g., scales) for Keras Tuner, ensuring correct type casting before model instantiation.
  • Enhancement Refined XTFT.call and SuperXTFT.call to use the align_temporal_dimensions helper. This ensures correct time alignment of inputs before they are passed to components like MultiModalEmbedding and HierarchicalAttention.
  • Enhancement Removed redundant concatenation of embeddings_with_pos in the final feature fusion stage of XTFT.call.
  • Enhancement Refined DummyTFT:
    • call: Now correctly uses validate_model_inputs for its two-input (static, dynamic) signature by passing appropriate parameters for future_covariate_dim (None) and model_name. Output layer logic for quantiles with output_dim > 1 now correctly stacks to (B, H, Q, O).
    • get_config: Includes _future_input_dim_config (what user passed) and output_dim.
  • Enhancement Made get_versions more resilient by attempting to import importlib_metadata as a fallback if importlib.metadata (Python 3.8+) is not found.
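The validation contract described above — rank and dimension checks plus a fixed (static, dynamic, future) return order — can be illustrated with a plain numpy sketch. The real validate_model_inputs operates on TensorFlow tensors with graph-compatible assertions (tf.rank, tf.debugging.assert_equal); the names and defaults below are assumptions for illustration only:

```python
import numpy as np

def validate_inputs(static, dynamic, future, static_dim=None, mode="strict"):
    """Check ranks/dims, then return tensors in (static, dynamic, future) order.

    In 'soft' mode checks are relaxed (here: skipped entirely), echoing
    the strict/soft split described in the release notes.
    """
    if mode == "strict":
        assert static.ndim == 2, "Static input must be 2D (batch, features)."
        assert dynamic.ndim == 3, "Dynamic input must be 3D (batch, time, features)."
        if future is not None:
            assert future.ndim == 3, "Future input must be 3D."
            assert future.shape[0] == dynamic.shape[0], "Batch sizes must match."
        if static_dim is not None:
            assert static.shape[-1] == static_dim, "Unexpected static feature dim."
    # Consistent return order, as model call methods now expect.
    return static, dynamic, future

s = np.zeros((8, 3))
d = np.zeros((8, 12, 5))
f = np.zeros((8, 18, 2))
out = validate_inputs(s, d, f, static_dim=3)
print([a.shape for a in out])  # [(8, 3), (8, 12, 5), (8, 18, 2)]
```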

Fixes

  • Fix Resolved AttributeError: 'Tensor' object has no attribute 'numpy' in input validation functions by replacing Python boolean conversions of symbolic tensors with TensorFlow graph-compatible assertions (e.g., tf.debugging.assert_equal).
  • Fix Addressed InvalidArgumentError: Static input must be 2D. Got rank X and similar rank/dimension mismatch errors in validate_model_inputs by using tf.rank and tf.shape consistently with tf.debugging.assert_equal.
  • Fix Corrected ValueError: Dimension 1 in both shapes must be equal... in MultiModalEmbedding and InvalidArgumentError: Incompatible shapes... [Op:AddV2] in HierarchicalAttention by ensuring time-aligned inputs are passed from model call methods (using align_temporal_dimensions).
  • Fix Fixed TypeError: A Choice can contain only int, float, str, or bool... and InvalidParameterError: ...must be an instance of 'bool'. Got 0/1... in _model_builder_factory of forecast_tuner.py. Boolean hyperparameters are now defined using hp.Choice with [True, False] values, and scales are handled using string options mapped to actual values. Explicit casting to bool is applied before model instantiation.
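The hyperparameter-typing fix can be sketched without Keras Tuner: since a Choice may only hold int/float/str/bool, list-like options such as scales are registered under string keys and mapped back to real values, and 0/1 values are cast to bool before model construction. The option keys and helper below are illustrative, not the fusionlab internals:

```python
# Map string hyperparameter choices back to the list values the model
# expects (keys are illustrative examples).
_SCALE_OPTIONS = {
    "no_scales": None,
    "scales_1_2": [1, 2],
    "scales_1_2_4": [1, 2, 4],
}

def resolve_hyperparams(hp_values):
    """Cast tuner-reported values to the types the model constructor expects."""
    return {
        # Explicit bool() guards against 0/1 ints leaking from the tuner.
        "use_batch_norm": bool(hp_values["use_batch_norm"]),
        "scales": _SCALE_OPTIONS[hp_values["scales"]],
    }

params = resolve_hyperparams({"use_batch_norm": 1, "scales": "scales_1_2"})
print(params)  # {'use_batch_norm': True, 'scales': [1, 2]}
```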

Tests

  • Tests Added comprehensive pytest suite for the revised validate_model_inputs covering different modes, input combinations, and error conditions.
  • Tests Updated pytest suite for forecast_tuner to test various model_name options and ensure correct parameter handling.
  • Tests Added pytest suite for DummyTFT.
  • Tests Updated pytest suite for reshape_xtft_data to fix minor issues and ensure save functionality with tmp_path.

Documentation

  • Docs Updated User Guide for fusionlab.datasets to include documentation for load_processed_subsidence_data and new data generation functions in make.py.
  • Docs Revised User Guide for fusionlab.nn.forecast_tuner with step-by-step examples.
  • Docs Updated API reference in api.rst to include new dataset functions.
  • Docs Corrected license information in license.rst to BSD-3-Clause.
  • Docs Updated README.md for Code Ocean capsule to emphasize Python version requirements and clarify data usage.



Release v0.1.1 (April 25, 2025)

This patch focuses on critical bug fixes and improved stability around graph execution and custom layer interactions in FusionLab.

🐛 Bug Fixes & Stability

  • GatedResidualNetwork & other components
    Converted activation strings to callables via tf.keras.activations.get() to eliminate TypeError: 'str' object is not callable.
  • GRN context broadcasting
    Added robust broadcasting logic (using tf.cond, tf.rank, tf.expand_dims) and removed problematic @tf.autograph.experimental.do_not_convert decorator to fix ValueError: Incompatible shapes.
  • GRN build method
    Avoided iterating over dynamic TensorShape, preventing ValueError: Cannot iterate over a shape with unknown rank.
  • VariableSelectionNetwork (VSN)
    Switched from Python loops & slicing to tf.unstack/tf.stack (or retained decorator-based loop fix) to resolve TypeError: list indices must be integers or slices, not SymbolicTensor.
  • Dense layer input shape
    Ensured internal GRNs are built with known shapes ahead of time to fix ValueError: The last dimension of the inputs to a Dense layer should be defined.
  • TFT TimeDistributed output
    Corrected 3D tensor slicing for quantile outputs, addressing ValueError: TimeDistributed Layer should be passed an input_shape with at least 3 dimensions.
  • Cleanup
    Removed unused use_time_distributed parameter from GatedResidualNetwork.__init__ and get_config.
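The string-to-callable pattern behind the first fix can be shown without TensorFlow (in Keras it is tf.keras.activations.get()); this tiny registry is a hedged stand-in, not the Keras implementation:

```python
import math

# Minimal activation registry: resolve a string name (or pass through an
# existing callable) so the layer never tries to call a raw string.
_ACTIVATIONS = {
    "relu": lambda x: max(0.0, x),
    "sigmoid": lambda x: 1.0 / (1.0 + math.exp(-x)),
    "linear": lambda x: x,
}

def get_activation(identifier):
    """Accept a string name or a callable; always return a callable."""
    if callable(identifier):
        return identifier
    try:
        return _ACTIVATIONS[identifier]
    except KeyError:
        raise ValueError(f"Unknown activation: {identifier!r}")

act = get_activation("relu")   # previously the raw string could be called,
print(act(-2.0), act(3.0))     # raising TypeError: 'str' object is not callable
```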

✅ Tests Added/Updated

  • Component tests for GatedResidualNetwork, VariableSelectionNetwork, TemporalAttentionLayer, TFT, and XTFT covering context handling, modes, training, and serialization.
  • Dataset tests for fusionlab.datasets.make functions.

🎉 Contributors

  • earthai-tech
