https://github.com/SepKfr/Coarse-and-Fine-Grained-Forecasting-Via-GP-Blurring-Effect
Forecast-blur-denoise forecasting model with PyTorch
Science Score: 20.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ○ codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ✓ Committers with academic emails (1 of 2 committers, 50.0%, from academic institutions)
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity, 14.3%, to scientific vocabulary)
Basic Info
Statistics
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
Forecastblurdenoise Package
Forecastblurdenoise is a PyTorch-based package accompanying the research paper Fine-grained Forecasting Models Via Gaussian Process Blurring Effect.
Methodology: The core methodology involves training the blur model parameters end-to-end with forecasting and denoising components. This unique approach enables the underlying forecasting model to learn coarse-grained patterns, while the denoising forecaster fills in fine-grained details. The results demonstrate significant improvements over state-of-the-art models like Autoformer and Informer.
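To make the training flow concrete, here is a minimal, hedged sketch of the idea under the simplest (scaled isotropic noise) blur option. All names are illustrative stand-ins, not the package's API: the forecaster makes a coarse prediction, a blur corrupts it, and the denoiser is trained jointly with it to recover fine-grained detail.

```python
import torch
from torch import nn

# Illustrative stand-ins for the forecaster and denoiser (any seq2seq model works)
forecaster, denoiser = nn.Linear(8, 8), nn.Linear(8, 8)
opt = torch.optim.Adam(list(forecaster.parameters()) + list(denoiser.parameters()), lr=1e-3)

x, y = torch.randn(4, 8), torch.randn(4, 8)        # toy inputs and targets
coarse = forecaster(x)                             # coarse-grained prediction
blurred = coarse + 0.1 * torch.randn_like(coarse)  # "blur": scaled isotropic noise
fine = denoiser(blurred)                           # denoiser fills in fine-grained detail
loss = nn.functional.mse_loss(fine, y)             # one loss trains both components end-to-end
opt.zero_grad(); loss.backward(); opt.step()
```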
This package provides:
- The forecast-blur-denoise framework, which can plug in any state-of-the-art neural time-series forecasting model as the forecaster and denoiser.
- Three options for the blur model: Gaussian Process (GP), scaled isotropic noise, and no noise (denoising performed directly on predictions); a hedged GP sketch follows this list.
- A data loader module that works with the pre-processed datasets provided on Google Drive.
- A forecasting model example (Autoformer).
- Hyperparameter tuning with Optuna.
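For the GP blur option, one plausible reading (a sketch under stated assumptions, not the package's implementation) is to sample temporally correlated noise from a sparse variational GP over the prediction horizon, with learnable inducing points; `num_inducing` below mirrors the package's parameter of the same name.

```python
import torch
import gpytorch

# A standard sparse variational GP (SVGP); sampling it over a time grid yields
# smooth, correlated noise - one way to realize a "blurring" effect.
class BlurGP(gpytorch.models.ApproximateGP):
    def __init__(self, inducing_points):
        var_dist = gpytorch.variational.CholeskyVariationalDistribution(inducing_points.size(0))
        strategy = gpytorch.variational.VariationalStrategy(
            self, inducing_points, var_dist, learn_inducing_locations=True)
        super().__init__(strategy)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x))

num_inducing = 32                                # mirrors the package's num_inducing
t = torch.linspace(0, 1, 96).unsqueeze(-1)       # time grid for a length-96 horizon
gp = BlurGP(inducing_points=t[:num_inducing].clone())
smooth_noise = gp(t).rsample()                   # one smooth noise sample over the horizon
```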
Datasets
This repository provides Google Drive links to six pre-processed datasets: Datasets
Installation
To install, run one of the following:

```bash
pip install forecastblurdenoise==1.0.7
conda install sepkfr::forecastblurdenoise
```
Usage Example
Run the script for a toy-dataset example:

```bash
./example_usage --exp_name toy_data
```
Command Line Args

```text
- exp_name (str): Name of the experiment (dataset).
- forecasting_model_name (str): Name of the forecasting model.
- n_jobs (int): Total number of jobs for Optuna.
- num_epochs (int): Total number of epochs.
- forecasting_model (nn.Module): The underlying forecasting model.
- train (DataLoader): DataLoader for training data.
- valid (DataLoader): DataLoader for validation data.
- test (DataLoader): DataLoader for test data.
- noise_type (str): Type of noise to be added during denoising ('gp', 'iso', 'no_noise').
- add_noise_only_at_training (bool): Flag indicating whether to add noise only during training.
- src_input_size (int): Size of the source input.
- tgt_input_size (int): Size of the target input.
- tgt_output_size (int): Size of the target output.
- pred_len (int): Length of the prediction horizon.
- num_inducing (int): Number of inducing points for GP regression.
- hyperparameters (dict): Hyperparameters to be optimized.
- args: Command line arguments.
- seed (int): Random seed for reproducibility.
- device: Device on which to run the training.
```
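The n_jobs and hyperparameters arguments drive an Optuna search. As a hedged illustration of how a search space shaped like the hyperparameters dict can feed Optuna (the objective function below is invented for the example, not the package's internals):

```python
import optuna

# Search space in the same shape the package expects: name -> candidate values
hyperparameters = {"d_model": [16, 32], "n_layers": [1, 2], "lr": [0.01, 0.001]}

def objective(trial):
    # Pick one candidate per hyperparameter; a real objective would build and
    # train the model with cfg and return its validation loss.
    cfg = {name: trial.suggest_categorical(name, choices)
           for name, choices in hyperparameters.items()}
    return float(cfg["lr"])  # placeholder for a validation loss

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=8, n_jobs=1)  # n_jobs parallelizes trials
print(study.best_params)
```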
Run as a Library
```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from forecastblurdenoise.train_forecast_blur_denoise import TrainForecastBlurDenoise

# Define a simple LSTM forecaster
class LSTM(nn.Module):
    def __init__(self, n_layers, hidden_size):
        super(LSTM, self).__init__()
        self.encoder_lstm = nn.LSTM(hidden_size, hidden_size, n_layers)
        self.decoder_lstm = nn.LSTM(hidden_size, hidden_size, n_layers)
        self.n_layers = n_layers
        self.hidden_size = hidden_size

    def forward(self, input_encoder, input_decoder):
        enc_outputs, _ = self.encoder_lstm(input_encoder)
        dec_outputs, _ = self.decoder_lstm(input_decoder)  # fixed: was encoder_lstm
        return enc_outputs, dec_outputs

# Create a toy dataset: encoder input, decoder input, and output (ground truth)
def create_time_series_data(num_samples, input_sequence_length, output_sequence_length,
                            input_size, output_size, device):
    return TensorDataset(
        torch.randn(num_samples, input_sequence_length, input_size, device=device),
        torch.randn(num_samples, output_sequence_length, input_size, device=device),
        torch.randn(num_samples, output_sequence_length, output_size, device=device))

# Set parameters
num_samples_train, num_samples_valid, num_samples_test = 32, 8, 8
input_sequence_length, output_sequence_length = 96, 96
batch_size, input_size, output_size = 4, 5, 1
cuda = "cuda:0"

device = torch.device(cuda if torch.cuda.is_available() else "cpu")

train_dataset = create_time_series_data(num_samples_train, input_sequence_length,
                                        output_sequence_length, input_size, output_size, device)
valid_dataset = create_time_series_data(num_samples_valid, input_sequence_length,
                                        output_sequence_length, input_size, output_size, device)
test_dataset = create_time_series_data(num_samples_test, input_sequence_length,
                                       output_sequence_length, input_size, output_size, device)

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
valid_loader = DataLoader(valid_dataset, batch_size=batch_size, shuffle=False)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False)

forecasting_model = LSTM(n_layers=1, hidden_size=32)

# Hyperparameter search space (change accordingly)
hyperparameters = {"d_model": [16, 32], "n_layers": [1, 2], "lr": [0.01, 0.001]}

# Initialize the ForecastBlurDenoise model for training with Optuna
train_forecast_denoise = TrainForecastBlurDenoise(
    forecasting_model=forecasting_model, train=train_loader, valid=valid_loader,
    test=test_loader, noise_type="gp", num_inducing=32, add_noise_only_at_training=False,
    input_size=input_size, output_size=output_size, pred_len=96,
    hyperparameters=hyperparameters, seed=1234, device=device)

# Train the forecast-blur-denoise model end-to-end
train_forecast_denoise.train()

# Evaluate and save MSE and MAE results in a csv file named reported_errors_{exp_name}.csv
train_forecast_denoise.evaluate()
```
Citation
If you are interested in using our forecastblurdenoise model for your forecasting problem, cite our paper as:
```bibtex
@misc{koohfar2023finegrained,
      title={Fine-grained Forecasting Models Via Gaussian Process Blurring Effect},
      author={Sepideh Koohfar and Laura Dietz},
      year={2023},
      eprint={2312.14280},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```
Owner
- Name: Sepideh Koohfar
- Login: SepKfr
- Kind: user
- Repositories: 10
- Profile: https://github.com/SepKfr
Committers
Last synced: almost 2 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| sepidekoohfar | s****r@u****u | 72 |
| SepKfr | 1****r | 1 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Dependencies
- gpytorch >=1.9.0
- numpy >=1.23.5
- optuna >=3.3.0
- pandas >=1.5.2
- python >=3.9
- torch >=2.0.1