https://github.com/amazon-science/unconditional-time-series-diffusion

Official PyTorch implementation of TSDiff models presented in the NeurIPS 2023 paper "Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting"


Science Score: 36.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file (found)
  • .zenodo.json file (found)
  • DOI references
  • Academic publication links (links to arxiv.org)
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity (low similarity: 13.7%)

Keywords

diffusion-models neurips neurips-2023 pytorch time-series time-series-forecasting
Last synced: 5 months ago

Repository

Official PyTorch implementation of TSDiff models presented in the NeurIPS 2023 paper "Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting"

Basic Info
  • Host: GitHub
  • Owner: amazon-science
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 746 KB
Statistics
  • Stars: 203
  • Watchers: 5
  • Forks: 34
  • Open Issues: 8
  • Releases: 0
Topics
diffusion-models neurips neurips-2023 pytorch time-series time-series-forecasting
Created over 2 years ago · Last pushed about 2 years ago
Metadata Files
Readme Contributing License Code of conduct

README.md

TSDiff: An Unconditional Diffusion Model for Time Series

Preprint · License: Apache-2.0 · Venue: NeurIPS 2023


Fig. 1: An overview of TSDiff’s use cases. Predict: By utilizing observation self-guidance, TSDiff can be conditioned during inference to perform predictive tasks such as forecasting. Refine: Predictions of base forecasters can be improved by leveraging the implicit probability density of TSDiff. Synthesize: Realistic samples generated by TSDiff can be used to train downstream forecasters achieving good performance on real test data.


This repository contains the official implementation of the NeurIPS 2023 paper Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting. In this paper, we propose TSDiff, an unconditional diffusion model for time series. Our proposed self-guidance mechanism enables conditioning TSDiff for downstream tasks during inference, without requiring auxiliary networks or altering the training procedure. Furthermore, our refinement scheme leverages the implicit density learned by the diffusion model to iteratively refine the predictions of base forecasters. Finally, we demonstrate the high quality of the synthetic time series by training downstream models solely on generated data and introduce the Linear Predictive Score (LPS).


Fig. 2: Example forecasts generated by TSDiff-Q for time series in Electricity, KDDCup, and Exchange — three datasets with different frequencies and/or prediction lengths.

Installation

TSDiff requires Python 3.8 or higher.

  • Create a conda environment (optional, but recommended).

```sh
conda create --name tsdiff --yes python=3.8 && conda activate tsdiff
```

  • Install this package.

```sh
pip install --editable "."
```
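The config files in this repository refer to GluonTS dataset names such as solar_nips, m4_hourly, uber_tlc_hourly, and kdd_cup_2018_without_missing. As an optional sanity check that the environment is set up, the sketch below (assuming GluonTS was installed as a dependency of this package) downloads one of these datasets and prints its basic metadata:

```python
# Optional sanity check: load one of the GluonTS datasets referenced by the
# configs in this repository and print its metadata. Assumes GluonTS is
# available in the environment (it is pulled in as a dependency).
from gluonts.dataset.repository.datasets import get_dataset

dataset = get_dataset("solar_nips")  # downloaded and cached on first use
print(dataset.metadata.freq)               # sampling frequency, e.g. "H"
print(dataset.metadata.prediction_length)  # default forecast horizon
print(len(list(dataset.train)))            # number of training series
```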

[!TIP]
We have some updates in the update branch. If you're interested in trying out TSDiff or training it on a custom dataset, the update branch may be faster for training.

Usage

Training Models

Train models using the train_model.py and train_cond_model.py scripts for TSDiff and TSDiff-Cond, respectively. Sample configurations can be found in configs/train_tsdiff.yaml and configs/train_tsdiff-cond.yaml. Specific configurations used in the paper can be found in configs/train_tsdiff and configs/train_tsdiff-cond.

Example commands for regular (i.e., no missing values) forecasting:

```sh
# Train TSDiff on the Uber dataset for regular forecasting
python bin/train_model.py -c configs/train_tsdiff/train_uber_tlc.yaml

# Train TSDiff on the M4 dataset for regular forecasting
python bin/train_model.py -c configs/train_tsdiff/train_m4.yaml

# Train TSDiff-Cond on the Uber dataset for regular forecasting
python bin/train_cond_model.py -c configs/train_tsdiff-cond/uber_tlc_hourly.yaml

# Train TSDiff-Cond on the M4 dataset for regular forecasting
python bin/train_cond_model.py -c configs/train_tsdiff-cond/m4_hourly.yaml
```

Example commands for forecasting with missing values:

```sh
# Train TSDiff on the Uber dataset for the missing values experiment
python bin/train_model.py -c configs/train_tsdiff/train_missing_uber_tlc.yaml

# Train TSDiff on the KDDCup dataset for the missing values experiment
python bin/train_model.py -c configs/train_tsdiff/train_missing_kdd_cup.yaml

# Train TSDiff-Cond on the Uber dataset for the RM missing values experiment
python bin/train_cond_model.py -c configs/train_tsdiff-cond/missing_RM_uber_tlc_hourly.yaml

# Train TSDiff-Cond on the KDDCup dataset for the BM-B missing values experiment
python bin/train_cond_model.py -c configs/train_tsdiff-cond/missing_BM-B_kdd_cup_2018_without_missing.yaml

# Train TSDiff-Cond on the KDDCup dataset for the BM-E missing values experiment
python bin/train_cond_model.py -c configs/train_tsdiff-cond/missing_BM-E_kdd_cup_2018_without_missing.yaml
```

Note that for TSDiff we train only one model; all missing value scenarios are evaluated with the same unconditional model. For TSDiff-Cond, however, one model is trained per missingness scenario.
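The scenario tags in these config names presumably stand for random missing (RM) and blackout missing blocks at the beginning (BM-B) or end (BM-E) of the context window; see the paper for the exact definitions. Purely for illustration (the repository's own data pipeline implements the real masking), hypothetical masks for the three patterns could look like this:

```python
# Illustrative only: hypothetical masks for the three missingness scenarios.
# 1 = observed, 0 = missing. The repository's data pipeline defines the actual
# masking; this sketch just visualizes the three patterns.
import numpy as np

rng = np.random.default_rng(0)
context_length = 24
missing_ratio = 0.5
n_missing = int(missing_ratio * context_length)

# RM: randomly scattered missing values
rm_mask = np.ones(context_length, dtype=int)
rm_mask[rng.choice(context_length, size=n_missing, replace=False)] = 0

# BM-B: a contiguous blackout block at the beginning of the context
bmb_mask = np.ones(context_length, dtype=int)
bmb_mask[:n_missing] = 0

# BM-E: a contiguous blackout block at the end of the context
bme_mask = np.ones(context_length, dtype=int)
bme_mask[-n_missing:] = 0

print("RM  :", rm_mask)
print("BM-B:", bmb_mask)
print("BM-E:", bme_mask)
```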

Evaluating Models

The unconditional models trained above can be used for the following tasks.

Predict using Observation Self-Guidance

Use the guidance_experiment.py script and configs/guidance.yaml config to run the forecasting experiments. Specific configurations used in the paper can be found in configs/guidance/.

Example commands:

```sh
# Run observation self-guidance on the Solar dataset
python bin/guidance_experiment.py -c configs/guidance/guidance_solar.yaml --ckpt /path/to/ckpt

# Run observation self-guidance on the KDDCup dataset
python bin/guidance_experiment.py -c configs/guidance/guidance_kdd_cup.yaml --ckpt /path/to/ckpt
```
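Conceptually, observation self-guidance conditions the unconditional model at inference time by nudging each reverse-diffusion step toward samples whose denoised estimate agrees with the observed context, without any auxiliary network. The following is a generic, self-contained sketch of reconstruction-style guidance with a toy untrained denoiser; it only illustrates the idea and is not the repository's implementation (see the paper for the actual guidance mechanism):

```python
# Generic illustration of guidance during reverse diffusion (NOT the repo's code).
# A toy epsilon-predictor stands in for TSDiff; the guidance term nudges each
# reverse step toward samples whose denoised estimate matches the observed values.
import torch

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

length = 48
denoiser = torch.nn.Sequential(  # toy epsilon-predictor, untrained
    torch.nn.Linear(length + 1, 64), torch.nn.ReLU(), torch.nn.Linear(64, length)
)

# Observed prefix of the series and a mask marking which positions are observed.
observed = torch.sin(torch.linspace(0, 6.28, length))
obs_mask = torch.zeros(length)
obs_mask[:36] = 1.0  # first 36 steps observed, last 12 to be forecast

guidance_scale = 1.0
x = torch.randn(1, length)  # start from pure noise

for t in reversed(range(T)):
    x = x.detach().requires_grad_(True)
    t_emb = torch.full((1, 1), t / T)
    eps = denoiser(torch.cat([x, t_emb], dim=-1))

    # Model's own denoised estimate of x_0 at this step.
    x0_hat = (x - torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alpha_bars[t])

    # Guidance: gradient of the squared error on observed positions.
    err = ((x0_hat - observed) * obs_mask).pow(2).sum()
    grad = torch.autograd.grad(err, x)[0]

    # Standard DDPM mean, shifted against the guidance gradient.
    mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
    mean = mean - guidance_scale * grad
    noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
    x = (mean + torch.sqrt(betas[t]) * noise).detach()

print(x.shape)  # (1, 48): one guided sample over the full window
```

With a trained model, repeating this loop yields multiple guided samples whose forecast portions form a probabilistic forecast; the guidance scale trades off fidelity to the observations against sample diversity.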

Refine Predictions of Base Forecasters

Use the refinement_experiment.py script and the configs/refinement.yaml config to run the refinement experiments. Specific configurations used in the paper can be found in configs/refinement/.

Example commands:

```sh
# Refine predictions from the Linear model on the Solar dataset
python bin/refinement_experiment.py -c configs/refinement/solar_nips-linear.yaml --ckpt /path/to/ckpt

# Refine predictions from the DeepAR model on the M4 dataset
python bin/refinement_experiment.py -c configs/refinement/m4_hourly-deepar.yaml --ckpt /path/to/ckpt
```
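The refinement scheme uses the implicit density learned by the unconditional model to improve a base forecaster's predictions. As a loose illustration of that idea, and not the repository's algorithm, the sketch below treats the denoising error of a toy untrained epsilon-predictor as an energy and takes gradient steps on the forecast while penalizing deviation from the base forecast:

```python
# Loose illustration of "refining" an initial forecast with a diffusion-style
# energy (NOT the repository's algorithm). The one-step denoising error of a toy
# epsilon-predictor serves as an energy proxy; gradient descent lowers it while
# a quadratic penalty keeps the refined forecast close to the base forecast.
import torch

length, horizon = 48, 12
denoiser = torch.nn.Sequential(  # toy epsilon-predictor, untrained
    torch.nn.Linear(length, 64), torch.nn.ReLU(), torch.nn.Linear(64, length)
)

context = torch.sin(torch.linspace(0, 4.0, length - horizon))
base_forecast = torch.zeros(horizon)  # stand-in for a base forecaster's output
refined = base_forecast.clone().requires_grad_(True)
optimizer = torch.optim.Adam([refined], lr=0.05)

alpha_bar = torch.tensor(0.7)  # one fixed mid-range noise level, for illustration
for step in range(50):
    optimizer.zero_grad()
    series = torch.cat([context, refined]).unsqueeze(0)
    noise = torch.randn_like(series)
    noisy = torch.sqrt(alpha_bar) * series + torch.sqrt(1 - alpha_bar) * noise
    energy = (denoiser(noisy) - noise).pow(2).mean()   # denoising error as energy
    penalty = (refined - base_forecast).pow(2).mean()  # stay near the base forecast
    (energy + 0.1 * penalty).backward()
    optimizer.step()

print(refined.detach())
```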

Train Downstream Models using Synthetic Data

Use the tstr_experiment.py script and the configs/tstr.yaml config to run the train-on-synthetic, test-on-real (TSTR) experiments. Specific configurations used in the paper can be found in configs/tstr/.

Example commands:

```sh
# TSTR on the Solar dataset
python bin/tstr_experiment.py -c configs/tstr/solar_nips.yaml --ckpt /path/to/ckpt

# TSTR on the KDDCup dataset
python bin/tstr_experiment.py -c configs/tstr/kdd_cup_2018_without_missing.yaml --ckpt /path/to/ckpt
```
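Here, TSTR means training a downstream forecaster purely on synthetic samples and testing it on real data; the Linear Predictive Score is, roughly, the test performance of a simple linear forecaster trained this way (see the paper for the precise definition). Below is a minimal, self-contained TSTR sketch with scikit-learn and made-up stand-in data, not the repository's evaluation code:

```python
# Rough train-on-synthetic, test-on-real (TSTR) sketch (NOT the repo's evaluation
# code). A linear ridge forecaster is fit on lagged windows of "synthetic" series
# and scored on "real" test series; lower test error suggests more useful synthetic data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
context, horizon = 24, 8

def windows(series, context, horizon):
    """Slice a 1-D series into (lagged context, future horizon) pairs."""
    X, Y = [], []
    for i in range(len(series) - context - horizon + 1):
        X.append(series[i : i + context])
        Y.append(series[i + context : i + context + horizon])
    return np.array(X), np.array(Y)

# Stand-ins: "synthetic" series as if sampled from TSDiff, "real" held-out series.
synthetic = [np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.normal(size=200) for _ in range(16)]
real_test = [np.sin(np.linspace(0.5, 20.5, 200)) + 0.1 * rng.normal(size=200) for _ in range(4)]

X_train = np.concatenate([windows(s, context, horizon)[0] for s in synthetic])
Y_train = np.concatenate([windows(s, context, horizon)[1] for s in synthetic])
X_test = np.concatenate([windows(s, context, horizon)[0] for s in real_test])
Y_test = np.concatenate([windows(s, context, horizon)[1] for s in real_test])

model = Ridge(alpha=1.0).fit(X_train, Y_train)
mse = np.mean((model.predict(X_test) - Y_test) ** 2)
print(f"train-on-synthetic / test-on-real MSE: {mse:.4f}")
```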

BibTeX

If you find this repository or the ideas presented in our paper useful, please consider citing.

```bibtex
@inproceedings{kollovieh2023predict,
  author    = {Kollovieh, Marcel and Ansari, Abdul Fatir and Bohlke-Schneider, Michael and Zschiegner, Jasper and Wang, Hao and Wang, Yuyang},
  title     = {Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2023}
}
```

Security

See CONTRIBUTING for more information.

License

This project is licensed under the Apache-2.0 License.

Owner

  • Name: Amazon Science
  • Login: amazon-science
  • Kind: organization

GitHub Events

Total
  • Issues event: 1
  • Watch event: 74
  • Issue comment event: 2
  • Fork event: 8
Last Year
  • Issues event: 1
  • Watch event: 74
  • Issue comment event: 2
  • Fork event: 8

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 19
  • Total pull requests: 3
  • Average time to close issues: about 2 months
  • Average time to close pull requests: about 1 hour
  • Total issue authors: 11
  • Total pull request authors: 2
  • Average comments per issue: 2.25
  • Average comments per pull request: 0.0
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 20
  • Pull requests: 3
  • Average time to close issues: about 2 months
  • Average time to close pull requests: about 1 hour
  • Issue authors: 11
  • Pull request authors: 2
  • Average comments per issue: 2.25
  • Average comments per pull request: 0.0
  • Merged pull requests: 2
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • zzkkzz (2)
  • jimmylihui (1)
  • tomyjara (1)
  • Rekoko (1)
  • hmnza (1)
  • DennisZZQ (1)
  • hanlaoshi (1)
  • ClaraGrthns (1)
  • annis234 (1)
  • xiyuanssxx (1)
  • jm02058 (1)
  • hanli997 (1)
Pull Request Authors
  • abdulfatir (3)