https://github.com/longxingtan/time-series-prediction
tfts: Time Series Deep Learning Models in TensorFlow
Science Score: 26.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.7%) to scientific vocabulary
Keywords
Repository
tfts: Time Series Deep Learning Models in TensorFlow
Basic Info
- Host: GitHub
- Owner: LongxingTan
- License: mit
- Language: Python
- Default Branch: master
- Homepage: https://time-series-prediction.readthedocs.io/en/latest/
- Size: 3.07 MB
Statistics
- Stars: 857
- Watchers: 22
- Forks: 169
- Open Issues: 12
- Releases: 14
Topics
Metadata Files
README.md
Documentation | Tutorials | Release Notes | Chinese (中文)
TFTS (TensorFlow Time Series) is an easy-to-use time series package, supporting the classical and latest deep learning methods in TensorFlow or Keras.
- Supports state-of-the-art (SOTA) models for time series tasks (prediction, classification, anomaly detection)
- Provides advanced deep learning models for industry, research, and competition
- Documentation lives at time-series-prediction.readthedocs.io
Tutorial
Installation
- python >= 3.7
- tensorflow >= 2.4
```shell
pip install tfts
```
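A quick way to confirm the install (a minimal sketch using only the standard library and TensorFlow; no tfts-specific version API is assumed):

```python
# Sanity check after `pip install tfts`: verify the package imports and that
# TensorFlow meets the >= 2.4 requirement. importlib.metadata (Python 3.8+)
# reads the installed distribution metadata, so no tfts attribute is assumed.
from importlib.metadata import version

import tensorflow as tf
import tfts  # noqa: F401  (import only to verify the package loads)

print("tfts version:", version("tfts"))
print("tensorflow version:", tf.__version__)
```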
Quick start
```python
import matplotlib.pyplot as plt
import tensorflow as tf
import tfts
from tfts import AutoModel, AutoConfig, KerasTrainer

train_length = 24
predict_sequence_length = 8
(x_train, y_train), (x_valid, y_valid) = tfts.get_data("sine", train_length, predict_sequence_length, test_size=0.2)

model_name_or_path = 'seq2seq'  # 'wavenet', 'transformer', 'rnn', 'tcn', 'bert', 'dlinear', 'nbeats', 'informer', 'autoformer'
config = AutoConfig.for_model(model_name_or_path)
model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
trainer = KerasTrainer(model, optimizer=tf.keras.optimizers.Adam(0.0007))
trainer.train((x_train, y_train), (x_valid, y_valid), epochs=30)

pred = trainer.predict(x_valid)
trainer.plot(history=x_valid, true=y_valid, pred=pred)
plt.show()
```
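Beyond the plot, a numeric error on the held-out window can be computed directly from the arrays above. A minimal sketch with plain NumPy (no extra tfts API assumed; the shape comment reflects the data returned by `tfts.get_data` above):

```python
import numpy as np

# pred and y_valid both have shape (batch, predict_sequence_length, 1)
pred = np.asarray(pred)
rmse = np.sqrt(np.mean((pred - y_valid) ** 2))
mae = np.mean(np.abs(pred - y_valid))
print(f"validation RMSE: {rmse:.4f}, MAE: {mae:.4f}")
```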
Prepare your own data
You can train on your own data by preparing 3D arrays for both inputs and targets, passed as either:
- Option 1: `np.ndarray`
- Option 2: `tf.data.Dataset`
Encoder-only model inputs
```python
import numpy as np
from tfts import AutoConfig, AutoModel, KerasTrainer

train_length = 24
predict_sequence_length = 8
n_feature = 2

x_train = np.random.rand(1, train_length, n_feature)  # inputs: (batch, train_length, feature)
y_train = np.random.rand(1, predict_sequence_length, 1)  # target: (batch, predict_sequence_length, 1)
x_valid = np.random.rand(1, train_length, n_feature)
y_valid = np.random.rand(1, predict_sequence_length, 1)

config = AutoConfig.for_model('rnn')
model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
trainer = KerasTrainer(model)
trainer.train(train_dataset=(x_train, y_train), valid_dataset=(x_valid, y_valid), epochs=1)
```
Encoder-decoder model inputs
```python
# option 1: np.ndarray
import numpy as np
from tfts import AutoConfig, AutoModel, KerasTrainer

train_length = 24
predict_sequence_length = 8
n_encoder_feature = 2
n_decoder_feature = 3

x_train = (
    np.random.rand(1, train_length, 1),  # inputs: (batch, train_length, 1)
    np.random.rand(1, train_length, n_encoder_feature),  # encoder_feature: (batch, train_length, n_encoder_feature)
    np.random.rand(1, predict_sequence_length, n_decoder_feature),  # decoder_feature: (batch, predict_sequence_length, n_decoder_feature)
)
y_train = np.random.rand(1, predict_sequence_length, 1)  # target: (batch, predict_sequence_length, 1)

x_valid = (
    np.random.rand(1, train_length, 1),
    np.random.rand(1, train_length, n_encoder_feature),
    np.random.rand(1, predict_sequence_length, n_decoder_feature),
)
y_valid = np.random.rand(1, predict_sequence_length, 1)

config = AutoConfig.for_model("seq2seq")
model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
trainer = KerasTrainer(model)
trainer.train((x_train, y_train), (x_valid, y_valid), epochs=1)
```
```python
# option 2: tf.data.Dataset
import numpy as np
import tensorflow as tf
from tfts import AutoConfig, AutoModel, KerasTrainer


class FakeReader(object):
    def __init__(self, predict_sequence_length):
        train_length = 24
        n_encoder_feature = 2
        n_decoder_feature = 3
        self.x = np.random.rand(15, train_length, 1)
        self.encoder_feature = np.random.rand(15, train_length, n_encoder_feature)
        self.decoder_feature = np.random.rand(15, predict_sequence_length, n_decoder_feature)
        self.target = np.random.rand(15, predict_sequence_length, 1)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return {
            "x": self.x[idx],
            "encoder_feature": self.encoder_feature[idx],
            "decoder_feature": self.decoder_feature[idx],
        }, self.target[idx]

    def iter(self):
        for i in range(len(self.x)):
            yield self[i]


predict_sequence_length = 10
train_reader = FakeReader(predict_sequence_length=predict_sequence_length)
train_loader = tf.data.Dataset.from_generator(
    train_reader.iter,
    ({"x": tf.float32, "encoder_feature": tf.float32, "decoder_feature": tf.float32}, tf.float32),
)
train_loader = train_loader.batch(batch_size=1)
valid_reader = FakeReader(predict_sequence_length=predict_sequence_length)
valid_loader = tf.data.Dataset.from_generator(
    valid_reader.iter,
    ({"x": tf.float32, "encoder_feature": tf.float32, "decoder_feature": tf.float32}, tf.float32),
)
valid_loader = valid_loader.batch(batch_size=1)

config = AutoConfig.for_model("seq2seq")
model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
trainer = KerasTrainer(model)
trainer.train(train_dataset=train_loader, valid_dataset=valid_loader, epochs=1)
```
Prepare custom model config
```python
from tfts import AutoModel, AutoConfig

config = AutoConfig.for_model('rnn')
print(config)
config.rnn_hidden_size = 128

model = AutoModel.from_config(config, predict_sequence_length=7)
```
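Because the config is a plain object with attributes, it can be combined with a hyperparameter search. Below is a hedged sketch using Optuna (listed in the project's dependencies) to tune `rnn_hidden_size` and the learning rate; the tfts calls mirror the Quick start above and the attribute name is the one shown in the config example, but the exact set of tunable fields may differ by model.

```python
import numpy as np
import optuna
import tensorflow as tf
import tfts
from tfts import AutoConfig, AutoModel, KerasTrainer

train_length = 24
predict_sequence_length = 8
(x_train, y_train), (x_valid, y_valid) = tfts.get_data("sine", train_length, predict_sequence_length, test_size=0.2)


def objective(trial):
    # Assumed tunables: the hidden size attribute shown above and the Adam learning rate.
    config = AutoConfig.for_model("rnn")
    config.rnn_hidden_size = trial.suggest_categorical("rnn_hidden_size", [32, 64, 128])
    learning_rate = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)

    model = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
    trainer = KerasTrainer(model, optimizer=tf.keras.optimizers.Adam(learning_rate))
    trainer.train((x_train, y_train), (x_valid, y_valid), epochs=5)

    pred = np.asarray(trainer.predict(x_valid))
    return float(np.sqrt(np.mean((pred - y_valid) ** 2)))  # validation RMSE to minimize


study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=10)
print(study.best_params)
```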
Build your own model
Full list of models supported by tfts AutoModel:
- rnn
- tcn
- bert
- nbeats
- dlinear
- seq2seq
- wavenet
- transformer
- informer
- autoformer

You can build a custom model on top of tfts, for example:
- add custom-defined embeddings for categorical variables
- add custom-defined head layers for classification or anomaly detection tasks
```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense
from tfts import AutoModel, AutoConfig

train_length = 24
num_train_features = 15
predict_sequence_length = 8


def build_model():
    inputs = Input([train_length, num_train_features])
    config = AutoConfig.for_model("seq2seq")
    backbone = AutoModel.from_config(config, predict_sequence_length=predict_sequence_length)
    outputs = backbone(inputs)
    outputs = Dense(1, activation="sigmoid")(outputs)
    model = tf.keras.Model(inputs=inputs, outputs=outputs)
    model.compile(loss="mse", optimizer="rmsprop")
    return model
```
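A possible way to exercise the wrapper above with the standard Keras fit/predict loop (a sketch on random data; the shapes, and the assumption that the seq2seq backbone emits one value per prediction step, are taken from the earlier sections rather than stated here):

```python
import numpy as np

# Illustrative run on random data with the shapes assumed above;
# targets lie in [0, 1] to match the sigmoid head.
x = np.random.rand(16, train_length, num_train_features).astype("float32")
y = np.random.rand(16, predict_sequence_length, 1).astype("float32")

model = build_model()
model.summary()
model.fit(x, y, batch_size=4, epochs=2, validation_split=0.25)

pred = model.predict(x[:2])
print(pred.shape)  # expected: (2, predict_sequence_length, 1)
```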
Examples
- TFTS-Bert won 3rd place in the KDD Cup 2022 wind power forecasting competition
- TFTS-Seq2seq won 4th place in the Tianchi ENSO index prediction 2021 competition
- More examples ...
Citation
If you find the tfts project useful in your research, please consider citing it:
```
@misc{tfts2020,
  author = {Longxing Tan},
  title = {Time series prediction},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/longxingtan/time-series-prediction}},
}
```
Owner
- Login: LongxingTan
- Kind: user
- Location: China
- Repositories: 6
- Profile: https://github.com/LongxingTan
A slow researcher
GitHub Events
Total
- Create event: 21
- Issues event: 1
- Release event: 5
- Watch event: 45
- Delete event: 18
- Issue comment event: 21
- Push event: 228
- Pull request event: 34
- Fork event: 6
Last Year
- Create event: 21
- Issues event: 1
- Release event: 5
- Watch event: 45
- Delete event: 18
- Issue comment event: 21
- Push event: 228
- Pull request event: 34
- Fork event: 6
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| LongxingTan | t****8@1****m | 60 |
| Longxing Tan | l****n@L****l | 4 |
| Hongying Yue | 4****e | 1 |
| Longxing Tan | l****n@t****l | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 20
- Total pull requests: 60
- Average time to close issues: 9 months
- Average time to close pull requests: 4 days
- Total issue authors: 19
- Total pull request authors: 4
- Average comments per issue: 2.6
- Average comments per pull request: 0.7
- Merged pull requests: 49
- Bot issues: 0
- Bot pull requests: 5
Past Year
- Issues: 1
- Pull requests: 29
- Average time to close issues: N/A
- Average time to close pull requests: about 3 hours
- Issue authors: 1
- Pull request authors: 3
- Average comments per issue: 0.0
- Average comments per pull request: 0.69
- Merged pull requests: 27
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- forestbat (2)
- raphaelcp (1)
- sifuhr (1)
- XionZi (1)
- chamberSccc (1)
- zhangxjohn (1)
- MichaelRinger (1)
- Dylan-Dyb (1)
- pavelxx1 (1)
- swift88-clone (1)
- gitgud (1)
- Albert0sans (1)
- Nodon447 (1)
- helonin (1)
- dingchaoyue (1)
Pull Request Authors
- LongxingTan (51)
- dependabot[bot] (5)
- hongyingyue (2)
- imu1984 (2)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 157 last month (PyPI)
- Total docker downloads: 343
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 20
- Total maintainers: 1
pypi.org: tfts
Deep learning time series with TensorFlow
- Homepage: https://time-series-prediction.readthedocs.io
- Documentation: https://time-series-prediction.readthedocs.io
- License: MIT License
- Latest release: 0.0.19 (published 7 months ago)
Rankings
Maintainers (1)
Dependencies
- actions/checkout v3 composite
- github/codeql-action/analyze v2 composite
- github/codeql-action/autobuild v2 composite
- github/codeql-action/init v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/cache v2 composite
- actions/checkout v2 composite
- actions/setup-python v2 composite
- actions/setup-python v1 composite
- actions/upload-artifact v2 composite
- codecov/codecov-action v3 composite
- tensorflow/tensorflow 2.8.3-gpu build
- cloudpickle *
- docutils *
- ipython *
- matplotlib *
- nbconvert >=6.3.0
- nbsphinx *
- optuna >=2.0
- pandas ==1.1.5
- pandoc *
- pydata_sphinx_theme *
- recommonmark >=0.7.1
- scikit-learn >0.23
- sphinx >3.2
- sphinx-autobuild *
- sphinx_markdown_tables *
- tensorflow ==2.7.1
- 165 dependencies
- black * develop
- coverage * develop
- flake8 * develop
- invoke * develop
- ipykernel * develop
- ipywidgets ^8.0.1 develop
- isort * develop
- mypy * develop
- nbsphinx * develop
- pre-commit ^2.20.0 develop
- pydata-sphinx-theme * develop
- pylint * develop
- recommonmark * develop
- sphinx * develop
- tensorflow ^2.3.1 develop
- matplotlib *
- numpy *
- optuna ^2.3.0
- pandas ^1.2.0
- python >=3.7.1,<3.11
- joblib *
- matplotlib *
- optuna >=2.0
- pandas >=1.0
- scikit-learn >0.23
- tensorflow >=2.3.1