timeseries-explain

Timeseries Explain

https://github.com/kingspp/timeseries-explain

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.4%) to scientific vocabulary

Keywords

explainable-ai time-series-explanation xai
Last synced: 6 months ago

Repository

Timeseries Explain

Basic Info
Statistics
  • Stars: 8
  • Watchers: 1
  • Forks: 2
  • Open Issues: 2
  • Releases: 0
Topics
explainable-ai time-series-explanation xai
Created over 4 years ago · Last pushed over 3 years ago
Metadata Files
Readme License Citation

README.md


PERT - PErturbation by Prioritized ReplacemenT

Prathyush Parvatharaju, Ramesh Doddaiah, Tom Hartvigsen, Elke Rundensteiner

Paper: #insert link

Usage

TS-Explain supports command-line usage and provides a Python-based API.

Command Line Usage

```bash
pip install tsexp

# Explain a single instance and output to an image
tsexp -i 131 -f data.csv -m xyz.model -o saliency.png

# Explain a dataset
tsexp -a pert -f data.csv -m xyz.model -o saliency.csv
```

Python API

```python
from tsexp import PERT

# Explain a single instance
saliency = PERT.explain_instance(...)

# Explain a dataset
saliencies = PERT.explain(...)
```

API Documentation

Timeseries Explain

Abstract

Explainable classification is essential in high-impact settings where practitioners require evidence to support their decisions. However, state-of-the-art deep learning models suffer from a lack of transparency in how they derive their predictions. One common form of explainability, termed attribution-based explainability, identifies which input features are used by the classifier for its prediction. While such explainability for image classifiers has recently received focus, little work has been done to date to address explainability for deep time series classifiers. In this work, we thus propose PERT, a novel perturbation-based explainability method designed explicitly for time series that can explain any classifier. PERT adaptively learns to perform timestep-specific interpolation to perturb instances and explain a black-box model's predictions for a given instance, learning which timesteps lead to different behavior in the classifier's predictions. For this, PERT pairs two novel complementary techniques into an integrated architecture: a Prioritized Replacement Selector that learns to select the best replacement time series from the background dataset specific to the instance-of-interest, and a novel, learnable Guided-Perturbation Function that uses the replacement time series to carefully perturb an input instance's timesteps and discover the impact of each timestep on a black-box classifier's final prediction. Across our experiments recording three metrics on nine publicly available datasets, we find that PERT consistently outperforms state-of-the-art explainability methods. We also present a case study using the CricketX dataset demonstrating that PERT succeeds in finding the relevant regions of gesture-recognition time series.
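PERT itself *learns* the replacement series and the perturbation function as described above; as a rough intuition for the perturbation-based idea, the following is a minimal occlusion-style sketch (not the project's API — all names here are illustrative): each timestep is swapped with the corresponding value from a background replacement series, and the drop in the classifier's confidence for the target class is recorded as that timestep's saliency.

```python
import numpy as np

def perturbation_saliency(predict_fn, instance, replacement, target_class):
    """Occlusion-style saliency for a 1-D time series.

    For each timestep, substitute the value from a replacement
    (background) series and measure how much the classifier's
    confidence in the target class drops. A larger drop means
    the timestep mattered more to the prediction.
    """
    base = predict_fn(instance)[target_class]
    saliency = np.zeros(len(instance))
    for t in range(len(instance)):
        perturbed = instance.copy()
        perturbed[t] = replacement[t]  # perturb one timestep at a time
        saliency[t] = base - predict_fn(perturbed)[target_class]
    return saliency
```

Unlike this one-timestep-at-a-time sketch, PERT's Guided-Perturbation Function is learnable and interpolates per timestep, and its Prioritized Replacement Selector chooses the replacement series per instance rather than using a fixed background.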

Requirements

Python 3.7+

Development

```bash
# Bare installation
git clone https://github.com/kingspp/pert

# With pre-trained models and datasets
git clone --recurse-submodules -j8 https://github.com/kingspp/pert

# Install requirements
cd pert && pip install -r requirements.txt
```

Reproduction

```bash
python3 main.py --pname TEST --task_id 10 \
  --run_mode turing --jobs_per_task 20 \
  --algo pert \
  --dataset wafer \
  --enable_dist False \
  --enable_lr_decay False \
  --grad_replacement random_instance \
  --eval_replacement class_mean \
  --background_data_perc 100 \
  --run_eval True \
  --enable_seed True \
  --w_decay 0.00 \
  --bbm dnn \
  --max_itr 500
```

Cite

```bibtex
@inproceedings{parvatharaju2021learning,
  author    = {Parvatharaju, Prathyush and Doddaiah, Ramesh and Hartvigsen, Thomas and Rundensteiner, Elke},
  title     = {Learning Saliency Maps for Deep Time Series Classifiers},
  booktitle = {ACM International Conference on Information and Knowledge Management},
  year      = {2021},
}
```

Owner

  • Name: Prathyush SP
  • Login: kingspp
  • Kind: user
  • Location: Boston, Massachusetts
  • Company: CodaMetrix

Sr. Machine Learning Engineer @CodaMetrix, Adjunct Lecturer @WPI
