https://github.com/kundajelab/influence-release

Science Score: 10.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.7%) to scientific vocabulary

Repository

Basic Info
  • Host: GitHub
  • Owner: kundajelab
  • License: MIT
  • Language: Jupyter Notebook
  • Default Branch: master
  • Size: 114 MB
Statistics
  • Stars: 0
  • Watchers: 7
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Fork of kohpangwei/influence-release
Created about 8 years ago · Last pushed about 8 years ago

# Understanding Black-box Predictions via Influence Functions

This code replicates the experiments from the following paper:

> Pang Wei Koh and Percy Liang
>
> [Understanding Black-box Predictions via Influence Functions](https://arxiv.org/abs/1703.04730)
>
> International Conference on Machine Learning (ICML), 2017.

We have a reproducible, executable, and Dockerized version of these scripts on [Codalab](https://worksheets.codalab.org/worksheets/0x2b314dc3536b482dbba02783a24719fd/).

The datasets for the experiments can also be found at the Codalab link.

Dependencies:
- NumPy/SciPy/scikit-learn/Pandas
- TensorFlow (tested on v1.1.0)
- Keras (tested on v2.0.4)
- spaCy (tested on v1.8.2)
- h5py (tested on v2.7.0)
- Matplotlib/Seaborn (for visualizations)

A Dockerfile with these dependencies can be found here: https://hub.docker.com/r/pangwei/tf1.1/

---

In this paper, we use influence functions (a classic technique from robust statistics) to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying the training points most responsible for a given prediction. To scale influence functions up to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models, where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually indistinguishable training-set attacks.
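To make the core computation concrete, here is a minimal NumPy-only sketch of the influence-function idea (not the repository's TensorFlow implementation). It fits a small L2-regularized logistic regression, and then for a test point z_test scores each training point z by the paper's quantity -∇L(z_test)ᵀ H⁻¹ ∇L(z). The model, data, and hyperparameters are illustrative assumptions; the exact Hessian inverse is only feasible here because the parameter dimension is tiny, which is precisely why the paper relies on Hessian-vector products instead:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lam=0.1, lr=0.1, steps=2000):
    """Fit L2-regularized logistic regression by plain gradient descent."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * (X.T @ (p - y) / n + lam * w)
    return w

def grad_loss(w, x, y):
    """Gradient of a single example's cross-entropy loss."""
    return (sigmoid(x @ w) - y) * x

def hessian(w, X, lam):
    """Exact Hessian of the regularized objective (only viable for tiny d)."""
    n, d = X.shape
    p = sigmoid(X @ w)
    return (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)

def influence(w, X, y, x_test, y_test, lam=0.1):
    """I(z, z_test) = -grad L(z_test)^T H^{-1} grad L(z) for each training z."""
    H_inv = np.linalg.inv(hessian(w, X, lam))
    g_test = grad_loss(w, x_test, y_test)
    return np.array([-g_test @ H_inv @ grad_loss(w, X[i], y[i])
                     for i in range(X.shape[0])])

# Toy data: label is (noisily) the sign of the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = (X[:, 0] + 0.1 * rng.normal(size=40) > 0).astype(float)
w = fit_logreg(X, y)
infl = influence(w, X, y, X[0], y[0])
```

Negative scores mark training points whose upweighting would lower the test loss (helpful points); the test point's own training copy scores at most zero, since -gᵀH⁻¹g ≤ 0 for a positive-definite H. In the paper's setting, the explicit `np.linalg.inv` call is replaced by implicit H⁻¹v products computed via conjugate gradients or the LiSSA stochastic estimator.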

If you have questions, please contact Pang Wei Koh.

Owner

  • Name: Kundaje Lab
  • Login: kundajelab
  • Kind: organization
  • Location: Stanford University

Compbio and machine learning code repositories from the Kundaje Lab at Stanford Genetics and Computer Science Depts.
