ood_detection_framework

Code for Confidence-based Out-of-Distribution Detection: A Comparative Study and Analysis

https://github.com/christophbrgr/ood_detection_framework

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.6%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: christophbrgr
  • License: MIT
  • Language: Jupyter Notebook
  • Default Branch: main
  • Size: 1.38 MB
Statistics
  • Stars: 8
  • Watchers: 2
  • Forks: 0
  • Open Issues: 1
  • Releases: 0
Created over 4 years ago · Last pushed over 2 years ago
Metadata Files
Readme · License · Citation · Security

README.md

OOD Detection Framework


Code for our paper *Confidence-based Out-of-Distribution Detection: A Comparative Study and Analysis*, which won the Best Paper Award at the UNSURE 2021 workshop at MICCAI.

Berger, Christoph, Magdalini Paschali, Ben Glocker, and Konstantinos Kamnitsas. "Confidence-based Out-of-Distribution Detection: A Comparative Study and Analysis." arXiv preprint arXiv:2107.02568 (2021).

Intro

Methods

Implemented methods:

* Maximum Class Probability
* Deep Ensembles
* ODIN
* Mahalanobis Distance
* Center Loss
* Mahalanobis Ensemble
* Monte Carlo Dropout
* Monte Carlo Dropout Ensemble
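The simplest of these baselines, Maximum Class Probability (also known as maximum softmax probability), scores each input by the largest softmax output of the classifier and flags low-confidence inputs as out-of-distribution. A minimal NumPy sketch of that scoring rule (function names here are illustrative, not from this repository):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum Class Probability: higher score = more likely in-distribution."""
    return softmax(logits).max(axis=-1)

# A confident prediction scores near 1; a near-uniform one scores near 1/K.
confident = np.array([[10.0, 0.0, 0.0]])
uniform = np.array([[0.0, 0.0, 0.0]])
print(msp_score(confident), msp_score(uniform))
```

Thresholding this score gives the basic detector that the other methods (ODIN, Mahalanobis, ensembles) try to improve on.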

Usage Instructions

Datasets

The code works with the CIFAR10 vs SVHN and CheXpert OOD detection tasks. CIFAR10 and SVHN are downloaded automatically via torchvision. For CheXpert, download CheXpert-v1.0-small and place the folder in datasets. The current default settings for CheXpert can be found in config/chexpert.py and are grouped into the two settings we trained on (see Table 2 of the paper).

Requirements

This code was developed and tested on Python 3.8. Install the required packages from requirements.txt. Both training and inference require a supported NVIDIA GPU.

Training

Train new models using train_wrn_cifar10.py or train_wrn_chexpert.py. To adapt the default settings, edit the config files for the respective datasets. The default values yielded the best results in our training, but you might have to reduce the batch size to fit the data on your GPU. We also use Weights and Biases for logging; you can enable or disable it via a command line argument (--wandb).
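An opt-in flag like --wandb is typically wired up with argparse's store_true action; a sketch of that pattern (the actual training scripts may define their arguments differently):

```python
import argparse

# Hypothetical parser mirroring the --wandb toggle described above.
parser = argparse.ArgumentParser(description="WideResNet training (sketch)")
parser.add_argument("--wandb", action="store_true",
                    help="enable Weights & Biases logging (off by default)")

args = parser.parse_args(["--wandb"])
print(args.wandb)  # True when the flag is passed, False otherwise
```

With action="store_true", omitting the flag leaves logging disabled, so runs without a W&B account still work.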

Evaluation

Run eval_chexpert.py -h or eval_cifar10.py -h to see which methods you can run. You can also specify the exact IDs of pretrained models in the config to select which checkpoints the code loads.
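OOD detection results like the ones reported below are usually summarized by AUROC over the confidence scores of in-distribution versus OOD samples. A self-contained NumPy sketch via the rank-sum (Mann-Whitney U) formulation, assuming higher scores mean in-distribution and no tied scores (this is an illustration, not the repository's evaluation code):

```python
import numpy as np

def auroc(scores_in, scores_out):
    """AUROC for separating in-distribution (label 1) from OOD (label 0).

    Uses the rank-sum identity: AUROC = U / (n_in * n_out). Ties are
    not handled here, which is fine for continuous confidence scores.
    """
    scores_in = np.asarray(scores_in, dtype=float)
    scores_out = np.asarray(scores_out, dtype=float)
    scores = np.concatenate([scores_in, scores_out])
    labels = np.concatenate([np.ones_like(scores_in), np.zeros_like(scores_out)])
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks
    n_in, n_out = len(scores_in), len(scores_out)
    u = ranks[labels == 1].sum() - n_in * (n_in + 1) / 2
    return u / (n_in * n_out)

# Perfect separation: every in-distribution score beats every OOD score.
print(auroc([0.9, 0.8], [0.1, 0.2]))  # 1.0
```

An AUROC of 0.5 corresponds to a detector no better than chance, which makes it a convenient method-agnostic yardstick across the baselines above.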

Further Analysis and Files

Some of the Jupyter notebooks used for plotting figures or analyzing the results in more depth are located in the notebooks directory. Note, however, that some of this code is experimental and might not work as intended.

Reference Results

CIFAR10 vs SVHN

CIFAR10 Results

CheXpert

CheXpert Results

Repositories that helped us assemble this collection of baselines

  • WideResNet from https://github.com/meliketoy/wide-resnet.pytorch/blob/master/config.py
  • Values from SVHN for normalization: https://deepobs.readthedocs.io/en/develop/_modules/deepobs/pytorch/datasets/svhn.html
  • Mahalanobis: https://github.com/pokaxpoka/deep_Mahalanobis_detector
  • ODIN: https://github.com/facebookresearch/odin
  • DUQ: https://github.com/y0ast/deterministic-uncertainty-quantification
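The SVHN normalization values referenced above are just per-channel means and standard deviations of the training images, as passed to torchvision's Normalize transform. A sketch of how such constants can be computed (the (N, H, W, C) uint8 layout is an assumption for illustration):

```python
import numpy as np

def channel_stats(images):
    """Per-channel mean and std for normalization.

    images: uint8 array of shape (N, H, W, 3); values are scaled to
    [0, 1] before averaging, matching torchvision's ToTensor convention.
    """
    x = images.astype(np.float64) / 255.0
    mean = x.mean(axis=(0, 1, 2))
    std = x.std(axis=(0, 1, 2))
    return mean, std

# Toy example: an all-white batch has mean 1.0 and std 0.0 per channel.
mean, std = channel_stats(np.full((2, 32, 32, 3), 255, dtype=np.uint8))
print(mean, std)
```

Reusing the published constants instead of recomputing them keeps results comparable across codebases.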

How to cite

Please cite as: Berger, Christoph, Magdalini Paschali, Ben Glocker, and Konstantinos Kamnitsas. "Confidence-based Out-of-Distribution Detection: A Comparative Study and Analysis." arXiv preprint arXiv:2107.02568 (2021).

BibTeX:

```bibtex
@article{berger2021confidence,
  title={Confidence-based Out-of-Distribution Detection: A Comparative Study and Analysis},
  author={Berger, Christoph and Paschali, Magdalini and Glocker, Ben and Kamnitsas, Konstantinos},
  journal={arXiv preprint arXiv:2107.02568},
  year={2021}
}
```

Owner

  • Name: Christoph Berger
  • Login: christophbrgr
  • Kind: user
  • Location: Munich, Germany
  • Company: Tanso Technologies

PM @tanso-technologies 🌳 Previously at TU Munich, BiomedIA at Imperial College London, CBIL at Harvard Med School and DFCI in Boston.

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: Berger
  affiliation: "Computer-Aided Medical Procedures, Technical University of Munich"
  given-names: Christoph
- affiliation: "Computer-Aided Medical Procedures, Technical University of Munich"
  family-names: Paschali
  given-names: Magdalini
- affiliation: "Imperial College London"
  family-names: Glocker
  given-names: Ben
- affiliation: "Imperial College London"
  family-names: Kamnitsas
  given-names: Konstantinos
title: "ood_detection_framework"
version: 1.0.0
date-released: 2021-07-28
url: "https://github.com/christophbrgr/ood_detection_framework"


Dependencies

requirements.txt pypi
  • Pillow ==9.0.1
  • numpy ==1.22.0
  • pandas ==1.2.4
  • scikit_learn ==0.24.2
  • scipy ==1.6.3
  • torch ==1.8.1
  • torchvision ==0.9.1
  • tqdm ==4.61.2
  • wandb ==0.11.0
.github/workflows/codeql-analysis.yml actions
  • actions/checkout v2 composite
  • github/codeql-action/analyze v1 composite
  • github/codeql-action/autobuild v1 composite
  • github/codeql-action/init v1 composite