amp

Heteroscedastic temperature estimation for OOD detection

https://github.com/llnl/amp

Science Score: 52.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
    Organization llnl has institutional domain (software.llnl.gov)
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (13.8%) to scientific vocabulary

Keywords

machine-learning neural-network
Last synced: 6 months ago

Repository

Heteroscedastic temperature estimation for OOD detection

Basic Info
  • Host: GitHub
  • Owner: LLNL
  • License: gpl-2.0
  • Language: Jupyter Notebook
  • Default Branch: main
  • Homepage:
  • Size: 554 MB
Statistics
  • Stars: 5
  • Watchers: 2
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
machine-learning neural-network
Created over 3 years ago · Last pushed almost 3 years ago
Metadata Files
Readme License Citation

README.md

AMP

Code and models for "Out of Distribution Detection via Neural Network Anchoring", ACML 2022.

Dependencies

This package was built and tested using:
  • Python 3.7.9
  • PyTorch 1.13.1
  • Torchvision 0.12.0
  • NumPy 1.19.2

For logging and config files we use yaml (5.4.1) and logging (0.5.1.2).
All of these can be installed (a dedicated environment is recommended) with pip install -r requirements.txt.

Checkpoints and pre-trained models

Pre-trained models (CIFAR-10/100: ResNet34, WRN) to reproduce the experiments from the paper can be downloaded from the Google Drive link. The code assumes checkpoints are placed as ckpt_save/in_dataset/modeltype_seed/model_name, for example ckpts/cifar100/WideResNet_seed_1/ckpt-199.pth.

The tarball containing checkpoints already preserves this directory structure, and its location must be specified in the config.yml before evaluating. Please get in touch if you are interested in the ImageNet checkpoints!
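The README does not show the schema of config.yml, so the fragment below is purely a hypothetical illustration of pointing the evaluation at the extracted checkpoints; every key name here is an assumption, not the repository's actual configuration:

```
# Hypothetical config.yml fragment -- key names are illustrative only.
ckpt_save: ckpts        # root directory the checkpoint tarball was extracted into
in_dataset: cifar100    # in-distribution dataset
model: WideResNet
seed: 1                 # resolves to ckpts/cifar100/WideResNet_seed_1/
```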

Training your own anchored model

Converting an existing network to work with anchoring is straightforward:

```
from lib.utils.models import ResNet34  # import any CNN model to train
from lib.AnchoringModel import ANT

# The only modification: the input has 2x the usual channels, so nc = 6.
net = ResNet34(nc=6, numclasses=10)
anchored_net = ANT(net)  # everything else remains unchanged
...
preds = anchored_net(images)
loss = criterion(labels, preds)
loss.backward()
```

It is recommended to use consistency during training; this can be done by obtaining predictions as preds = anchored_net(images, corrupt=True). For optimal performance, we use a schedule for corruption:

```
corrupt = batch_idx % 5 == 0
outputs = anchored_net(inputs, corrupt=corrupt)
```

LSUN Resizing Benchmark

We provide a new benchmark to test OOD robustness to resizing artifacts. This can be found in resize_ood/resize_benchmark.tar.gz. To use it, extract the dataset from the tarball and point to it in the config.yml file before executing main.py.
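Extraction can be scripted with Python's standard library; extract_benchmark and the destination path data/resize_benchmark are illustrative assumptions, not names used by the repository:

```python
import tarfile
from pathlib import Path

def extract_benchmark(tar_path, dest="data/resize_benchmark"):
    """Extract a gzipped benchmark tarball into dest and list its top-level entries."""
    Path(dest).mkdir(parents=True, exist_ok=True)
    with tarfile.open(tar_path, "r:gz") as tf:
        tf.extractall(dest)
    return sorted(p.name for p in Path(dest).iterdir())

# e.g. extract_benchmark("resize_ood/resize_benchmark.tar.gz")
```

After extracting, point config.yml at the destination directory before running main.py.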

Reproducibility

We provide simple bash scripts to reproduce the different tables/figures in the paper. These can be found and executed in reproducibility/. They depend on the pre-trained checkpoints above, which must be downloaded first. We also provide a separate config file with the exact settings used in our experiments.

Citation

If you use this code, please consider citing our paper as follows:

```
@inproceedings{anirudh2022out,
  title={Out of Distribution Detection via Neural Network Anchoring},
  author={Anirudh, Rushil and Thiagarajan, Jayaraman J},
  booktitle={Asian Conference on Machine Learning (ACML)},
  year={2022},
  organization={PMLR}
}
```

License

This code is distributed under the terms of the GPL-2.0 license. All new contributions must be made under this license. LLNL-CODE-838619 SPDX-License-Identifier: GPL-2.0

Owner

  • Name: Lawrence Livermore National Laboratory
  • Login: LLNL
  • Kind: organization
  • Email: github-admin@llnl.gov
  • Location: Livermore, CA, USA

For over 70 years, the Lawrence Livermore National Laboratory has applied science and technology to make the world a safer place.

Citation (CITATION.cff)

cff-version: 1.2.0
authors:
- family-names: "Anirudh"
  given-names: "Rushil"
- family-names: "Thiagarajan"
  given-names: "Jayaraman J."
title: "Out of Distribution Detection via Neural Network Anchoring"
version: 0.1
doi: 
date-released: 2022-09-01
url: https://github.com/LLNL/AMP
preferred-citation:
  type: article
  authors:
  - family-names: "Anirudh"
    given-names: "Rushil"
  - family-names: "Thiagarajan"
    given-names: "Jayaraman J."
  title: "Out of Distribution Detection via Neural Network Anchoring"
  conference: "Asian Conference on Machine Learning (ACML)"
  year: 2022
  organization: "PMLR"

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0