cvpr22w_robustnessthroughthelens

Official repository of our submission "Adversarial Robustness through the Lens of Convolutional Filters" to the CVPR 2022 workshop "The Art of Robustness: Devil and Angel in Adversarial Machine Learning".

https://github.com/paulgavrikov/cvpr22w_robustnessthroughthelens

Science Score: 41.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 3 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.7%) to scientific vocabulary

Keywords

adversarial-attacks cnn computer-vision convolution-filter convolutional-neural-networks cvpr2022 deep-learning machine-learning robustness robustness-analysis
Last synced: 6 months ago

Repository

Official repository of our submission "Adversarial Robustness through the Lens of Convolutional Filters" to the CVPR 2022 workshop "The Art of Robustness: Devil and Angel in Adversarial Machine Learning".

Basic Info
  • Host: GitHub
  • Owner: paulgavrikov
  • License: cc-by-sa-4.0
  • Language: Jupyter Notebook
  • Default Branch: main
  • Homepage:
  • Size: 13.2 MB
Statistics
  • Stars: 8
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
adversarial-attacks cnn computer-vision convolution-filter convolutional-neural-networks cvpr2022 deep-learning machine-learning robustness robustness-analysis
Created almost 4 years ago · Last pushed over 3 years ago
Metadata Files
Readme License Citation

README.md

Adversarial Robustness through the Lens of Convolutional Filters

Paul Gavrikov, Janis Keuper

CC BY-SA 4.0

Presented at: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) - The Art of Robustness: Devil and Angel in Adversarial Machine Learning Workshop

Paper | ArXiv | HQ Poster

This is a specialized article on robustness derived from our main paper: https://github.com/paulgavrikov/CNN-Filter-DB/

Abstract: Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations to the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and to reduce overfitting. In this context, we perform an investigation of 3x3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the linf-RobustBench CIFAR-10/100 and ImageNet1k leaderboard and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers, and the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.

Figure: Activation of first-stage filters.
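
For illustration, here is a minimal sketch (not the authors' extraction pipeline) of how 3x3 convolution filters can be collected from one of the public RobustBench models analyzed in the paper. It assumes the robustbench and torch packages from requirements.txt are installed; the model name "Carmon2019Unlabeled" is one example entry of the linf CIFAR-10 leaderboard.

import torch
from robustbench.utils import load_model

# Load a pre-trained robust model from the RobustBench model zoo
# (example entry of the linf CIFAR-10 leaderboard).
model = load_model(model_name="Carmon2019Unlabeled",
                   dataset="cifar10", threat_model="Linf")

# Collect all 3x3 convolution filters, flattened to one row per filter.
filters = []
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d) and module.kernel_size == (3, 3):
        # weight shape: (out_channels, in_channels, 3, 3)
        filters.append(module.weight.detach().reshape(-1, 9))

filters = torch.cat(filters)
print(f"Collected {filters.shape[0]} 3x3 filters")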

Data

Download the dataset from https://zenodo.org/record/6414075.
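
Once downloaded, the HDF5 file can be read with pandas, as in the first cell of citations.ipynb. The sketch below assumes the file was saved locally as dataset.h5 and contains a "meta" table with per-model metadata (including the Robust flag and Paper URL).

import pandas as pd

# Placeholder local path to the file downloaded from the Zenodo record.
dataset_path = "dataset.h5"

# The "meta" table holds one row per analyzed model (architecture, paper URL,
# robust/normal flag, ...); reading it requires the `tables` package.
df_meta = pd.read_hdf(dataset_path, "meta")
df_meta.Robust = df_meta.Robust.apply(bool)
print(df_meta.head())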

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{Gavrikov_2022a_CVPR,
    author    = {Gavrikov, Paul and Keuper, Janis},
    title     = {Adversarial Robustness Through the Lens of Convolutional Filters},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {139-147}
}

Dataset: DOI

Legal

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Owner

  • Name: Paul Gavrikov
  • Login: paulgavrikov
  • Kind: user
  • Location: Germany
  • Company: Offenburg University

Citation (citations.ipynb)

{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "0d69db81",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "from multiprocessing import Pool\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.gridspec as grid_spec\n",
    "from tqdm.auto import tqdm\n",
    "from math import ceil\n",
    "import itertools\n",
    "import h5py\n",
    "import io\n",
    "import robustbench"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "d3554f53",
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset_path = \"/data/output/20220226_robustness/dataset.h5\"\n",
    "df_meta = pd.read_hdf(dataset_path, \"meta\")\n",
    "df_meta.Robust = df_meta.Robust.apply(bool)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "36598d0c",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array(['https://openreview.net/forum?id=SHB_znlW5G7',\n",
       "       'https://arxiv.org/abs/2002.11569',\n",
       "       'https://github.com/MadryLab/robustness',\n",
       "       'https://arxiv.org/abs/1901.09960',\n",
       "       'https://arxiv.org/abs/2010.01278',\n",
       "       'https://openreview.net/forum?id=HkeryxBtPB',\n",
       "       'https://arxiv.org/abs/2007.02617',\n",
       "       'https://arxiv.org/abs/2001.03994',\n",
       "       'https://arxiv.org/abs/2103.01946',\n",
       "       'https://arxiv.org/abs/2007.08489',\n",
       "       'https://arxiv.org/abs/2004.05884',\n",
       "       'https://arxiv.org/abs/2104.09425',\n",
       "       'https://arxiv.org/abs/2010.03593', nan,\n",
       "       'https://arxiv.org/abs/1905.13736',\n",
       "       'https://arxiv.org/abs/2110.05626',\n",
       "       'https://arxiv.org/abs/2111.02331',\n",
       "       'https://openreview.net/forum?id=BuD2LmNaU3a',\n",
       "       'https://arxiv.org/abs/2011.11164',\n",
       "       'https://arxiv.org/abs/1901.08573',\n",
       "       'https://arxiv.org/abs/2110.03825',\n",
       "       'https://arxiv.org/abs/2003.12862',\n",
       "       'https://arxiv.org/abs/2110.09468',\n",
       "       'https://arxiv.org/abs/2106.02078',\n",
       "       'https://arxiv.org/abs/2002.10319',\n",
       "       'https://arxiv.org/abs/2002.08619',\n",
       "       'https://arxiv.org/abs/2002.10509',\n",
       "       'https://arxiv.org/abs/2003.09347',\n",
       "       'https://openreview.net/forum?id=rklOg6EFwS',\n",
       "       'https://arxiv.org/abs/1905.00877',\n",
       "       'https://arxiv.org/abs/2002.11242',\n",
       "       'https://arxiv.org/abs/2010.01736'], dtype=object)"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "df_meta.Paper.unique()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "f92a5867",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "@misc{rebuffi2021fixing,\n",
      "      title={Fixing Data Augmentation to Improve Adversarial Robustness}, \n",
      "      author={Sylvestre-Alvise Rebuffi and Sven Gowal and Dan A. Calian and Florian Stimberg and Olivia Wiles and Timothy Mann},\n",
      "      year={2021},\n",
      "      eprint={2103.01946},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.CV}\n",
      "}\n",
      "@misc{huang2022exploring,\n",
      "      title={Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks}, \n",
      "      author={Hanxun Huang and Yisen Wang and Sarah Monazam Erfani and Quanquan Gu and James Bailey and Xingjun Ma},\n",
      "      year={2022},\n",
      "      eprint={2110.03825},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{zhang2020attacks,\n",
      "      title={Attacks Which Do Not Kill Training Make Adversarial Learning Stronger}, \n",
      "      author={Jingfeng Zhang and Xilie Xu and Bo Han and Gang Niu and Lizhen Cui and Masashi Sugiyama and Mohan Kankanhalli},\n",
      "      year={2020},\n",
      "      eprint={2002.11242},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{salman2020adversarially,\n",
      "      title={Do Adversarially Robust ImageNet Models Transfer Better?}, \n",
      "      author={Hadi Salman and Andrew Ilyas and Logan Engstrom and Ashish Kapoor and Aleksander Madry},\n",
      "      year={2020},\n",
      "      eprint={2007.08489},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.CV}\n",
      "}\n",
      "@misc{zhang2019propagate,\n",
      "      title={You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle}, \n",
      "      author={Dinghuai Zhang and Tianyuan Zhang and Yiping Lu and Zhanxing Zhu and Bin Dong},\n",
      "      year={2019},\n",
      "      eprint={1905.00877},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={stat.ML}\n",
      "}\n",
      "@misc{hendrycks2019using,\n",
      "      title={Using Pre-Training Can Improve Model Robustness and Uncertainty}, \n",
      "      author={Dan Hendrycks and Kimin Lee and Mantas Mazeika},\n",
      "      year={2019},\n",
      "      eprint={1901.09960},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{zhang2021geometryaware,\n",
      "      title={Geometry-aware Instance-reweighted Adversarial Training}, \n",
      "      author={Jingfeng Zhang and Jianing Zhu and Gang Niu and Bo Han and Masashi Sugiyama and Mohan Kankanhalli},\n",
      "      year={2021},\n",
      "      eprint={2010.01736},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{chen2021ltd,\n",
      "      title={LTD: Low Temperature Distillation for Robust Adversarial Training}, \n",
      "      author={Erh-Chung Chen and Che-Rung Lee},\n",
      "      year={2021},\n",
      "      eprint={2111.02331},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.CV}\n",
      "}\n",
      "@misc{andriushchenko2020understanding,\n",
      "      title={Understanding and Improving Fast Adversarial Training}, \n",
      "      author={Maksym Andriushchenko and Nicolas Flammarion},\n",
      "      year={2020},\n",
      "      eprint={2007.02617},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{cui2021learnable,\n",
      "      title={Learnable Boundary Guided Adversarial Training}, \n",
      "      author={Jiequan Cui and Shu Liu and Liwei Wang and Jiaya Jia},\n",
      "      year={2021},\n",
      "      eprint={2011.11164},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.CV}\n",
      "}\n",
      "@misc{rice2020overfitting,\n",
      "      title={Overfitting in adversarially robust deep learning}, \n",
      "      author={Leslie Rice and Eric Wong and J. Zico Kolter},\n",
      "      year={2020},\n",
      "      eprint={2002.11569},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{dai2021parameterizing,\n",
      "      title={Parameterizing Activation Functions for Adversarial Robustness}, \n",
      "      author={Sihui Dai and Saeed Mahloujifar and Prateek Mittal},\n",
      "      year={2021},\n",
      "      eprint={2110.05626},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{gowal2021uncovering,\n",
      "      title={Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples}, \n",
      "      author={Sven Gowal and Chongli Qin and Jonathan Uesato and Timothy Mann and Pushmeet Kohli},\n",
      "      year={2021},\n",
      "      eprint={2010.03593},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={stat.ML}\n",
      "}\n",
      "@misc{sitawarin2021sat,\n",
      "      title={SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing}, \n",
      "      author={Chawin Sitawarin and Supriyo Chakraborty and David Wagner},\n",
      "      year={2021},\n",
      "      eprint={2003.09347},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{chen2021efficient,\n",
      "      title={Efficient Robust Training via Backward Smoothing}, \n",
      "      author={Jinghui Chen and Yu Cheng and Zhe Gan and Quanquan Gu and Jingjing Liu},\n",
      "      year={2021},\n",
      "      eprint={2010.01278},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{zhang2019theoretically,\n",
      "      title={Theoretically Principled Trade-off between Robustness and Accuracy}, \n",
      "      author={Hongyang Zhang and Yaodong Yu and Jiantao Jiao and Eric P. Xing and Laurent El Ghaoui and Michael I. Jordan},\n",
      "      year={2019},\n",
      "      eprint={1901.08573},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{wu2020adversarial,\n",
      "      title={Adversarial Weight Perturbation Helps Robust Generalization}, \n",
      "      author={Dongxian Wu and Shu-tao Xia and Yisen Wang},\n",
      "      year={2020},\n",
      "      eprint={2004.05884},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{wong2020fast,\n",
      "      title={Fast is better than free: Revisiting adversarial training}, \n",
      "      author={Eric Wong and Leslie Rice and J. Zico Kolter},\n",
      "      year={2020},\n",
      "      eprint={2001.03994},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{huang2020selfadaptive,\n",
      "      title={Self-Adaptive Training: beyond Empirical Risk Minimization}, \n",
      "      author={Lang Huang and Chao Zhang and Hongyang Zhang},\n",
      "      year={2020},\n",
      "      eprint={2002.10319},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{carmon2022unlabeled,\n",
      "      title={Unlabeled Data Improves Adversarial Robustness}, \n",
      "      author={Yair Carmon and Aditi Raghunathan and Ludwig Schmidt and Percy Liang and John C. Duchi},\n",
      "      year={2022},\n",
      "      eprint={1905.13736},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={stat.ML}\n",
      "}\n",
      "@misc{pang2020boosting,\n",
      "      title={Boosting Adversarial Training with Hypersphere Embedding}, \n",
      "      author={Tianyu Pang and Xiao Yang and Yinpeng Dong and Kun Xu and Jun Zhu and Hang Su},\n",
      "      year={2020},\n",
      "      eprint={2002.08619},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{gowal2021improving,\n",
      "      title={Improving Robustness using Generated Data}, \n",
      "      author={Sven Gowal and Sylvestre-Alvise Rebuffi and Olivia Wiles and Florian Stimberg and Dan Andrei Calian and Timothy Mann},\n",
      "      year={2021},\n",
      "      eprint={2110.09468},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n",
      "@misc{sehwag2020hydra,\n",
      "      title={HYDRA: Pruning Adversarially Robust Neural Networks}, \n",
      "      author={Vikash Sehwag and Shiqi Wang and Prateek Mittal and Suman Jana},\n",
      "      year={2020},\n",
      "      eprint={2002.10509},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.CV}\n",
      "}\n",
      "@misc{sridhar2021improving,\n",
      "      title={Improving Neural Network Robustness via Persistency of Excitation}, \n",
      "      author={Kaustubh Sridhar and Oleg Sokolsky and Insup Lee and James Weimer},\n",
      "      year={2021},\n",
      "      eprint={2106.02078},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={stat.ML}\n",
      "}\n",
      "@misc{chen2020adversarial,\n",
      "      title={Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning}, \n",
      "      author={Tianlong Chen and Sijia Liu and Shiyu Chang and Yu Cheng and Lisa Amini and Zhangyang Wang},\n",
      "      year={2020},\n",
      "      eprint={2003.12862},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.CV}\n",
      "}\n",
      "@misc{sehwag2021robust,\n",
      "      title={Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?}, \n",
      "      author={Vikash Sehwag and Saeed Mahloujifar and Tinashe Handina and Sihui Dai and Chong Xiang and Mung Chiang and Prateek Mittal},\n",
      "      year={2021},\n",
      "      eprint={2104.09425},\n",
      "      archivePrefix={arXiv},\n",
      "      primaryClass={cs.LG}\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "import re\n",
    "import urllib.request as libreq\n",
    "\n",
    "ids = []\n",
    "\n",
    "paper_map = {}\n",
    "\n",
    "for s in df_meta.Paper.unique():\n",
    "    if type(s) is str:\n",
    "        match = re.findall(r\"arxiv.*([0-9]{4}.[0-9]{4,5})\", s)\n",
    "        if len(match) > 0:\n",
    "            idd = match[0]\n",
    "            ids.append(idd)\n",
    "            \n",
    "for idd in set(ids):\n",
    "    url = f\"https://arxiv.org/bibtex/{idd}\"\n",
    "    with libreq.urlopen(url) as data:\n",
    "        citation = data.read().decode(\"utf-8\")\n",
    "        print(citation)\n",
    "        paper_map[idd] = citation"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Dependencies

requirements.txt pypi
  • KDEpy ==1.1.0
  • colorcet ==2.0.6
  • easydict *
  • fast-histogram ==0.10
  • h5py ==3.1.0
  • matplotlib ==3.4.1
  • numpy ==1.21.2
  • numpy *
  • onnx *
  • onnxruntime *
  • pandas *
  • pandas ==1.1.4
  • robustbench *
  • scikit-learn ==0.24.1
  • scipy *
  • tables ==3.7.0
  • tdqm *
  • torch *
  • torchvision *
  • tqdm ==4.53.0
retrain_robustbench/requirements.txt pypi
  • geotorch *
  • pytorch-lightning ==1.6.0
  • torch *
  • torchdiffeq *
  • torchmetrics ==0.7.0
  • torchvision *