deeppatch

incubator-farfetch

https://github.com/wsine/deeppatch

Science Score: 57.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 6 DOI reference(s) in README
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.3%) to scientific vocabulary
Last synced: 7 months ago

Repository

incubator-farfetch

Basic Info
  • Host: GitHub
  • Owner: Wsine
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 5.03 MB
Statistics
  • Stars: 2
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 3
Created almost 5 years ago · Last pushed about 2 years ago
Metadata Files
Readme License Citation

README.md

DeepPatch

Project Code: Farfetch'd

For the technical work, please refer to the following publication.

Publication

Zhengyuan Wei, Haipeng Wang, Imran Ashraf, and Wing-Kwong Chan. 2023. DeepPatch: Maintaining Deep Learning Model Programs to Retain Standard Accuracy with Substantial Robustness Improvement. ACM Trans. Softw. Eng. Methodol. 32, 6, Article 150 (November 2023), 49 pages. https://doi.org/10.1145/3604609

```bibtex
@article{10.1145/3604609,
  author = {Wei, Zhengyuan and Wang, Haipeng and Ashraf, Imran and Chan, Wing-Kwong},
  title = {DeepPatch: Maintaining Deep Learning Model Programs to Retain Standard Accuracy with Substantial Robustness Improvement},
  year = {2023},
  issue_date = {November 2023},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {32},
  number = {6},
  issn = {1049-331X},
  url = {https://doi.org/10.1145/3604609},
  doi = {10.1145/3604609},
  abstract = {Maintaining a deep learning (DL) model by making the model substantially more robust through retraining with plenty of adversarial examples of non-trivial perturbation strength often reduces the model's standard accuracy. Many existing model repair or maintenance techniques sacrifice standard accuracy to produce a large gain in robustness or vice versa. This article proposes DeepPatch, a novel technique to maintain filter-intensive DL models. To the best of our knowledge, DeepPatch is the first work to address the challenge of standard accuracy retention while substantially improving the robustness of DL models with plenty of adversarial examples of non-trivial and diverse perturbation strengths. Rather than following the conventional wisdom to generalize all the components of a DL model over the union set of clean and adversarial samples, DeepPatch formulates a novel division of labor method to adaptively activate a subset of its inserted processing units to process individual samples. Its produced model can generate the original or replacement feature maps in each forward pass of the patched model, making the patched model carry an intrinsic property of behaving like the model under maintenance on demand. The overall experimental results show that DeepPatch successfully retains the standard accuracy of all pretrained models while improving the robustness accuracy substantially. However, the models produced by the peer techniques suffer from either large standard accuracy loss or small robustness improvement compared with the models under maintenance, rendering them unsuitable in general to replace the latter.},
  journal = {ACM Trans. Softw. Eng. Methodol.},
  month = {sep},
  articleno = {150},
  numpages = {49},
  keywords = {accuracy recovery, maintenance, Model testing}
}
```

Prerequisites

  • Nvidia CUDA
  • Python
  • Pipenv

The project is maintained with Pipenv, which is highly recommended for Python projects. Please refer to the link for more details and installation instructions.

This project is tested under Ubuntu 18.04, Python 3.6, and CUDA 11.

Installation

Installing the exact pinned dependencies takes just one command:

```bash
pipenv sync
```

How to run

The project contains four stages.

```mermaid
graph LR
pretrain --> assess --> correct --> evaluate
```

  • Pretrain (optional): automatically download the model and the dataset, and evaluate the pretrained performance.
  • Assess: prioritize the filters to be blamed.
  • Correct: correct the model with patching units.
  • Evaluate: evaluate the performance of the patched model.
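The four stages above are separate commands run in order; a minimal Python driver can chain them. This is a sketch, not part of the project: `stage_command` and `run_pipeline` are hypothetical helpers wrapping the `pipenv run <stage>` invocations shown later in this README.

```python
import subprocess

# Ordered stages from the README: pretrain -> assess -> correct -> evaluate.
STAGES = ["pretrain", "assess", "correct", "evaluate"]

def stage_command(stage):
    """Build the `pipenv run <stage>` invocation for one stage."""
    return ["pipenv", "run", stage]

def run_pipeline(dry_run=False):
    """Run every stage in order, stopping at the first failure."""
    for stage in STAGES:
        cmd = stage_command(stage)
        if dry_run:
            print(" ".join(cmd))          # just show what would run
        else:
            subprocess.run(cmd, check=True)  # raise if a stage fails

if __name__ == "__main__":
    run_pipeline(dry_run=True)  # print the commands without executing them
```

Since the pretrain stage is optional, a real driver might start from `assess` when pretrained checkpoints are already available.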

Here, we take the resnet32 model and the cifar10 dataset as an example.

The bootstrap commands are organized as pipenv scripts.

You can list and inspect the example commands as follows.

```bash
~/workspace/deeppatch{main} > pipenv scripts
Command   Script
pretrain  python src/eval.py -m resnet32 -d cifar10
assess    python src/select.py -m resnet32 -d cifar10 -f perfloss
correct   python src/correct.py -m resnet32 -d cifar10 -f perfloss -c patch --crttype replace
evaluate  python src/switch.py -m resnet32 -d cifar10 -f perfloss -c patch --crttype replace
```

To execute a single stage, bootstrap it with `pipenv run pretrain` (and likewise for the other stages); pass the `--help` flag to see the required and optional arguments.
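The help text below comes from a standard argparse interface. As an illustration only, the sketch here reconstructs a small subset of it; the flags and choices are taken from the help output, but the defaults and which options are required are assumptions, and this is not the project's actual `select.py`.

```python
import argparse

def build_parser():
    # Partial reconstruction of select.py's CLI (a sketch: only a few of
    # the real flags; defaults here are assumptions, not the project's).
    parser = argparse.ArgumentParser(prog="select.py")
    parser.add_argument("-m", "--model", required=True,
                        choices=["resnet32", "mobilenetv2x05",
                                 "vgg13bn", "shufflenetv2x10"])
    parser.add_argument("-d", "--dataset", required=True,
                        choices=["cifar10", "cifar100"])
    parser.add_argument("-f", "--fsmethod", required=True,
                        choices=["featswap", "perfloss", "ratioestim"])
    parser.add_argument("--device", choices=["cpu", "cuda"], default="cuda")
    return parser

args = build_parser().parse_args(
    ["-m", "resnet32", "-d", "cifar10", "-f", "perfloss"])
print(args.model, args.dataset, args.fsmethod)  # resnet32 cifar10 perfloss
```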

```bash
~/workspace/deeppatch{main} > pipenv run assess --help
Loading .env environment variables...
usage: select.py [-h] [--datadir DATADIR] [--outputdir OUTPUTDIR]
                 [--device {cpu,cuda}] [--gpu {0,1,2,3}] [-b BATCHSIZE]
                 -m {resnet32,mobilenetv2x05,vgg13bn,shufflenetv2x10} [-r]
                 -d {cifar10,cifar100} [-n {gaussion}] [--lr LR]
                 [--momentum MOMENTUM] [--weightdecay WEIGHTDECAY]
                 [-e MAX_EPOCH] -f {featswap,perfloss,ratioestim}
                 -c {patch,finetune} [--crttype {crtunit,replace}]
                 [--crtepoch CRTEPOCH] [--suspratio SUSPRATIO]
                 [--suspside {front,rear,random}]

optional arguments:
  -h, --help            show this help message and exit
  --datadir DATADIR
  --outputdir OUTPUTDIR
  --device {cpu,cuda}
  --gpu {0,1,2,3}
  -b BATCHSIZE, --batchsize BATCHSIZE
  -m {resnet32,mobilenetv2x05,vgg13bn,shufflenetv2x10}, --model {resnet32,mobilenetv2x05,vgg13bn,shufflenetv2x10}
  -r, --resume
  -f {featswap,perfloss,ratioestim}, --fsmethod {featswap,perfloss,ratioestim}
  -c {patch,finetune}, --crtmethod {patch,finetune}
  --crttype {crtunit,replace}
  --crtepoch CRTEPOCH
  --suspratio SUSPRATIO
  --suspside {front,rear,random}

dataset:
  -d {cifar10,cifar100}, --dataset {cifar10,cifar100}
  -n {gaussion}, --noise_type {gaussion}

optimizer:
  --lr LR               learning rate
  --momentum MOMENTUM
  --weightdecay WEIGHTDECAY
  -e MAX_EPOCH, --maxepoch MAX_EPOCH
```

A progress bar is shown during execution, and the results are logged under a default folder named `output`.

Owner

  • Name: Jankin Wei
  • Login: Wsine
  • Kind: user

To be simple, to be powerful.

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you conduct research with this technique, please cite it as below."
authors:
- family-names: "Wei"
  given-names: "Zhengyuan"
  orcid: "https://orcid.org/0000-0001-5966-1338"
title: "Implementation of DeepPatch"
version: 1.0.0
doi: 10.5281/zenodo.5401230
date-released: 2021-09-03
url: "https://github.com/Wsine/deeppatch"

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Committers

Last synced: 10 months ago

All Time
  • Total Commits: 57
  • Total Committers: 1
  • Avg Commits per committer: 57.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Wsine w****3@h****m 57

Issues and Pull Requests

Last synced: 10 months ago

All Time
  • Total issues: 1
  • Total pull requests: 0
  • Average time to close issues: 14 days
  • Average time to close pull requests: N/A
  • Total issue authors: 1
  • Total pull request authors: 0
  • Average comments per issue: 0.0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • Wsine (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Dependencies

Pipfile.lock pypi
  • joblib ==1.1.0
  • numpy ==1.23.1
  • pillow ==9.2.0
  • scikit-learn ==1.1.1
  • scipy ==1.9.0
  • sklearn ==0.0
  • threadpoolctl ==3.1.0
  • torch ==1.8.1
  • torchvision ==0.9.1
  • tqdm ==4.64.0
  • typing-extensions ==4.3.0
Pipfile pypi
  • numpy *
  • sklearn *
  • torch ==1.8.1
  • torchvision ==0.9.1
  • tqdm *