https://github.com/bethgelab/adversarial-vision-challenge
NIPS Adversarial Vision Challenge
Science Score: 10.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ○ codemeta.json file
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links (links to: arxiv.org)
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity: 12.7%)
Keywords
Repository
NIPS Adversarial Vision Challenge
Basic Info
- Host: GitHub
- Owner: bethgelab
- Language: Python
- Default Branch: master
- Homepage: https://www.crowdai.org/challenges/nips-2018-adversarial-vision-challenge
- Size: 23.9 MB
Statistics
- Stars: 41
- Watchers: 12
- Forks: 12
- Open Issues: 8
- Releases: 0
Topics
Metadata Files
README.md
NIPS Adversarial Vision Challenge
Publication
https://arxiv.org/abs/1808.01976
Installation
To install the package simply run:
```
pip install adversarial-vision-challenge
```
This package contains helper functions to implement models and attacks that can be used with Python 2.7, 3.4, 3.5 and 3.6. Other Python versions might work as well. We recommend using Python 3!
Furthermore, this package contains test scripts that you should run before submission to test your model or attack locally. These test scripts require Python 3, because they depend on crowdai-repo2docker. See the section on running the test scripts below for more detailed information.
Implementing a model
To run a model server, load your model and wrap it as a foolbox model.
Then pass the foolbox model to the model_server function.
```python
from adversarial_vision_challenge import model_server

foolbox_model = load_your_foolbox_model()
model_server(foolbox_model)
```
Implementing an attack
To run an attack, use the load_model method to get a model instance that is callable to get the predicted labels.
```python
from adversarial_vision_challenge.utils import read_images, store_adversarial
from adversarial_vision_challenge.utils import load_model

model = load_model()

for (file_name, image, label) in read_images():
    # model is callable and returns the predicted class,
    # i.e. 0 <= model(image) < 200

    # run your adversarial attack
    adversarial = your_attack(model, image, label)

    # store the adversarial
    store_adversarial(file_name, adversarial)
```
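The `your_attack` call above is a placeholder for the participant's own attack. As an illustration only (none of these names come from the repo), a minimal untargeted attack could blend the clean image toward a constant image until the predicted label changes; `dummy_model` below is a stand-in for the callable returned by `load_model`:

```python
import numpy as np

def blend_attack(model, image, label, target_value=0.0, steps=100):
    """Toy untargeted attack: linearly blend the clean image toward a
    constant image (all pixels = target_value) and return the first
    blend whose predicted class differs from the true label."""
    for alpha in np.linspace(0.0, 1.0, steps):
        candidate = (1.0 - alpha) * image + alpha * target_value
        if model(candidate) != label:
            return candidate  # first candidate that changes the prediction
    return None  # attack failed within the blending budget

# Stand-in for the challenge model: "predicts" class 0 while the mean
# pixel value stays above 64, class 1 otherwise (pixels are in [0, 255]).
clean = np.full((64, 64, 3), 128.0)
dummy_model = lambda img: 0 if img.mean() > 64.0 else 1

adv = blend_attack(dummy_model, clean, label=0)
```

A real submission would replace `blend_attack` with a stronger attack; the point here is only the interface: the model is a callable returning a class index, and the attack returns a perturbed image (or `None` on failure).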
Running the Test Scripts
The test scripts (running on your host machine) require Python 3. Your model or attack running inside a Docker container and using this package can use Python 2 or 3.
Run the commands below inside the folder you want to test:

- To test a model: `avc-test-model .`
- To test an untargeted attack: `avc-test-untargeted-attack .`
- To test a targeted attack: `avc-test-targeted-attack .`
In order for the tests to work, your model / attack folders need to have the following structure:

- for models: https://gitlab.crowdai.org/adversarial-vision-challenge/nips18-avc-model-template
- for attacks: https://gitlab.crowdai.org/adversarial-vision-challenge/nips18-avc-attack-template
FAQ
Can you recommend some papers to get more familiar with adversarial examples, attacks and the threat model considered in this NIPS competition?
Have a look at our reading list that summarizes papers relevant for this competition.
How can I cite the competition in my own work?
```bibtex
@inproceedings{adversarial_vision_challenge,
  title     = {Adversarial Vision Challenge},
  author    = {Brendel, Wieland and Rauber, Jonas and Kurakin, Alexey and Papernot, Nicolas and Veliqi, Behar and Salath{\'e}, Marcel and Mohanty, Sharada P and Bethge, Matthias},
  booktitle = {32nd Conference on Neural Information Processing Systems (NIPS 2018) Competition Track},
  year      = {2018},
  url       = {https://arxiv.org/abs/1808.01976}
}
```
Why can I not pass bounds = (0, 1) when creating the foolbox model?
We expect that all models process images that have values between 0 and 255. Therefore, we enforce that the model bounds are set to (0, 255). If your model expects images with values between 0 and 1, you can just pass bounds=(0, 255) and preprocessing=(0, 255), then the Foolbox model wrapper will divide all inputs by 255. Alternatively, you can leave preprocessing at (0, 1) and change your model to expect values between 0 and 255.
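In other words, the preprocessing argument acts as a (subtract, divide) pair applied to inputs before they reach your model, so `preprocessing=(0, 255)` rescales `[0, 255]` challenge images into the `[0, 1]` range your model expects. A minimal numpy sketch of that arithmetic (an illustration of the behavior described above, not Foolbox code):

```python
import numpy as np

def apply_preprocessing(x, preprocessing=(0, 255)):
    """Mimics a (subtract, divide) preprocessing pair: the wrapper
    computes (x - mean) / std before forwarding inputs to the model."""
    mean, std = preprocessing
    return (np.asarray(x, dtype=np.float64) - mean) / std

image = np.array([0.0, 127.5, 255.0])  # challenge images live in [0, 255]
scaled = apply_preprocessing(image)    # the model now sees values in [0, 1]
# scaled -> [0.0, 0.5, 1.0]
```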
How is the score for an individual model, attack, image calculated?
We normalize the pixel values of the clean image and the adversarial to be between 0 and 1 and then take the L2 norm of the perturbation (adversarial - clean), treating the images as flat vectors.
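Concretely, that per-image perturbation size can be sketched as follows (a numpy illustration of the description above, with hypothetical names):

```python
import numpy as np

def perturbation_size(clean, adversarial):
    """L2 norm of the perturbation after rescaling pixel values from
    [0, 255] to [0, 1], with both images flattened into vectors."""
    diff = (np.asarray(adversarial, dtype=np.float64)
            - np.asarray(clean, dtype=np.float64)) / 255.0
    return np.linalg.norm(diff.ravel())

clean = np.zeros((2, 2))
adversarial = np.full((2, 2), 255.0)          # every pixel maximally perturbed
size = perturbation_size(clean, adversarial)  # sqrt(4) = 2.0
```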
Owner
- Name: Bethge Lab
- Login: bethgelab
- Kind: organization
- Location: Tübingen
- Website: http://bethgelab.org
- Repositories: 23
- Profile: https://github.com/bethgelab
Perceiving Neural Networks
GitHub Events
Total
Last Year
Committers
Last synced: almost 3 years ago
All Time
- Total Commits: 162
- Total Committers: 10
- Avg Commits per committer: 16.2
- Development Distribution Score (DDS): 0.673
Top Committers
| Name | Email | Commits |
|---|---|---|
| Behar Veliqi | b****i@B****l | 53 |
| Behar Veliqi | b****r@v****e | 43 |
| Jonas Rauber | g****t@j****e | 21 |
| S.P. Mohanty | s****1@g****m | 17 |
| Wieland Brendel | w****l@p****e | 13 |
| Jonas Rauber | j****r@u****m | 8 |
| Wieland Brendel | w****l@u****m | 3 |
| wielandbrendel | w****l@n****g | 2 |
| Behar Veliqi | b****i@B****x | 1 |
| Behar Veliqi | b****i@B****x | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 7 months ago
All Time
- Total issues: 35
- Total pull requests: 13
- Average time to close issues: 19 days
- Average time to close pull requests: 5 days
- Total issue authors: 16
- Total pull request authors: 3
- Average comments per issue: 3.0
- Average comments per pull request: 1.77
- Merged pull requests: 13
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- spMohanty (10)
- bveliqi (5)
- csy530216 (4)
- ivan-v-kush (2)
- Maaater (2)
- zzd1992 (2)
- yxchng (1)
- KCC13 (1)
- sun201711 (1)
- walegahaha (1)
- ZiangYan (1)
- blgnksy (1)
- erko (1)
- klensink (1)
- yxd117 (1)
Pull Request Authors
- bveliqi (9)
- jonasrauber (3)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 117 last-month (pypi)
- Total dependent packages: 0
- Total dependent repositories: 2
- Total versions: 18
- Total maintainers: 3
pypi.org: adversarial-vision-challenge
Tools for the NIPS Adversarial Vision Challenge. Includes an HTTP server and client that provides access to Foolbox models and attacks.
- Homepage: https://github.com/bethgelab/adversarial-vision-challenge
- Documentation: https://adversarial-vision-challenge.readthedocs.io/
- License: MIT
- Latest release: 1.0.5 (published over 7 years ago)
Rankings
Maintainers (3)
Dependencies
- crowdai_api >=0.1.1.dev17 development
- flake8 >=3.3.0 development
- pytest >=3.1.0 development
- pytest-cov >=2.5.1 development
- python-coveralls >=2.9.1 development
- GitPython *
- bson *
- crowdai-repo2docker *
- crowdai_api *
- flask *
- foolbox *
- future *
- numpy *
- packaging *
- pillow *
- pyyaml *
- requests *
- setuptools *
- tqdm *