attention-meets-perturbation

📝 Official Implementation of "Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training"

https://github.com/shunk031/attention-meets-perturbation

Science Score: 67.0%

This score indicates how likely this project is to be science-related, based on the following indicators (a hypothetical scoring sketch follows the list):

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 8 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.5%) to scientific vocabulary
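
The exact scoring formula is not documented here. As a rough, hypothetical illustration only, the sketch below treats each indicator as a boolean and averages equal weights; the indicator names and the weighting are illustrative assumptions, not the catalog's actual computation (the last indicator, vocabulary similarity, is in reality a continuous measure).

```python
# Hypothetical sketch of aggregating boolean science indicators into a score.
INDICATORS = [
    "citation_cff", "codemeta_json", "zenodo_json", "doi_references",
    "publication_links", "academic_emails", "institutional_owner",
    "joss_metadata", "vocabulary_similarity",
]


def science_score(found: set) -> float:
    """Percentage of indicators detected, equally weighted (an assumption)."""
    return 100.0 * sum(name in found for name in INDICATORS) / len(INDICATORS)


# Example: six of nine indicators detected -> ~66.7%.
print(science_score({"citation_cff", "codemeta_json", "zenodo_json",
                     "doi_references", "publication_links",
                     "institutional_owner"}))
```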

Keywords

adversarial-training allennlp attention-mechanism binary-classification ieee interpretability natural-language-inference natural-language-processing pytorch question-answering robustness
Last synced: 6 months ago

Repository

📝 Official Implementation of "Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training"

Basic Info
Statistics
  • Stars: 7
  • Watchers: 3
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Topics
adversarial-training allennlp attention-mechanism binary-classification ieee interpretability natural-language-inference natural-language-processing pytorch question-answering robustness
Created over 5 years ago · Last pushed over 4 years ago
Metadata Files
Readme License Citation

README.md

Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training

CoRR preprint arXiv:2009.12064 · IEEE Access · Demo Page · Build

[Figures: model · Figure 1]

Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training
Shunsuke Kitada and Hitoshi Iyatomi

  • Preprint: https://arxiv.org/abs/2009.12064
  • Accepted for publication in IEEE Access.

Abstract: Although attention mechanisms have been applied to a variety of deep learning models and have been shown to improve prediction performance, the attention mechanism itself has been reported to be vulnerable to perturbations. To overcome this vulnerability, we draw inspiration from adversarial training (AT), a powerful regularization technique for enhancing model robustness. In this paper, we propose a general training technique for natural language processing tasks, including AT for attention (Attention AT) and more interpretable AT for attention (Attention iAT). The proposed techniques improve prediction performance and model interpretability by applying AT to attention mechanisms. In particular, Attention iAT boosts those advantages by introducing adversarial perturbations that enhance the difference in the attention of the sentences. Evaluation experiments with ten open datasets revealed that AT for attention mechanisms, and especially Attention iAT, demonstrated (1) the best performance in nine out of ten tasks and (2) more interpretable attention (i.e., the resulting attention correlated more strongly with gradient-based word importance) for all tasks. Additionally, the proposed techniques are (3) much less dependent on the perturbation size used in AT.
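
To make the idea concrete, here is a minimal sketch (not the authors' implementation) of adversarial training applied to attention, in the spirit of Attention AT: compute the gradient of the task loss with respect to the unnormalized attention scores, add an epsilon-scaled, L2-normalized gradient as the worst-case perturbation, and optimize the clean and perturbed losses together. The pooling layer, function names, and scaling are illustrative assumptions; Attention iAT differs in how the perturbation is constructed (see the paper and the configs in this repo for the actual formulation).

```python
import torch
import torch.nn.functional as F


def attention_at_loss(scores, values, labels, classifier, epsilon=1.0):
    """Clean task loss plus the loss under an adversarial perturbation of the
    attention scores (a sketch of the Attention AT idea, not the paper's code).

    scores:     (batch, seq_len) unnormalized attention scores; must be part
                of the model's autograd graph.
    values:     (batch, seq_len, dim) token representations to pool.
    classifier: any module mapping (batch, dim) -> (batch, num_classes).
    """
    # Clean forward pass: softmax attention pooling, then classification.
    attn = F.softmax(scores, dim=-1)
    context = torch.einsum("bs,bsd->bd", attn, values)
    clean_loss = F.cross_entropy(classifier(context), labels)

    # Gradient of the clean loss w.r.t. the attention scores.
    (grad,) = torch.autograd.grad(clean_loss, scores, retain_graph=True)

    # Worst-case direction: L2-normalized gradient scaled by epsilon,
    # detached so the perturbation is a constant during backpropagation.
    grad = grad.detach()
    r_adv = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)

    # Adversarial forward pass on the perturbed scores.
    attn_adv = F.softmax(scores + r_adv, dim=-1)
    context_adv = torch.einsum("bs,bsd->bd", attn_adv, values)
    adv_loss = F.cross_entropy(classifier(context_adv), labels)

    return clean_loss + adv_loss
```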

Install and Run the experiments

Python 3.7 · Code style: black · Powered by AllenNLP

Install requirements

```shell
pip install -U pip poetry setuptools
poetry install
```

Prepare for spaCy

```shell
python -m spacy download en
```

Prepare dataset for the experiments

  • for all datasets

```shell
allennlp make-dataset all
```

  • for a specific dataset (e.g., SST)

```shell
allennlp make-dataset sst
```

Run training models

  • for binary classification (BC) tasks

```shell
# for SST
CUDA_VISIBLE_DEVICES=0 GPU=0 allennlp train \
    config/sst/train.jsonnet \
    -s output/sst/weighted

# for Newsgroups
CUDA_VISIBLE_DEVICES=0 GPU=0 allennlp train \
    config/newsgroups/train.jsonnet \
    -s output/newsgroups/weighted

# for IMDB
CUDA_VISIBLE_DEVICES=0 GPU=0 allennlp train \
    config/imdb/train.jsonnet \
    -s output/imdb/weighted

# for AGNews
CUDA_VISIBLE_DEVICES=0 GPU=0 allennlp train \
    config/ag_news/train.jsonnet \
    -s output/ag_news/weighted
```

  • for question answering (QA) tasks

```shell
# for CNN
CUDA_VISIBLE_DEVICES=0 GPU=0 allennlp train \
    config/cnn/train.jsonnet \
    -s output/cnn/vanilla
```

  • for natural language inference (NLI) tasks

```shell
# for SNLI
CUDA_VISIBLE_DEVICES=0 GPU=0 allennlp train \
    config/snli/train.jsonnet \
    -s output/snli/vanilla
```
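
After training, `allennlp train` writes a `model.tar.gz` archive under the directory passed to `-s`. The sketch below loads such an archive with the AllenNLP 1.1 Python API; the placeholder package name and the `text_classifier` predictor choice are assumptions, since this repository registers its own custom readers and models.

```python
# Hedged sketch: load a trained archive and run a single prediction.
from allennlp.common.util import import_module_and_submodules
from allennlp.models.archival import load_archive
from allennlp.predictors import Predictor

# Hypothetical package name: replace with the module that registers this
# repo's custom dataset readers and models.
import_module_and_submodules("your_repo_package")

archive = load_archive("output/sst/weighted/model.tar.gz", cuda_device=-1)
predictor = Predictor.from_archive(archive, predictor_name="text_classifier")
print(predictor.predict(sentence="A gripping, beautifully shot film."))
```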

Citation

If you find this code or idea useful, please cite it as below.

```bibtex
@article{kitada2020attention,
  title   = {Attention Meets Perturbations: Robust and Interpretable Attention with Adversarial Training},
  author  = {Shunsuke Kitada and Hitoshi Iyatomi},
  journal = {IEEE Access},
  year    = {2021},
  volume  = {9},
  pages   = {92974-92985},
  doi     = {10.1109/ACCESS.2021.3093456}
}
```

Reference

  • S. Kitada and H. Iyatomi, "Attention Meets Perturbations: Robust and Interpretable Attention With Adversarial Training," in IEEE Access, vol. 9, pp. 92974-92985, 2021, doi: 10.1109/ACCESS.2021.3093456.

Owner

  • Name: Shunsuke KITADA
  • Login: shunk031
  • Kind: user
  • Location: Japan / Tokyo
  • Company: Hosei Univ. @IyatomiLab

Ph.D. student working on deep learning-based natural language processing, computer vision, and computational advertising.

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you find this code or idea useful, please cite it as below."
authors:
- family-names: "Kitada"
  given-names: "Shunsuke"
  orcid: "https://orcid.org/0000-0002-3330-8779"
- family-names: "Iyatomi"
  given-names: "Hitoshi"
  orcid: "https://orcid.org/0000-0003-4108-4178"
title: "Attention Meets Perturbations: Robust and Interpretable Attention With Adversarial Training"
version: 1.0.0
doi: 10.1109/ACCESS.2021.3093456
date-released: 2021-06-29
url: "https://github.com/shunk031/attention-meets-perturbation"

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 0
  • Total pull requests: 12
  • Average time to close issues: N/A
  • Average time to close pull requests: 21 minutes
  • Total issue authors: 0
  • Total pull request authors: 1
  • Average comments per issue: 0
  • Average comments per pull request: 0.0
  • Merged pull requests: 12
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
Pull Request Authors
  • shunk031 (12)

Dependencies

docs/go.mod go
  • github.com/wowchemy/wowchemy-hugo-modules/wowchemy v0.0.0-20200902195927-86da39719ccd
docs/go.sum go
  • github.com/wowchemy/wowchemy-hugo-modules/wowchemy v0.0.0-20200902195927-86da39719ccd
poetry.lock pypi
  • appnope 0.1.0 develop
  • backcall 0.2.0 develop
  • black 20.8b1 develop
  • decorator 4.4.2 develop
  • flake8 3.8.3 develop
  • ipython 7.18.1 develop
  • ipython-genutils 0.2.0 develop
  • isort 5.5.1 develop
  • jedi 0.17.2 develop
  • mccabe 0.6.1 develop
  • mypy-extensions 0.4.3 develop
  • parso 0.7.1 develop
  • pathspec 0.8.0 develop
  • pexpect 4.8.0 develop
  • pickleshare 0.7.5 develop
  • prompt-toolkit 3.0.7 develop
  • ptyprocess 0.6.0 develop
  • pycodestyle 2.6.0 develop
  • pyflakes 2.2.0 develop
  • traitlets 5.0.4 develop
  • typed-ast 1.4.1 develop
  • typing-extensions 3.7.4.3 develop
  • wcwidth 0.2.5 develop
  • allennlp 1.1.0
  • appdirs 1.4.4
  • atomicwrites 1.4.0
  • attrs 20.2.0
  • blessings 1.7
  • blis 0.4.1
  • boto3 1.14.58
  • botocore 1.17.58
  • catalogue 1.0.0
  • certifi 2020.6.20
  • chardet 3.0.4
  • click 7.1.2
  • codecov 2.1.9
  • colorama 0.4.3
  • coloredlogs 14.0
  • colour-runner 0.1.1
  • coverage 5.2.1
  • cymem 2.0.3
  • deepdiff 5.0.2
  • distlib 0.3.1
  • docutils 0.15.2
  • filelock 3.0.12
  • future 0.18.2
  • h5py 2.10.0
  • humanfriendly 8.2
  • idna 2.10
  • importlib-metadata 1.7.0
  • iniconfig 1.0.1
  • jmespath 0.10.0
  • joblib 0.16.0
  • jsonnet 0.16.0
  • jsonpickle 1.4.1
  • more-itertools 8.5.0
  • murmurhash 1.0.2
  • nltk 3.5
  • numpy 1.19.1
  • ordered-set 4.0.2
  • overrides 3.1.0
  • packaging 20.4
  • pandas 1.1.2
  • plac 1.1.3
  • pluggy 0.13.1
  • preshed 3.0.2
  • protobuf 3.13.0
  • py 1.9.0
  • pygments 2.6.1
  • pyparsing 2.4.7
  • pyreadline 2.1
  • pytest 6.0.1
  • python-dateutil 2.8.1
  • pytz 2020.1
  • regex 2020.7.14
  • requests 2.24.0
  • rootpath 0.1.1
  • s3transfer 0.3.3
  • sacremoses 0.0.43
  • scikit-learn 0.23.2
  • scipy 1.5.2
  • sentencepiece 0.1.91
  • six 1.15.0
  • spacy 2.3.2
  • srsly 1.0.2
  • tensorboardx 2.1
  • termcolor 1.1.0
  • thinc 7.4.1
  • threadpoolctl 2.1.0
  • tokenizers 0.8.1rc1
  • toml 0.10.1
  • torch 1.6.0
  • tox 3.20.0
  • tqdm 4.48.2
  • transformers 3.0.2
  • urllib3 1.25.10
  • virtualenv 20.0.31
  • wasabi 0.8.0
  • zipp 3.1.0
pyproject.toml pypi
  • black ^20.8b1 develop
  • flake8 ^3.8.3 develop
  • ipython ^7.18.1 develop
  • isort ^5.5.1 develop
  • allennlp ^1.1.0
  • pandas ^1.1.2
  • python ^3.7
  • rootpath ^0.1.1