Foolbox Native
Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX - Published in JOSS (2020)
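To make concrete the kind of attack such benchmarks run, here is a minimal FGSM sketch on a toy logistic model. This is an illustrative example, not the Foolbox API; the model, weights, and `fgsm` helper are all invented for the sketch.

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """FGSM sketch on a toy logistic model p = sigmoid(w.x + b).

    Perturbs x in the direction of the sign of the loss gradient,
    under an L-inf budget eps, then clips back to the valid [0, 1] range.
    """
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))
    grad_x = (p - y) * w  # d(binary cross-entropy)/dx for this model
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

x = np.array([0.2, 0.8, 0.5])
w = np.array([1.5, -2.0, 0.7])
x_adv = fgsm(x, y=1.0, w=w, b=0.1, eps=0.1)
```

For the true label `y=1.0`, the perturbation moves the model's score away from the correct class while staying within the `eps` budget per coordinate.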
https://github.com/cv-stuttgart/pcfa
[ECCV 2022 Oral] Source code for "A Perturbation-Constrained Adversarial Attack for Evaluating the Robustness of Optical Flow"
AdvBox
AdvBox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow, and for benchmarking the robustness of machine learning models. AdvBox also provides a command-line tool for generating adversarial examples with zero coding.
https://github.com/cpwan/attack-adaptive-aggregation-in-federated-learning
This is the code for our paper "Robust Federated Learning with Attack-Adaptive Aggregation", accepted at FTL-IJCAI'21.
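To illustrate the general idea of attack-adaptive aggregation, here is a hedged sketch (not the paper's exact method): each client's update is weighted by its proximity to the coordinate-wise median, so outlier updates from potentially malicious clients contribute less. The `adaptive_aggregate` function and its `temperature` parameter are assumptions made for this sketch.

```python
import numpy as np

def adaptive_aggregate(updates, temperature=1.0):
    """Weight client updates by closeness to the coordinate-wise median."""
    updates = np.asarray(updates)        # shape: (n_clients, n_params)
    median = np.median(updates, axis=0)  # robust reference point
    dists = np.linalg.norm(updates - median, axis=1)
    weights = np.exp(-dists / temperature)  # closer to median -> larger weight
    weights /= weights.sum()
    return weights @ updates

honest = [np.array([1.0, 1.0]), np.array([1.1, 0.9]), np.array([0.9, 1.1])]
malicious = np.array([10.0, -10.0])  # a poisoned update far from the rest
agg = adaptive_aggregate(honest + [malicious])
```

Unlike a plain mean, which the single poisoned update drags far from the honest consensus, the weighted aggregate stays close to it.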
packet_captor_sakura
Research code for "Improving Meek With Adversarial Techniques"
adv-lib
Library containing PyTorch implementations of various adversarial attacks and resources
https://github.com/cgcl-codes/transferattacksurrogates
The official code of the IEEE S&P 2024 paper "Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability". We study how to train surrogate models for boosting transfer attacks.
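The transfer setting above can be sketched with two linear classifiers: a perturbation crafted against a surrogate model also moves a similar target model toward misclassification. This is an illustration of the transfer-attack idea, not the paper's method; the classifiers and the `fgsm_linear` helper are invented for the example.

```python
import numpy as np

def fgsm_linear(x, y_sign, w, eps):
    """One FGSM step against a linear score w.x for a label of sign y_sign."""
    # For a linear score, the loss gradient direction w.r.t. x is y_sign * w.
    return x - eps * y_sign * np.sign(w)

w_surrogate = np.array([1.0, -2.0, 0.5])  # attacker's surrogate classifier
w_target = np.array([0.9, -1.8, 0.6])     # unseen target with a similar boundary
x = np.array([0.4, 0.1, 0.7])             # classified positive by both models

# Craft the perturbation using only the surrogate's weights.
x_adv = fgsm_linear(x, y_sign=1.0, w=w_surrogate, eps=0.3)
```

Because the two decision boundaries are aligned, the adversarial example crafted on the surrogate also flips the target's prediction, which is the phenomenon surrogate training aims to strengthen.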
cvpr22w_robustnessthroughthelens
Official repository of our submission "Adversarial Robustness through the Lens of Convolutional Filters" for the CVPR 2022 Workshop "The Art of Robustness: Devil and Angel in Adversarial Machine Learning"
https://github.com/alfa-group/claw-sat
[SANER 2023] "CLAWSAT: Towards Both Robust and Accurate Code Models" by Jinghan Jia*, Shashank Srikant*, Tamara Mitrovska, Chuang Gan, Shiyu Chang, Sijia Liu, Una-May O'Reilly