Code relative to "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
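The paper behind this repo (AutoAttack) evaluates robustness with an ensemble of diverse attacks, counting an example as robust only if every attack in the ensemble fails on it. A minimal sketch of that per-example worst-case aggregation, using toy stand-in attacks and a toy classifier (not the actual AutoAttack implementation):

```python
# Sketch of worst-case aggregation over an ensemble of attacks, in the spirit
# of AutoAttack: an example counts as robust only if *every* attack fails.
# The "model" and "attacks" below are illustrative stand-ins.

def robust_accuracy(model, attacks, examples):
    """Fraction of examples on which no attack in the ensemble succeeds."""
    robust = 0
    for x, y in examples:
        # Robust only if the model stays correct under all attacks.
        if all(model(attack(x)) == y for attack in attacks):
            robust += 1
    return robust / len(examples)

# Toy 1-D classifier: sign of the input.
model = lambda x: 1 if x > 0 else -1

# Two toy "attacks" shifting the input within eps = 0.5.
attacks = [lambda x: x - 0.5, lambda x: x + 0.5]

examples = [(2.0, 1), (0.3, 1), (-1.0, -1)]
acc = robust_accuracy(model, attacks, examples)  # only 0.3 is flipped
```

Reporting the per-example worst case over several diverse attacks is what makes the resulting robust-accuracy number hard to game with a defense that only blocks one attack style.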
RobustBench: a standardized adversarial robustness benchmark [NeurIPS'21 Benchmarks and Datasets Track]
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)
EasyRobust: an easy-to-use library for state-of-the-art robust computer vision research with PyTorch.
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
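Square Attack needs only black-box (score) access: it repeatedly proposes a localized random perturbation and keeps it only when the loss increases. A greedy random-search sketch of that idea in pure Python (the real algorithm perturbs square-shaped image patches with a size schedule; this toy version flips one coordinate of a vector, names and toy loss are illustrative):

```python
import random

def random_search_attack(loss_fn, x, eps, n_queries=200, seed=0):
    """Greedy random search on the L_inf ball of radius eps around x."""
    rng = random.Random(seed)
    x_adv = list(x)
    best = loss_fn(x_adv)
    for _ in range(n_queries):
        i = rng.randrange(len(x))                      # pick one coordinate ("square")
        candidate = list(x_adv)
        candidate[i] = x[i] + rng.choice([-eps, eps])  # stay on the L_inf ball
        loss = loss_fn(candidate)                      # one black-box query
        if loss > best:                                # keep only improvements
            best, x_adv = loss, candidate
    return x_adv, best

# Toy loss: negative score of a linear classifier w.x (higher = worse for w).
w = [1.0, -2.0, 0.5]
loss = lambda z: -sum(wi * zi for wi, zi in zip(w, z))

x0 = [0.2, 0.1, -0.3]
x_adv, final_loss = random_search_attack(loss, x0, eps=0.1)
```

Because each query is accepted only when the loss improves, the attack is monotone in the number of queries and never leaves the eps-ball, which is what makes this family of attacks query-efficient in practice.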
[TPAMI2022 & NeurIPS2020] Official implementation of Self-Adaptive Training
Lipschitz Neural Networks described in "Sorting Out Lipschitz Function Approximation" (ICML 2019).
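"Sorting Out Lipschitz Function Approximation" introduces the GroupSort activation: split the pre-activations into groups and sort each group. Sorting only permutes its inputs, so the activation is 1-Lipschitz and gradient-norm preserving. A minimal pure-Python sketch (illustrative, not the paper's implementation):

```python
def groupsort(x, group_size=2):
    """GroupSort activation: sort each consecutive group of group_size entries.

    Sorting is a permutation of its inputs, so the map is 1-Lipschitz in
    every p-norm. With group_size=2 it reduces to (min, max) pairs, also
    known as MaxMin.
    """
    assert len(x) % group_size == 0, "input length must be divisible by group size"
    out = []
    for i in range(0, len(x), group_size):
        out.extend(sorted(x[i:i + group_size]))
    return out

groupsort([3.0, -1.0, 0.5, 2.0])  # pairs (3, -1) and (0.5, 2), each sorted
```

Combined with norm-constrained weight matrices, such activations let the whole network keep a certified Lipschitz bound without sacrificing expressive power the way ReLU does.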
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
[CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
PyTorch implementation of our NeurIPS'20 *Oral* paper "DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles".
Feature Scattering Adversarial Training (NeurIPS19)
Implementing the algorithm from our paper: "A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning".
Unofficial implementation of the DeepMind papers "Uncovering the Limits of Adversarial Training against Norm-Bounded Adversarial Examples" & "Fixing Data Augmentation to Improve Adversarial Robustness" in PyTorch
Contact: Alexander Hartl, Maximilian Bachl, Fares Meghdouri. Explainability methods and Adversarial Robustness metrics for RNNs for Intrusion Detection Systems. Also contains code for "SparseIDS: Learning Packet Sampling with Reinforcement Learning" (branch "rl").
Contains notebooks for the PAR tutorial at CVPR 2021.
[ICLR 2022] "Patch-Fool: Are Vision Transformers Always Robust Against Adversarial Perturbations?" by Yonggan Fu, Shunyao Zhang, Shang Wu, Cheng Wan, Yingyan Lin
[CVPR 2022] "Aug-NeRF: Training Stronger Neural Radiance Fields with Triple-Level Physically-Grounded Augmentations" by Tianlong Chen*, Peihao Wang*, Zhiwen Fan, Zhangyang Wang
[ICML 2021] Official GitHub repo for training L_inf-dist nets with high certified accuracy.