👀🛡️ Code for the paper “Carefully Blending Adversarial Training and Purification Improves Adversarial Robustness” by Emanuele Ballarin, Alessio Ansuini and Luca Bortolussi (2024)
alpha-beta-CROWN: An Efficient, Scalable and GPU Accelerated Neural Network Verifier (winner of VNN-COMP 2021, 2022, and 2023)
Code for the paper "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
The all-in-one tool for comprehensive experimentation with adversarial attacks on image recognition.
Official repository for the paper: "On Adversarial Training without Perturbing all Examples", Accepted at ICLR 2024
Characterizing Data Point Vulnerability via Average-Case Robustness, UAI 2024
Uses the simplex to derive a tighter bound on l1 perturbations of networks with convex activation functions, improving the CROWN algorithm.
RobustBench: a standardized adversarial robustness benchmark [NeurIPS'21 Benchmarks and Datasets Track]
Code for "Adversarially Robust Spiking Neural Networks Through Conversion" [TMLR 2024]
EasyRobust: an Easy-to-use library for state-of-the-art Robust Computer Vision Research with PyTorch.
Revisiting Residual Networks for Adversarial Robustness: An Architectural Perspective
[ICML 2023] "NeRFool: Uncovering the Vulnerability of Generalizable Neural Radiance Fields against Adversarial Perturbations" by Yonggan Fu, Ye Yuan, Souvik Kundu, Shang Wu, Shunyao Zhang, Yingyan (Celine) Lin
MixedNUTS: Training-Free Accuracy-Robustness Balance via Nonlinearly Mixed Classifiers
Adversarial Attack and Defense in Deep Ranking, T-PAMI, 2024
Implementation of the paper "Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing".
[Pattern Recognition 2024] Towards Robust Neural Networks via Orthogonal Diversity
Decoupled Kullback-Leibler Divergence Loss (DKL)
PyTorch implementation of adversarial training and defenses [Fantastic Robustness Measures: The Secrets of Robust Generalization, NeurIPS 2023].
Extending Sparse Dictionary Learning Methods for Adversarial Robustness
Python implementation of the paper "Causal Fair Metric: Bridging Causality, Individual Fairness, and Adversarial Robustness".