Random Projections for improved Adversarial Robustness
Updated Apr 11, 2020 - Jupyter Notebook
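For context on the headline topic: a random projection maps inputs into a random low-dimensional subspace, which several robustness works use as a cheap, non-differentiable preprocessing step. The sketch below is a minimal Gaussian (Johnson-Lindenstrauss style) projection; the function name and dimensions are illustrative assumptions, not code from the repository above.

```python
import numpy as np

def random_projection(x, out_dim, rng):
    """Project row vectors into a random low-dimensional subspace with a
    Gaussian matrix (Johnson-Lindenstrauss style). Scaling by
    1/sqrt(out_dim) roughly preserves Euclidean norms in expectation."""
    in_dim = x.shape[-1]
    proj = rng.normal(size=(in_dim, out_dim)) / np.sqrt(out_dim)
    return x @ proj

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 784))      # e.g. a batch of flattened images
z = random_projection(x, 128, rng)  # 784-d features -> 128-d features
assert z.shape == (32, 128)
```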
Lipschitz Neural Networks described in "Sorting Out Lipschitz Function Approximation" (ICML 2019).
[Partial] RADLER: (adversarially) Robust Adversarial Distributional LEaRner
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
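The Square Attack entry describes a query-efficient, gradient-free attack driven by random search. The sketch below captures that core loop under stated assumptions: it perturbs one random square patch per query and keeps the change only if the loss increases. The `loss_fn` callable, patch-sampling details, and shapes are illustrative simplifications, not the paper's exact schedule.

```python
import numpy as np

def square_attack(x, loss_fn, eps=0.05, n_iters=100, p=0.1, rng=None):
    """Gradient-free random search in the spirit of Square Attack:
    resample one random square patch per query, accept only if the
    scalar loss increases. Every pixel stays within eps of x."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w, c = x.shape
    # Initialize with vertical +/-eps stripes, clipped to the valid range.
    x_adv = np.clip(x + rng.choice([-eps, eps], size=(1, w, c)), 0.0, 1.0)
    best = loss_fn(x_adv)
    for _ in range(n_iters):
        s = max(1, int(round(np.sqrt(p * h * w))))   # square side length
        r = rng.integers(0, h - s + 1)
        col = rng.integers(0, w - s + 1)
        cand = x_adv.copy()
        # Overwrite the chosen square with a fresh +/-eps color per channel,
        # relative to the clean image, so the l-inf budget is never exceeded.
        cand[r:r + s, col:col + s] = np.clip(
            x[r:r + s, col:col + s] + rng.choice([-eps, eps], size=(1, 1, c)),
            0.0, 1.0)
        loss = loss_fn(cand)
        if loss > best:                              # accept only improvements
            x_adv, best = cand, loss
    return x_adv

# Toy demo with a hypothetical loss: push pixels away from mid-gray.
rng = np.random.default_rng(0)
x0 = rng.uniform(size=(16, 16, 3))
adv = square_attack(x0, lambda im: float(np.sum((im - 0.5) ** 2)), rng=rng)
assert np.max(np.abs(adv - x0)) <= 0.05 + 1e-9
```

Because acceptance only requires the loss to increase, the loop needs one model query per iteration and no gradients, which is what makes the approach black-box and query-efficient.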
Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness. (MD attacks)
📕 Adversarial Attacks and Defenses for Image-Based Recommendation Systems using Deep Neural Networks.
Contact: Alexander Hartl, Maximilian Bachl, Fares Meghdouri. Explainability methods and Adversarial Robustness metrics for RNNs for Intrusion Detection Systems. Also contains code for "SparseIDS: Learning Packet Sampling with Reinforcement Learning" (branch "rl").
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
Connecting Interpretability and Robustness in Decision Trees through Separation
Feature Scattering Adversarial Training (NeurIPS19)
[NeurIPS2020] The official repository of "Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks".
LAFEAT: Piercing Through Adversarial Defenses with Latent Features (CVPR 2021 Oral)
Contains notebooks for the PAR tutorial at CVPR 2021.
[TPAMI2022 & NeurIPS2020] Official implementation of Self-Adaptive Training
[ICLR 2020] "Triple Wins: Boosting Accuracy, Robustness and Efficiency Together by Enabling Input-Adaptive Inference"
[ICML 2021 Long Talk] "Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm" by Mingkang Zhu, Tianlong Chen, Zhangyang Wang
[ICLR 2021] "Robust Overfitting may be mitigated by properly learned smoothening" by Tianlong Chen*, Zhenyu Zhang*, Sijia Liu, Shiyu Chang, Zhangyang Wang
[CVPR 2020] Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning
[ECCV 2020 AROW Workshop] A Deep Dive into Adversarial Robustness in Zero-Shot Learning