This project implements the paper “Causal Fair Metric: Bridging Causality, Individual Fairness, and Adversarial Robustness” in Python.
Uses the simplex to derive a tighter bound on l1 perturbations of networks with convex activation functions, improving the effectiveness of the CROWN algorithm.
Code for "Adversarially Robust Spiking Neural Networks Through Conversion" [TMLR 2024]
Official repository for the paper: "On Adversarial Training without Perturbing all Examples", Accepted at ICLR 2024
[Partial] RADLER: (adversarially) Robust Adversarial Distributional LEaRner
'Robust Deepfake Detection' project for the Deep Learning course at ETH Zurich, 2021. Authors (alphabetic): David Kamm, Nicolas Muntwyler, Alexander Timans, Moritz Vandenhirtz.
Imbalanced Gradients: A New Cause of Overestimated Adversarial Robustness. (MD attacks)
[TMLR 22] "Queried Unlabeled Data Improves and Robustifies Class-Incremental Learning" by Tianlong Chen, Sijia Liu, Shiyu Chang, Lisa Animi, Zhangyang Wang
An extension of the PuVAE architecture for adversarial robustness
Characterizing Data Point Vulnerability via Average-Case Robustness, UAI 2024
[ICLR 2022] Boosting Randomized Smoothing with Variance Reduced Classifiers
[ICML 2022] "Data-Efficient Double-Win Lottery Tickets from Robust Pre-training" by Tianlong Chen, Zhenyu Zhang, Sijia Liu, Yang Zhang, Shiyu Chang, Zhangyang Wang
Random Projections for improved Adversarial Robustness
This repo implements our paper, "Fault-Tolerant Federated Reinforcement Learning with Theoretical Guarantee", which has been accepted at NeurIPS 2021.
[ECCV 2020 AROW Workshop] A Deep Dive into Adversarial Robustness in Zero-Shot Learning
[SRML@ICLR 2022] Robust and Accurate -- Compositional Architectures for Randomized Smoothing
Implementation of "Overcoming Adversarial Attacks for HITL Applications"
Nearest Category Generalization
[Pattern Recognition 2024] "Towards Robust Neural Networks via Orthogonal Diversity"
[SANER 2023] "CLAWSAT: Towards Both Robust and Accurate Code Models" by Jinghan Jia*, Shashank Srikant*, Tamara Mitrovska, Chuang Gan, Shiyu Chang, Sijia Liu, Una-May O'Reilly
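Several of the entries above build on randomized smoothing. As a rough sketch of the core idea only (the function names and the toy base classifier below are illustrative, not taken from any of the listed repositories): a smoothed classifier predicts by majority vote of a base classifier over Gaussian perturbations of the input, which is what enables certified robustness guarantees.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, seed=None):
    """Majority vote of `base_classifier` under Gaussian input noise —
    the prediction rule at the heart of randomized smoothing."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + x.shape)
    votes = np.bincount([base_classifier(x + n) for n in noise])
    return int(np.argmax(votes))

# Toy base classifier (illustrative): class 1 iff the first coordinate is positive.
clf = lambda z: int(z[0] > 0)
x = np.array([0.5, -1.0])
print(smoothed_predict(clf, x, sigma=0.25, n_samples=500, seed=0))  # prints 1
```

With sigma = 0.25 and x[0] = 0.5, roughly 98% of noisy samples keep a positive first coordinate, so the vote is stable at class 1; certification methods (e.g. the variance-reduced classifiers in the ICLR 2022 entry above) bound how far x can be perturbed before that vote can flip.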