@AI-secure

AI Secure

UIUC Secure Learning Lab

Popular repositories

  1. DecodingTrust (Public)

    A Comprehensive Assessment of Trustworthiness in GPT Models

    Python · 213 stars · 48 forks

  2. DBA (Public)

    DBA: Distributed Backdoor Attacks against Federated Learning (ICLR 2020)

    Python · 166 stars · 45 forks

  3. Certified-Robustness-SoK-Oldver (Public)

    This repo keeps track of popular provable training and verification approaches towards robust neural networks, including leaderboards on popular datasets and paper categorization.

    99 stars · 10 forks

  4. VeriGauge (Public)

    A unified toolbox for running major robustness verification approaches for DNNs. [S&P 2023]

    C · 87 stars · 6 forks

  5. InfoBERT (Public)

    [ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu

    Python · 82 stars · 7 forks

  6. CRFL (Public)

    CRFL: Certifiably Robust Federated Learning against Backdoor Attacks (ICML 2021)

    Python · 66 stars · 15 forks

Repositories

Showing 10 of 51 repositories
  • DecodingTrust (Public)

    A Comprehensive Assessment of Trustworthiness in GPT Models

    Python · 213 stars · CC-BY-SA-4.0 license · 48 forks · 9 issues · 1 PR · Updated May 8, 2024
  • VFL-ADMM (Public)

    Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM (SaTML 2024)

    0 stars · Apache-2.0 license · 0 forks · 0 issues · 0 PRs · Updated Mar 20, 2024
  • aug-pe (Public)

    Differentially Private Synthetic Data via Foundation Model APIs 2: Text

    Python · 16 stars · Apache-2.0 license · 2 forks · 1 issue · 0 PRs · Updated Mar 14, 2024
  • DPFL-Robustness (Public)

    [CCS 2023] Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks

    Python · 5 stars · 0 forks · 0 issues · 0 PRs · Updated Feb 15, 2024
  • hf-blog (Public, forked from huggingface/blog)

    Public repo for HF blog posts

    Jupyter Notebook · 0 stars · 637 forks · 0 issues · 0 PRs · Updated Jan 26, 2024
  • helm (Public, forked from stanford-crfm/helm)

    Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110).

    Python · 0 stars · Apache-2.0 license · 229 forks · 0 issues · 1 PR · Updated Jan 7, 2024
  • Python · 0 stars · 0 forks · 0 issues · 0 PRs · Updated Dec 25, 2023
  • TextGuard (Public)

    TextGuard: Provable Defense against Backdoor Attacks on Text Classification

    Python · 5 stars · 0 forks · 0 issues · 0 PRs · Updated Nov 7, 2023
  • InfoBERT (Public)

    [ICLR 2021] "InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective" by Boxin Wang, Shuohang Wang, Yu Cheng, Zhe Gan, Ruoxi Jia, Bo Li, Jingjing Liu

    Python · 82 stars · 7 forks · 0 issues · 0 PRs · Updated Oct 25, 2023
  • FedGame (Public)

    Official implementation for paper "FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning" (NeurIPS 2023).

    2 stars · 0 forks · 0 issues · 0 PRs · Updated Oct 12, 2023
