
Hi there 👋

My full background can be found in my CV.

News

2024/02: Our paper, An analysis of the Correlation between Generated Sentences of Masked Language Models and Original Training Data Distribution, which received an oral presentation at KCS 2023, has been published online. Congratulations!

2024/02: We have submitted our paper, Amplifying Training Data Exposure through Fine-Tuning with Pseudo-Labeled Memberships, to an anonymous AI conference, where it is currently under review.

2024/01: We have submitted our paper, Adversarial Feature Alignment: Balancing Robustness and Accuracy in Deep Learning via Adversarial Training, to an anonymous security conference, where it is currently under review.

2023/11: We're pleased to announce that our paper, CIA-based Analysis of LLM Alignment in Information Security, has been selected for an oral presentation at CISC-W'23. Congratulations!

Pinned

  1. seclab-yonsei/amplifying-exposure (Public)

     Python

  2. seclab-yonsei/mia-ko-lm (Public)

     Performing membership inference attacks (MIA) against Korean language models (LMs).

     Python

  3. evasion_attack (Public)

     Training, inference, and evaluation of speaker identification and verification models, along with evasion attacks (FGSM, PGD) against them.

     Python

  4. gas (Public archive)

     Python