
Hi there, I am Hanxun Huang (Curtis) 👋

I am a research fellow at the School of Computing and Information Systems, The University of Melbourne. I completed my Ph.D. at The University of Melbourne, supervised by Prof. James Bailey, Dr. Xingjun Ma and Dr. Sarah Erfani. Prior to my Ph.D., I completed my Master's degree at The University of Melbourne and my Bachelor's degree at Purdue University.

🔭 My research mainly focuses on:

  • Adversarial Attacks and Defenses
  • Robust Machine Learning
  • Trustworthy Machine Learning

Contact me 📧

Cheers 🍻

Pinned

  1. LDReg (Python)

     [ICLR2024] LDReg: Local Dimensionality Regularized Self-Supervised Learning

  2. MDAttack (Python)

     [Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness

  3. CognitiveDistillation (Python)

     [ICLR2023] Distilling Cognitive Backdoor Patterns within an Image

  4. RobustWRN (Python)

     [NeurIPS2021] Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks

  5. Unlearnable-Examples (Python)

     [ICLR2021] Unlearnable Examples: Making Personal Data Unexploitable

  6. Active-Passive-Losses (Python)

     [ICML2020] Normalized Loss Functions for Deep Learning with Noisy Labels