Jihunlee326/InterpretableAI


Interpretable AI

Interpretable AI with Safeguard AI

MODULABS

  • Week1 : A BASELINE FOR DETECTING MISCLASSIFIED AND OUT-OF-DISTRIBUTION EXAMPLES IN NEURAL NETWORKS
  • Week2 : ENHANCING THE RELIABILITY OF OUT-OF-DISTRIBUTION IMAGE DETECTION IN NEURAL NETWORKS
  • Week3 : Safeguard AI & Surprise-Based Learning (SBL) Seminar
  • Week4 :
    1. TRAINING CONFIDENCE-CALIBRATED CLASSIFIERS FOR DETECTING OUT-OF-DISTRIBUTION SAMPLES
    2. Uncertainty-Aware Learning from Demonstration Using Mixture Density Networks with Sampling-Free Variance Modeling
  • Week5 :
    1. PREDICTION UNDER UNCERTAINTY WITH ERROR-ENCODING NETWORKS
    2. What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
  • Week6 : Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
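As a minimal sketch of the Week 1 baseline (scoring inputs by maximum softmax probability, where misclassified and out-of-distribution examples tend to score lower), with the temperature-scaling knob that Week 2's ODIN builds on. The function names and the NumPy formulation are illustrative, not taken from this repository's code:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature scaling of the logits (T > 1 flattens the distribution;
    # ODIN tunes T to better separate in- from out-of-distribution inputs).
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits, temperature=1.0):
    """Maximum softmax probability: higher means more likely in-distribution."""
    return softmax(logits, temperature).max(axis=-1)

# A confident prediction scores near 1; a near-uniform one scores near 1/K.
confident = np.array([10.0, 0.0, 0.0])
uncertain = np.array([1.0, 0.9, 1.1])
assert msp_score(confident) > msp_score(uncertain)
```

Thresholding this score gives the detector studied in Week 1; the later weeks replace or augment it with calibrated training, density models, and Bayesian uncertainty estimates.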

About

Interpretable AI with Safeguard AI (paper study, implementation code review)
