Interpretable AI with Safeguard AI
MODULABS
- Week 1: A BASELINE FOR DETECTING MISCLASSIFIED AND OUT-OF-DISTRIBUTION EXAMPLES IN NEURAL NETWORKS
- Week 2: ENHANCING THE RELIABILITY OF OUT-OF-DISTRIBUTION IMAGE DETECTION IN NEURAL NETWORKS
- Week 3: Safeguard AI & Surprise-Based Learning (SBL) Seminar
- Week 4:
- TRAINING CONFIDENCE-CALIBRATED CLASSIFIERS FOR DETECTING OUT-OF-DISTRIBUTION SAMPLES
- Uncertainty-Aware Learning from Demonstration Using Mixture Density Networks with Sampling-Free Variance Modeling
- Week 5:
    - PREDICTION UNDER UNCERTAINTY WITH ERROR ENCODING NETWORKS
- What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
- Week 6: Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning