_August 2019_

tl;dr: Self-paced learning based on homoscedastic uncertainty.

#### Overall impression

The paper spends considerable mathematical effort deriving the multitask loss formulation from the idea of maximizing the Gaussian likelihood under homoscedastic uncertainty. Once derived, however, the formulation is extremely straightforward and easy to implement.

| Method | Learning progress signal | Hyperparameters |
| --- | --- | --- |
| Uncertainty Weighting | Homoscedastic uncertainty | None |
| GradNorm | Training loss ratio | 1 exponential weighting factor |
| Dynamic Task Prioritization | KPI | 1 focal loss scaling factor |

#### Key ideas

- Uncertainties (for details refer to uncertainty in Bayesian DL):
  - Epistemic uncertainty: model uncertainty.
  - Aleatoric uncertainty: data uncertainty.
    - Data-dependent (heteroscedastic) uncertainty.
    - Task-dependent (homoscedastic) uncertainty: does not depend on the input data. It stays constant across all data but varies between tasks.
- Modify each task loss by an uncertainty factor $\sigma$: $$L \rightarrow \frac{1}{\sigma^2}L + \log\sigma$$ This formulation generalizes easily to almost any loss function. $\sigma$ is a task-specific parameter that is learned and dynamically updated throughout training (see the sketch after this list).
- Instance segmentation is done in a way very similar to CenterNet: each pixel regresses a vector pointing to the center of its instance.
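To make the loss weighting concrete, here is a minimal PyTorch sketch of the formulation above. The module name, two-task setup, and placeholder losses are illustrative assumptions, not the authors' code:

```python
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    """Combine per-task losses weighted by learned homoscedastic uncertainty.

    Implements L_total = sum_i (1 / sigma_i^2) * L_i + log(sigma_i),
    parameterized via s_i = log(sigma_i^2) for numerical stability.
    """

    def __init__(self, num_tasks: int):
        super().__init__()
        # One learnable log-variance per task; init 0 means sigma_i = 1.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):
        total = 0.0
        for loss, log_var in zip(losses, self.log_vars):
            precision = torch.exp(-log_var)  # 1 / sigma^2
            # log(sigma) = 0.5 * log(sigma^2)
            total = total + precision * loss + 0.5 * log_var
        return total

# Example usage with two hypothetical task losses:
criterion = UncertaintyWeightedLoss(num_tasks=2)
seg_loss = torch.tensor(1.3)    # placeholder segmentation loss
depth_loss = torch.tensor(0.7)  # placeholder depth loss
total = criterion([seg_loss, depth_loss])
```

In training, `criterion.parameters()` would be passed to the optimizer alongside the model's parameters so the log-variances are updated by backprop together with the network weights.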

#### Technical details

- Regress $\log \sigma^2$ instead of $\sigma^2$ directly. This is more numerically stable, as the loss avoids division by zero, and the exponential mapping allows regressing an unbounded scalar value (see the sketch below).
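A small sketch of this parameterization (variable names are illustrative):

```python
import torch

s = torch.tensor(0.0, requires_grad=True)  # s = log(sigma^2), unbounded in R
precision = torch.exp(-s)  # 1 / sigma^2: always positive, no division by zero
log_sigma = 0.5 * s        # recovers the log(sigma) regularizer from s
# Regressing sigma^2 directly could instead go negative or cause 1 / sigma^2
# to blow up when sigma^2 approaches zero.
```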

#### Notes