Next week

  1. Chase RDS
  2. Costings (Munich)
  3. On the supercomputer
  4. Hyperparameter optimisation and evaluation of all these options
  5. Plot curves of sensitivity and specificity (see the threshold-curve sketch after this list)
  6. How to save and compare experiments. Make a plot to summarise experiments.
  7. Loading in histology. K
  8. Saliency as a way to evaluate the impact of features (see the saliency sketch after this list). K
  9. Architectures, dropout and losses - run for both types of curves. No subsampling of the training dataset. K
  10. Cross-validation as the default - building nested experiments > folds H XX
  11. Evaluation scripts to summarise results K
  12. Save out logs H
  13. Kanban board S
  14. Title plots. K
  15. Summarising scripts (see the fold-aggregation sketch after this list):
      - Ditch the binary curve. See optimisation curves across folds - do this first. Aggregate performance stats across folds. Consider re-evaluating at a shared optimal threshold. K
      - Raincloud plots - each evaluation statistic across all folds. Define which experiments you want to compare: initially whole experiments, then sub-experiments. K
      - Per-subject plots across experiments (coloured lines), and a matrix of overlap between experiments.
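
For item 5, a minimal sketch of plotting sensitivity and specificity against the decision threshold. It assumes binary labels `y_true` and predicted probabilities `y_score` (placeholder names) are already loaded:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve

def plot_sens_spec(y_true, y_score, ax=None):
    """Plot sensitivity (TPR) and specificity (TNR) against the decision threshold."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    # drop the first point: roc_curve prepends a sentinel threshold above the max score
    thresholds, tpr, fpr = thresholds[1:], tpr[1:], fpr[1:]
    ax = ax or plt.gca()
    ax.plot(thresholds, tpr, label="sensitivity")
    ax.plot(thresholds, 1.0 - fpr, label="specificity")
    ax.set_xlabel("decision threshold")
    ax.set_ylabel("rate")
    ax.legend()
    return ax
```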
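
For item 8, a minimal gradient-saliency sketch in PyTorch, assuming a trained `model` and an input batch `x` (both placeholders); libraries such as Captum offer more principled attribution methods if this turns out to be too noisy:

```python
import torch

def gradient_saliency(model, x, target_index=0):
    """Absolute gradient of one output w.r.t. the input, as a crude importance map."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    out = model(x)  # assumed: per-example scores/logits
    out.reshape(out.shape[0], -1)[:, target_index].sum().backward()
    return x.grad.abs()
```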
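
For item 15, a sketch of aggregating per-fold statistics and plotting their spread across folds and experiments. The CSV name and column layout are assumptions about what the evaluation scripts will write out; a dedicated raincloud library (e.g. ptitprince) could replace the violin + strip approximation used here:

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# assumed layout: one row per (experiment, fold), one column per evaluation metric
df = pd.read_csv("fold_metrics.csv")  # placeholder filename
long = df.melt(id_vars=["experiment", "fold"], var_name="metric", value_name="value")

# aggregate performance stats across folds, per experiment
print(long.groupby(["experiment", "metric"])["value"].agg(["mean", "std", "median"]))

# spread of each metric across folds, split by experiment
fig, ax = plt.subplots(figsize=(9, 4))
sns.violinplot(data=long, x="metric", y="value", hue="experiment", inner=None, cut=0, ax=ax)
sns.stripplot(data=long, x="metric", y="value", hue="experiment", dodge=True,
              size=3, color="k", ax=ax)
ax.set_title("Evaluation statistics across folds")
plt.show()
```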

Further ideas:

  1. Monte Carlo dropout for uncertainty estimation (see the MC dropout sketch after this list)
  2. Ensemble models, e.g. with 3 loss functions or different architectures. Needs an experiment to check whether the false positives change (see the ensemble sketch after this list).
  3. Prior with lesion map
  4. Saliency as a way to evaluate the impact of features.
  5. Applying ComBat to a new site to align the data (see the alignment sketch after this list)
  6. How we manage new patients from new sites. Versioning of the dataset and any trained classifier.
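
For idea 1, a minimal Monte Carlo dropout sketch in PyTorch, assuming a binary model with dropout layers whose raw output is a logit (all names are placeholders); the std across passes gives a rough uncertainty estimate:

```python
import torch

def mc_dropout_predict(model, x, n_samples=20):
    """Mean and std over n_samples stochastic forward passes with dropout left on."""
    model.eval()
    for m in model.modules():  # re-enable only the dropout layers
        if m.__class__.__name__.startswith("Dropout"):
            m.train()
    with torch.no_grad():
        preds = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```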
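
For idea 2, a sketch of averaging an ensemble (e.g. three models trained with different losses or architectures) and extracting false positives so their overlap can be compared against a single model; `models`, `x` and `y_true` are placeholders and the models are assumed to output logits:

```python
import torch

def ensemble_predict(models, x):
    """Mean probability over an ensemble of binary models."""
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(x)) for m in models])
    return probs.mean(dim=0)

def false_positives(prob, y_true, threshold=0.5):
    """Boolean mask of false positives at a given threshold, for comparing runs."""
    return (prob >= threshold) & (y_true == 0)
```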
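
For idea 5, a deliberately simplified sketch of the location/scale part of site alignment only. Full ComBat additionally preserves covariates and pools site effects with empirical Bayes (implemented in libraries such as neuroCombat / neuroHarmonize), so treat this as a placeholder for the idea rather than the method itself:

```python
import numpy as np

def site_stats(features):
    """Per-feature mean and std for a (n_subjects, n_features) array from one site."""
    return features.mean(axis=0), features.std(axis=0) + 1e-8

def align_new_site(new_features, new_site_stats, reference_stats):
    """Standardise the new site's features, then rescale to the reference distribution."""
    new_mean, new_std = new_site_stats
    ref_mean, ref_std = reference_stats
    return (new_features - new_mean) / new_std * ref_std + ref_mean
```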