This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

Inaccurate documentation for evaluating against pre-trained models #840

Open

peterhessey opened this issue Jan 23, 2023 · 1 comment

Assignees: peterhessey
Labels: documentation (Improvements or additions to documentation)

Comments
peterhessey (Contributor) commented Jan 23, 2023

Is there an existing issue for this?

  • I have searched the existing issues

Issue summary

Some of the documentation for evaluating against pre-trained models is missing or inaccurate.

What documentation should be provided?

  1. The evaluation code builds its config from the --model parameter, not from the config provided by model_id. This should be clarified.
  2. The dataset used for evaluation needs to contain at least 3 subjects, so that there is at least 1 subject each for training, validation and testing. This is quite cumbersome, because our users probably want to evaluate on all of their data. Documentation should be clearer on how to do this (inference service / other workarounds?). See the dataset sketch after this list.
  3. Clarify that the evaluation dataset needs to contain the same structures that the model was trained on, or the checks will fail (the sketch below also illustrates this).
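
For points 2 and 3, a minimal sketch of what the evaluation dataset.csv might look like, assuming the standard InnerEye subject/filePath/channel layout. The file paths and structure names (spinalcord, lung_l) here are hypothetical; the structure channels must match the structures the model was trained on:

subject,filePath,channel
1,subject1/ct.nii.gz,ct
1,subject1/spinalcord.nii.gz,spinalcord
1,subject1/lung_l.nii.gz,lung_l
2,subject2/ct.nii.gz,ct
2,subject2/spinalcord.nii.gz,spinalcord
2,subject2/lung_l.nii.gz,lung_l
3,subject3/ct.nii.gz,ct
3,subject3/spinalcord.nii.gz,spinalcord
3,subject3/lung_l.nii.gz,lung_l

Three subjects is the minimum needed for the 1/1/1 train/validation/test split that --restrict_subjects=1,1,1 produces in the command below.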

Example command that currently works:

python ./InnerEye/ML/runner.py --azure_dataset_id <dataset_id> --model <model_class_name> --model_id <azure_model_id>:<version> --azureml --train False --restrict_subjects=1,1,1 --check_exclusive=False
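
A hedged reading of the flags, based on the points above and common InnerEye runner usage (verify against the repo docs):

  • --azure_dataset_id <dataset_id>: the AzureML dataset to evaluate on.
  • --model <model_class_name>: the model config class; per point 1, this is the config the evaluation code actually uses.
  • --model_id <azure_model_id>:<version>: the registered AzureML model whose weights are evaluated.
  • --azureml: submit the run to AzureML rather than executing locally.
  • --train False: skip training and run evaluation only.
  • --restrict_subjects=1,1,1: restrict the dataset to 1 subject each for training, validation and testing, satisfying the 3-subject minimum from point 2.
  • --check_exclusive=False: disables a consistency check; its exact purpose is not covered in this issue.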

AB#8800

peterhessey added the documentation label and self-assigned this issue on Jan 23, 2023
peterhessey (Contributor, Author) commented Mar 22, 2023

Additionally, there is a lack of documentation on how to run inference locally, as per this discussion: #842
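
For anyone landing here before #842 is resolved, a guess at what a local inference invocation might look like. This assumes the runner falls back to local execution when --azureml is omitted, and that the --local_dataset and --local_weights_path options behave as their names suggest; this is exactly the gap the issue asks to have documented, so treat it as a sketch rather than a recipe:

python ./InnerEye/ML/runner.py --model <model_class_name> --train False --local_dataset <path_to_dataset_folder> --local_weights_path <path_to_checkpoint>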
