This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

How to run inference locally with InnerEye-DeepLearning models? #819

Answered by peterhessey
Med-Rokaimi asked this question in Q&A

Hi! Apologies for the delayed response on this.

The submit_for_inference.py script is not for running inference locally; it only submits inference jobs to AzureML, hence the prompts for the subscription_id and resource_group settings.

To run inference locally, see the "Testing an existing model" docs page, which shows how to use the --no-train and --local_weights_path flags to run your model in testing mode.
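As a rough sketch, such a local test run looks something like the following (the model name and checkpoint path are placeholders, and InnerEye/ML/runner.py is assumed as the entry point; adjust both for your own setup):

```shell
# Sketch of a local test-only run: skip training and load an existing
# checkpoint. "MyModel" and the checkpoint path below are placeholders.
python InnerEye/ML/runner.py \
    --model=MyModel \
    --no-train \
    --local_weights_path=/path/to/checkpoint.ckpt
```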

Note that this will run inference on all images in your test set. You can further control which (and how many) images are passed through the model with the --restrict_subjects flag, as described on the debugging and monitoring docs page.
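For instance, appending the flag to the same invocation might look like this (the value shown is illustrative; check the debugging and monitoring page for the exact syntax the flag accepts):

```shell
# Sketch: limit how many subjects are passed through the model during
# the test run. The value is illustrative; see the debugging and
# monitoring docs for the flag's exact semantics.
python InnerEye/ML/runner.py \
    --model=MyModel \
    --no-train \
    --local_weights_path=/path/to/checkpoint.ckpt \
    --restrict_subjects=1
```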
