This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

Do I have to deploy a model before doing inference? #670

Answered by ant0nsc
furtheraway asked this question in Q&A
There is no need to deploy the model manually. If training has run successfully in AzureML, the job overview page ("Details") will have an entry for "Registered model" - click on that link.
About the two environment variables:

  • CUSTOMCONNSTR_AZUREML_SERVICE_PRINCIPAL_SECRET is the password for an Azure Service Principal (think of that as a machine account) that can talk to your AzureML workspace. You need to create this service principal first and give it access to your AzureML workspace.
  • CUSTOMCONNSTR_API_AUTH_SECRET is a random string of your choosing (a GUID, for example). This is the shared secret that you put into the API call to authenticate yourself to the inferen…
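As a rough illustration of the second variable, here is a minimal sketch of reading the secret from the environment and attaching it to a request. The header name `API_AUTH_SECRET` and the helper are assumptions for illustration, not taken from the inference service's actual API:

```python
import os
import uuid


def build_auth_headers(secret: str) -> dict:
    """Build HTTP headers carrying the shared API secret.

    The header name "API_AUTH_SECRET" is a hypothetical placeholder;
    check the inference service's documentation for the real one.
    """
    return {"API_AUTH_SECRET": secret}


# Read the secret the service was configured with; fall back to a
# freshly generated GUID here purely so the sketch runs standalone.
api_auth_secret = os.environ.get("CUSTOMCONNSTR_API_AUTH_SECRET", str(uuid.uuid4()))

headers = build_auth_headers(api_auth_secret)
# These headers would then be passed along with the HTTP call to the
# inference endpoint, e.g. requests.post(url, headers=headers, ...).
```

The same string must be set as `CUSTOMCONNSTR_API_AUTH_SECRET` on the service side; the call is rejected if the values do not match.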

Answer selected by furtheraway