
Document default_pre_model_fn and default_model_warmup_fn #117

Open
l3ku opened this issue Oct 21, 2022 · 0 comments
l3ku commented Oct 21, 2022

What did you find confusing? Please describe.
I was trying to figure out how to load the model already when the inference endpoint starts, so that the first request wouldn't incur the delay of loading the model. I found support for this by reading the code: https://github.com/aws/sagemaker-inference-toolkit/blob/master/src/sagemaker_inference/transformer.py#L200. However, this functionality is not mentioned anywhere in the README of this repo.

Describe how documentation can be improved
Document default_pre_model_fn and default_model_warmup_fn in the README.
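For anyone else landing here before the README is updated, here is a minimal sketch of how I understand these hooks to work from reading transformer.py. The user-module hook names (pre_model_fn, model_warmup_fn) and their signatures are my reading of the code, not documented behavior, and the PyTorch model and input shape are purely illustrative:

```python
# inference.py -- hypothetical user module for a SageMaker endpoint.
# Hook names and signatures below are inferred from transformer.py
# (validate_and_initialize), not from any official documentation.
import os

import torch  # assuming a PyTorch model, for illustration only


def pre_model_fn(model_dir):
    """Appears to run once at startup, before model_fn is called."""
    # e.g. prepare auxiliary artifacts so model_fn finds everything it needs
    print(f"Preparing to load model from {model_dir}")


def model_fn(model_dir):
    """Standard toolkit hook: load and return the model."""
    model = torch.jit.load(os.path.join(model_dir, "model.pt"))
    model.eval()
    return model


def model_warmup_fn(model_dir, model):
    """Appears to run once after model_fn, so the first real request
    doesn't pay the model-loading/first-inference cost."""
    dummy = torch.zeros(1, 3, 224, 224)  # input shape is illustrative only
    with torch.no_grad():
        model(dummy)
```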

Thanks!
