[FEATURE] Async Inference #88

Open
lkevinzc opened this issue Nov 27, 2021 · 1 comment
Labels
enhancement New feature or request

Comments

@lkevinzc
Member

This can offer the client side the flexibility to do independent computation while waiting for the model inference result.

Maybe two APIs:

  1. /inference_async/put, which submits a request and immediately returns an rid;
  2. /inference_async/get, which retrieves the result for that rid.

The Python code should not be affected; only minor modifications are needed on the Rust side.
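The put/get flow above can be sketched in-process like this. This is a minimal illustration only, assuming a thread-pool backend: the `AsyncInference` class, the rid format, and the toy `_infer` function are all hypothetical stand-ins, not the proposed Rust implementation.

```python
import uuid
from concurrent.futures import ThreadPoolExecutor


class AsyncInference:
    """In-process sketch of the proposed put/get endpoints."""

    def __init__(self):
        self._pool = ThreadPoolExecutor(max_workers=2)
        self._jobs = {}

    def put(self, payload):
        """Submit a request; return a request id (rid) immediately."""
        rid = uuid.uuid4().hex
        self._jobs[rid] = self._pool.submit(self._infer, payload)
        return rid

    def get(self, rid):
        """Fetch the result for rid, blocking until it is ready."""
        return self._jobs.pop(rid).result()

    @staticmethod
    def _infer(payload):
        # Stand-in for the actual model inference.
        return {"input": payload, "output": payload.upper()}


svc = AsyncInference()
rid = svc.put("hello")
# ... the client can do independent computation here ...
print(svc.get(rid)["output"])  # -> HELLO
```

The key property is that `put` returns before inference finishes, so the client is free to do other work and only blocks (in `get`) when it actually needs the result.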

@lkevinzc lkevinzc added the enhancement New feature or request label Nov 27, 2021
@lfxx

lfxx commented Jun 21, 2023

Has this feature been implemented yet? I couldn't find it in the examples. It would be very useful when the input is a sequence, e.g. for object tracking.
