
Inconsistent inference results from clearml-serving #35

Open
Muscle-Oliver opened this issue Sep 23, 2022 · 1 comment

Comments

@Muscle-Oliver

Hello,
I deployed a model using clearml-serving, but it generates inconsistent results across identical HTTP requests.

To recreate:

  1. I deployed a self-hosted clearml server in my local kubernetes (from docker image allegroai/clearml:1.4.0).
  2. Reused the pytorch MNIST example from https://github.com/allegroai/clearml-serving/tree/main/examples/pytorch.
  3. Went through the model training process.
  4. Installed clearml-serving with helm (helm repo: NAME allegroai/clearml-serving, CHART VERSION 0.4.1, APP VERSION 0.9.0).
  5. Deployed the MNIST model to a serving endpoint.
  6. Tested the endpoint "http://ip:port/serve/test_model_pytorch" using Postman.

Everything went well, following the readme.md from https://github.com/allegroai/clearml-serving/tree/main/examples/pytorch.
But mysteriously, the HTTP responses are not consistent: the MNIST model occasionally returns a different "digit" for the same input image.
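
For reference, a quick way to quantify the inconsistency is to replay the exact same request and count the distinct responses. The sketch below is only illustrative; the payload is a placeholder and should be replaced with the actual body I send from Postman.

```python
# Minimal sketch: send the exact same request N times and count distinct answers.
# The PAYLOAD below is a placeholder -- replace it with the body used in Postman.
import collections
import requests

ENDPOINT = "http://ip:port/serve/test_model_pytorch"  # same URL as in step 6
PAYLOAD = {"url": "https://example.com/mnist_digit.jpg"}  # hypothetical payload

counts = collections.Counter()
for _ in range(50):
    resp = requests.post(ENDPOINT, json=PAYLOAD, timeout=10)
    resp.raise_for_status()
    counts[resp.text] += 1

# A deterministic model should produce exactly one distinct response here.
print(counts)
```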

I'm quite confused here and have no idea whether some random process is happening during model inference.
Thanks for any help!

@thepycoder
Contributor

Hi @Muscle-Oliver , sorry for the late reply.

Do you have an update on this, or is it still happening?
Could you try running the model in plain Python, outside of ClearML Serving, and see if the same thing happens there?
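
Something along these lines would do (a rough sketch; the `Net` class and checkpoint filename are placeholders for whatever the example actually saves):

```python
# Rough sketch: load the trained model directly in PyTorch and run the same
# input twice. "Net" and "mnist_cnn.pt" are placeholders for the example's
# actual model class and checkpoint file.
import torch
from train_pytorch_mnist import Net  # hypothetical import of the example's model class

model = Net()
model.load_state_dict(torch.load("mnist_cnn.pt", map_location="cpu"))
model.eval()  # disables dropout / batch-norm updates, a common source of randomness

x = torch.rand(1, 1, 28, 28)  # or a real preprocessed MNIST image
with torch.no_grad():
    out1 = model(x)
    out2 = model(x)

# Identical inputs through an eval-mode model should give identical outputs.
print(torch.equal(out1, out2), out1.argmax(dim=1), out2.argmax(dim=1))
```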

ClearML Serving uses Triton as its backend, so the issue could be with Triton. Can you try the newest version of ClearML Serving (1.2 was just released) and see if the issue persists?
