
The time spent on the inference request process far exceeds the model inference time. How can I determine where this additional time is being consumed? #7152

Open
wfd2022 opened this issue Apr 24, 2024 · 0 comments

Comments


wfd2022 commented Apr 24, 2024

When I use curl localhost:10502/metrics to check a model's runtime statistics, I find that the total request latency is much longer than the model inference time, as indicated by the red arrow in the screenshot below. What is the difference between nv_inference_request_duration_us and nv_inference_compute_infer_duration_us, and which one represents the time the client spends calling the infer function? Moreover, the reported metrics do not show where the time outside of inference is being spent. Is there any way to break this down?

[screenshot of the /metrics output]
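For reference, here is a minimal sketch of how I am reading the /metrics output, assuming the standard per-model Triton counters (nv_inference_count, nv_inference_queue_duration_us, nv_inference_compute_input/infer/output_duration_us) are exposed; it divides each cumulative duration by the inference count so the per-request time outside of model execution becomes visible:

```python
# Minimal sketch: pull the cumulative Triton metrics and turn them into
# per-request averages so the non-inference overhead becomes visible.
# The metrics port (10502) matches my setup; adjust as needed.
import re
import urllib.request
from collections import defaultdict

METRICS_URL = "http://localhost:10502/metrics"

# Cumulative per-model microsecond counters reported by Triton.
DURATION_METRICS = [
    "nv_inference_queue_duration_us",           # waiting in the scheduler queue
    "nv_inference_compute_input_duration_us",   # input tensor handling
    "nv_inference_compute_infer_duration_us",   # actual model execution
    "nv_inference_compute_output_duration_us",  # output tensor handling
]

def scrape_metrics():
    text = urllib.request.urlopen(METRICS_URL).read().decode()
    per_model = defaultdict(lambda: defaultdict(float))
    # Lines look like: nv_inference_count{model="my_model",version="1"} 42
    for name, labels, value in re.findall(
        r'^(nv_inference_\w+)\{([^}]*)\}\s+([0-9.eE+]+)', text, re.M
    ):
        model = re.search(r'model="([^"]+)"', labels)
        if model:
            per_model[model.group(1)][name] += float(value)
    return per_model

for model, m in scrape_metrics().items():
    count = m["nv_inference_count"]
    if not count:
        continue
    print(f"model={model} ({int(count)} inferences)")
    total = m["nv_inference_request_duration_us"] / count
    accounted = 0.0
    for name in DURATION_METRICS:
        avg = m[name] / count
        accounted += avg
        print(f"  {name:<42} {avg:12.1f} us/request")
    print(f"  {'nv_inference_request_duration_us':<42} {total:12.1f} us/request")
    print(f"  {'other server-side overhead':<42} {total - accounted:12.1f} us/request")
```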

By the way, when I use shared memory for inference, the time spent on the inference request process is close to the model inference time, so I suspect the extra time comes from network latency; however, I can't find any API or documentation that reports network latency.
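In case it helps pinpoint the network part, this is a rough sketch of the client-side timing I am comparing against the server-reported numbers; the HTTP port, model name, and input name/shape/dtype below are placeholders for my actual deployment:

```python
# Rough client-side timing sketch. The HTTP port (10500), model name
# ("my_model"), and input name/shape/dtype are placeholders for my setup.
import time
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:10500")

data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

# Wall-clock time seen by the client: network transfer + HTTP handling +
# everything the server counts in nv_inference_request_duration_us.
start = time.perf_counter()
client.infer("my_model", inputs=[inp])
client_us = (time.perf_counter() - start) * 1e6
print(f"client-observed latency: {client_us:.0f} us")
```

The gap between this client-observed latency and the per-request nv_inference_request_duration_us reported by the server should roughly correspond to network transfer plus client-side (de)serialization.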
