The time spent on the inference request process far exceeds the model inference time. How can I determine where this additional time is being consumed?
#7152 · Open · wfd2022 opened this issue on Apr 24, 2024 · 0 comments
When I use curl localhost:10502/metrics to check the model's runtime statistics, I find that the total request latency is much longer than the inference time, as indicated by the red arrow in the image below. What is the difference between nv_inference_request_duration_us and nv_inference_compute_infer_duration_us, and which one represents the time spent in the client's call to the infer function? Moreover, the metrics do not show where most of the non-inference time is going. Is there any way to determine this?
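As a rough way to break the gap down, one can scrape the Prometheus output and divide each cumulative duration counter by the number of successful requests to get an average per-request time per stage. The sketch below is not an official Triton tool, just a small parser I would use for this; it assumes the standard per-model counters (nv_inference_request_duration_us, nv_inference_queue_duration_us, nv_inference_compute_input_duration_us, nv_inference_compute_infer_duration_us, nv_inference_compute_output_duration_us, nv_inference_request_success) and reuses the port 10502 from this issue.

```python
# Minimal sketch: scrape the Triton metrics endpoint and report the average
# per-request time of each stage, grouped by the model/version labels.
import re
import urllib.request

METRICS_URL = "http://localhost:10502/metrics"  # port taken from this issue

# Cumulative microsecond counters exposed by Triton per model/version.
DURATION_METRICS = [
    "nv_inference_request_duration_us",         # total time spent inside the server
    "nv_inference_queue_duration_us",           # time waiting in the scheduler queue
    "nv_inference_compute_input_duration_us",   # input tensor preparation
    "nv_inference_compute_infer_duration_us",   # actual model execution
    "nv_inference_compute_output_duration_us",  # output tensor handling
]

def scrape(url=METRICS_URL):
    """Return {metric_name: {label_string: value}} for the metrics of interest."""
    text = urllib.request.urlopen(url).read().decode()
    values = {}
    for line in text.splitlines():
        if line.startswith("#"):
            continue
        m = re.match(r'^(\w+)\{(.*)\}\s+([0-9.eE+-]+)$', line)
        if not m:
            continue
        name, labels, value = m.groups()
        if name in DURATION_METRICS or name == "nv_inference_request_success":
            values.setdefault(name, {})[labels] = float(value)
    return values

def report(values):
    """Print average microseconds per request for each stage, per model/version."""
    for labels, count in values.get("nv_inference_request_success", {}).items():
        if count == 0:
            continue
        print(labels)
        for name in DURATION_METRICS:
            total_us = values.get(name, {}).get(labels, 0.0)
            print(f"  {name:45s} {total_us / count:10.1f} us/request")

if __name__ == "__main__":
    report(scrape())
```

With this breakdown, whatever part of nv_inference_request_duration_us is not accounted for by queue + compute_input + compute_infer + compute_output is time the server spends receiving and sending the request payloads.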
By the way, when I use shared memory for inference, the time spent on the inference request is close to the model inference time, so I suspect the gap comes from network latency. However, I can't find any API or documentation that reports network latency.
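Since the server-side metrics do not include time spent on the wire or in HTTP (de)serialization, one way I can think of to estimate it is to time the client call and subtract the increase in the server-reported request duration. The sketch below uses the tritonclient HTTP client and the per-model statistics endpoint; the inference URL, model name, input name, and shape are placeholders, not values from this issue.

```python
# Rough sketch: compare client-measured wall-clock latency with the growth of
# the server's cumulative successful-request duration; the difference is
# approximately network transfer + request/response (de)serialization.
# Assumptions (placeholders): HTTP endpoint localhost:8000, model "my_model",
# single FP32 input "INPUT0" of shape [1, 3, 224, 224].
import time
import numpy as np
import tritonclient.http as httpclient

INFER_URL = "localhost:8000"   # placeholder inference endpoint
MODEL = "my_model"             # placeholder model name

client = httpclient.InferenceServerClient(url=INFER_URL)

def server_request_us():
    # Cumulative duration (ns) of successful requests for MODEL, converted to us.
    stats = client.get_inference_statistics(model_name=MODEL)
    return sum(int(s["inference_stats"]["success"]["ns"])
               for s in stats["model_stats"]) / 1000.0

data = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("INPUT0", list(data.shape), "FP32")
inp.set_data_from_numpy(data)

before = server_request_us()
t0 = time.perf_counter()
client.infer(MODEL, inputs=[inp])
client_us = (time.perf_counter() - t0) * 1e6
server_us = server_request_us() - before

print(f"client-side latency              : {client_us:10.1f} us")
print(f"server-side request duration     : {server_us:10.1f} us")
print(f"network + serialization (approx.): {client_us - server_us:10.1f} us")
```

If the shared-memory path closes most of this gap, that would point to payload transfer and serialization rather than the network round trip itself.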