Description
There is abnormal system memory usage when GPU metrics are enabled.
With GPU metrics enabled:
command: tritonserver --model-repository=/models
After a long wait, Triton Server starts successfully and uses about 52 GB of system memory!
With GPU metrics disabled:
command: tritonserver --model-repository=/models --allow-gpu-metrics=false
Triton Server starts immediately and uses only a small amount of system memory.
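To quantify the difference between the two runs, resident memory can be read from /proc on Linux. A minimal sketch (assumption: a Linux host; the tritonserver PID can be found with pgrep -f tritonserver inside the container — here it is demonstrated on the current process):

```python
# Minimal sketch: read a process's resident set size (VmRSS) on Linux.
# Demonstrated on the current process; substitute the tritonserver PID.
import os

def rss_kb(pid: int) -> int:
    """Return the resident set size of `pid` in kilobytes."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # /proc reports the value in kB
    raise RuntimeError(f"no VmRSS entry for pid {pid}")

if __name__ == "__main__":
    print(f"current process RSS: {rss_kb(os.getpid())} kB")
```

Comparing rss_kb for the tritonserver process with and without --allow-gpu-metrics=false makes the ~52 GB difference easy to capture in a report.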
I think this problem may be related to the GPU driver or CUDA version rather than the Triton version. There seems to be a compatibility problem between Triton and the latest GPU driver and CUDA versions.
Triton Information
Triton version: installed from the Docker image nvcr.io/nvidia/tritonserver:24.03-py3 (24.02 seems to have the same problem; other versions not tested).
My GPU: NVIDIA GeForce RTX 4060 Ti
Driver Version: 550.54.15
CUDA Version: 12.4
To Reproduce
Pull the image: docker pull nvcr.io/nvidia/tritonserver:24.03-py3
Start a container: docker run --gpus all -it --shm-size=256m -p8000:8000 -p8001:8001 -p8002:8002 -v /your/dir/:/models nvcr.io/nvidia/tritonserver:24.03-py3
This problem appears to be unrelated to the type of model used; it occurs with at least the onnxruntime and tensorrt backends.
Inside the container, run tritonserver --model-repository=/models and monitor system memory usage.
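Once the server is up, one way to confirm whether GPU metrics are actually being collected is to query the Prometheus metrics endpoint on port 8002; Triton's GPU metric names carry an nv_gpu_ prefix (e.g. nv_gpu_utilization). A sketch, assuming the port mapping above and the default metrics port:

```python
# Sketch: check whether Triton's Prometheus metrics output includes GPU metrics.
# With the port mapping above, metrics are served at http://localhost:8002/metrics
# (assumption: the default metrics port is used).
import urllib.request

def has_gpu_metrics(metrics_text: str) -> bool:
    """True if any non-comment line reports an nv_gpu_* metric."""
    return any(
        line.startswith("nv_gpu_")
        for line in metrics_text.splitlines()
        if line and not line.startswith("#")
    )

def fetch_metrics(url: str = "http://localhost:8002/metrics") -> str:
    """Fetch the raw Prometheus text from a running Triton server."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode()
```

With --allow-gpu-metrics=false, has_gpu_metrics(fetch_metrics()) would be expected to return False, which ties the memory difference directly to GPU metric collection.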
To add: this problem did not occur with an RTX 3090 on driver version 535.x (possibly a different version; the last test with the RTX 3090 was a long time ago).
Also, if you execute nvidia-smi inside the container, it takes a long time to read the hardware information, or even hangs, instead of returning GPU information immediately.
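The nvidia-smi delay can be put into numbers with a simple wall-clock timer; the helper below times an arbitrary command (the specific command to pass, ["nvidia-smi"], comes from the observation above — everything else is a generic sketch):

```python
# Sketch: measure the wall-clock time of a command, to quantify how long
# `nvidia-smi` takes inside the container versus on the host.
import subprocess
import time

def time_command(cmd: list) -> float:
    """Run `cmd`, discard its output, and return elapsed seconds."""
    start = time.monotonic()
    subprocess.run(cmd, capture_output=True, check=False)
    return time.monotonic() - start

# Example inside the container: time_command(["nvidia-smi"])
```

Comparing the elapsed time inside the container against the host would help show whether the stall is specific to the containerized driver stack.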