• RTX 3060 • Docker nvcr.io/nvidia/tritonserver:24.04-py3-sdk • Cannot run model-analyzer on a model
I am currently profiling several models with model-analyzer. One of my models fails, and I'd like more information about the error encountered.
Here is the error message:
[Model Analyzer] Initializing GPUDevice handles
[Model Analyzer] Using GPU 0 NVIDIA GeForce RTX 3060 Laptop GPU with UUID GPU-87703e76-5ffe-5cde-d056-3c70fa64251a
[Model Analyzer] Starting a Triton Server using docker
[Model Analyzer] Loaded checkpoint from file /workspace/checkpoints/2.ckpt
[Model Analyzer] GPU devices match checkpoint - skipping server metric acquisition
[Model Analyzer] Starting a Triton Server using docker
[Model Analyzer]
[Model Analyzer] Starting automatic brute search
[Model Analyzer]
[Model Analyzer] Creating model config: age_config_default
[Model Analyzer]
[Model Analyzer] Profiling age_config_default: client batch size=1, concurrency=1
[Model Analyzer] Running perf_analyzer failed with exit status 99:
error: Failed to init manager inputs: input input contains dynamic shape, provide shapes to send along with the request
[Model Analyzer] Saved checkpoint to /workspace/checkpoints/3.ckpt
[Model Analyzer] Creating model config: age_config_0
[Model Analyzer] Setting instance_group to [{'count': 1, 'kind': 'KIND_GPU'}]
[Model Analyzer]
[Model Analyzer] Profiling age_config_0: client batch size=1, concurrency=1
[Model Analyzer] Running perf_analyzer failed with exit status 99:
error: Failed to init manager inputs: input input contains dynamic shape, provide shapes to send along with the request
[Model Analyzer] No changes made to analyzer data, no checkpoint saved.
Traceback (most recent call last):
File "/usr/local/bin/model-analyzer", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/model_analyzer/entrypoint.py", line 278, in main
analyzer.profile(
File "/usr/local/lib/python3.10/dist-packages/model_analyzer/analyzer.py", line 124, in profile
self._profile_models()
File "/usr/local/lib/python3.10/dist-packages/model_analyzer/analyzer.py", line 242, in _profile_models
self._model_manager.run_models(models=[model])
File "/usr/local/lib/python3.10/dist-packages/model_analyzer/model_manager.py", line 145, in run_models
self._stop_ma_if_no_valid_measurement_threshold_reached()
File "/usr/local/lib/python3.10/dist-packages/model_analyzer/model_manager.py", line 239, in _stop_ma_if_no_valid_measurement_threshold_reached
raise TritonModelAnalyzerException(
model_analyzer.model_analyzer_exceptions.TritonModelAnalyzerException: The first 2 attempts to acquire measurements have failed. Please examine the Tritonserver/PA error logs to determine what has gone wrong.
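The root cause appears to be the perf_analyzer error repeated above: the model has an input tensor (named "input" in the log) with a dynamic shape, so perf_analyzer cannot generate requests without being told concrete dimensions. If I read the Model Analyzer docs right, a shape can be forwarded to perf_analyzer via perf_analyzer_flags in a YAML config file; a sketch of what I would expect to work (the dimensions 3,224,224 are only a placeholder, not the model's real input shape):

```yaml
# config.yaml -- hypothetical sketch; replace 3,224,224 with the model's
# actual input dimensions, and adjust the repository path
model_repository: /YOUR_PATH/examples/quick-start/
profile_models:
  age:
    perf_analyzer_flags:
      # forwarded to perf_analyzer as --shape input:3,224,224
      shape: input:3,224,224
```

which would then be picked up with model-analyzer profile -f config.yaml. I haven't confirmed this is the intended fix, which is part of why I'm asking.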
Here are the steps to reproduce:
1. Clone the repo and go to that directory: https://github.com/triton-inference-server/model_analyzer
2. Start the Triton container: docker run -it --gpus all -v /var/run/docker.sock:/var/run/docker.sock -v $(pwd)/examples/quick-start:$(pwd)/examples/quick-start --net=host nvcr.io/nvidia/tritonserver:24.04-py3-sdk
3. Add this folder to the model repository: age.zip|attachment (21.3 MB)
4. Run model analysis: model-analyzer profile --model-repository /YOUR_PATH/examples/quick-start/ --profile-models age --triton-launch-mode=docker --output-model-repository-path /opt/output_dir --export-path profile_results
Thanks