
Releases: triton-inference-server/server

Release 2.35.0 corresponding to NGC container 23.06

30 Jun 01:39
46dbbe7

Important

The tritonserver2.35.0-jetpack5.1.2.tgz release asset has been replaced with tritonserver2.35.0-jetpack5.1.2-update-1.tgz, which includes the fix for CVE-2023-31036. See our security bulletin for more details.
This asset can be built from source using the r23.06-update-1-jp tag.

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

New Features and Improvements

  • Support for the KIND_MODEL instance type has been extended to the PyTorch backend.

  • The gRPC clients can now indicate whether they want to receive the flags associated with each response. This helps clients of decoupled models programmatically determine when all responses for a given request have been received.

  • Added beta support for using Redis as a cache for inference requests.

  • The statistics extension now includes the memory usage of the loaded models. This statistic is currently implemented only for the TensorRT and ONNX Runtime backends; a client-side query sketch follows this list.

  • Added support for batch inputs in ragged batching for the PyTorch backend.

  • Added serial sequences mode for Perf Analyzer.

  • Refer to the 23.06 column of the Frameworks Support Matrix for container image versions on which the 23.06 inference server container is based.
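
As a rough illustration of the statistics extension change above, per-model memory usage can be queried with the Python gRPC client. This is a minimal sketch, not taken from the release notes: the server address and model name are placeholders, and the exact field layout should be checked against the statistics extension documentation.

import tritonclient.grpc as grpcclient

# Connect to a running Triton server (placeholder address).
client = grpcclient.InferenceServerClient(url="localhost:8001")

# Request statistics for a single model; an empty model_name returns all models.
stats = client.get_inference_statistics(model_name="densenet_onnx")

for model_stats in stats.model_stats:
    print(model_stats.name, model_stats.version)
    # memory_usage is the newly reported field; in this release it is only
    # populated for TensorRT and ONNX Runtime models.
    print(model_stats.memory_usage)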

Known Issues

  • The FasterTransformer backend build only works with Triton 23.04 and older releases.

  • The TensorFlow backend no longer supports TensorFlow version 1.

  • OpenVINO 2022.1 is used in the OpenVINO backend and the OpenVINO execution provider for the ONNX Runtime backend. OpenVINO 2022.1 is not officially supported on Ubuntu 22.04 and should be treated as beta.

  • Some systems which implement malloc() may not release memory back to the operating system right away causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc and jemalloc are installed
    in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and
    manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 22.04 builds of the client libraries and examples are included in this release in the attached v2.35.0_ubuntu2204.clients.tar.gz file. The SDK is also available as an Ubuntu 22.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.35.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.35.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNXRuntime backend. The ONNXRuntime version is 1.15.0. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 12.1.1

  • cuDNN 8.9.2.26

  • TensorRT 8.6.1.6

Jetson Jetpack Support

A release of Triton for JetPack is provided in the attached tar file: tritonserver2.35.0-jetpack5.1.2.tgz.

  • This release supports TensorFlow 2.12.0, TensorRT 8.5.2.2, ONNX Runtime 1.15.0, PyTorch 2.1.0a0+41361538, and Python 3.8, as well as ensembles.
  • The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.35.0-py3-none-manylinux2014_aarch64.whl[all]
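
After installing the wheel, a quick connectivity check with the HTTP client might look like the following sketch; the address is a placeholder and a Triton server is assumed to already be running on the device.

import tritonclient.http as httpclient

# Placeholder address for a locally running Triton server.
client = httpclient.InferenceServerClient(url="localhost:8000")
print("server live: ", client.is_server_live())
print("server ready:", client.is_server_ready())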

Release 2.34.0 corresponding to NGC container 23.05

31 May 00:18
cd37327

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New in 2.34.0

  • Python backend supports Custom Metrics allowing users to define and report counters and gauges, similar to the C API (a minimal model.py sketch follows this list).

  • The Python Triton client defines the Triton Client Plugin API, allowing users to register custom plugins that add or modify request headers. This feature is in beta and is subject to change in future releases.

  • Improved performance of model instance creation/removal. When the model instance group is the only model configuration change, Triton will update the model with the number of instances needed rather than reloading the model. This feature is limited to non-sequence models. Read more about this feature here, in bullet point four.

  • Added a new command line option --metrics-address=<address> allowing the metrics server to bind to a different address than the default 0.0.0.0.

  • Reduced the default number of model load threads from 2*(number of CPU cores) to 4. This prevents Triton from hitting resource limits on systems with large CPU core counts. Use the --model-load-thread-count command line option to change this default.

  • Added support for the DLPack Python specification in the Python backend.

  • Refer to the 23.05 column of the Frameworks Support Matrix for container image versions on which the 23.05 inference server container is based.
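
A minimal sketch of the Python backend custom metrics feature mentioned above is shown below. The metric family name and labels are illustrative placeholders, and the snippet assumes it lives in a Python backend model.py where triton_python_backend_utils is available.

import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Create a counter metric family once per model instance.
        self.request_family = pb_utils.MetricFamily(
            name="example_requests_processed_total",  # placeholder name
            description="Number of requests processed by this model",
            kind=pb_utils.MetricFamily.COUNTER,
        )
        self.request_counter = self.request_family.Metric(
            labels={"model": args["model_name"]}
        )

    def execute(self, requests):
        responses = []
        for request in requests:
            # Report one processed request; the value is exposed on the /metrics endpoint.
            self.request_counter.increment(1)
            # A real model would compute output tensors here.
            responses.append(pb_utils.InferenceResponse(output_tensors=[]))
        return responses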

Known Issues

  • The TensorFlow backend no longer supports TensorFlow version 1.

  • OpenVINO 2022.1 is used in the OpenVINO backend and the OpenVINO execution provider for the ONNX Runtime backend. OpenVINO 2022.1 is not officially supported on Ubuntu 22.04 and should be treated as beta.

  • Some systems which implement malloc() may not release memory back to the operating system right away causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc and jemalloc are
    installed in the Triton container and can be used by specifying the library in LD_PRELOAD. We recommend experimenting with both tcmalloc and jemalloc to determine which one works better for your use case.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273.

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.

    The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.

    Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.34.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.34.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.34.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.15.0. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 12.1.1

  • cuDNN 8.9.1.23

  • TensorRT 8.6.1.6

Jetson Jetpack Support

A release of Triton for JetPack is provided in the attached tar file: tritonserver2.34.0-jetpack5.1.tgz.

  • This release supports TensorFlow 2.12.0, TensorRT 8.5.2.2, ONNX Runtime 1.15.0, PyTorch 2.0.0a0+8aa34602, and Python 3.8, as well as ensembles.
  • The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.34.0-py3-none-manylinux2014_aarch64.whl[all]

Release 2.33.0 corresponding to NGC container 23.04

26 Apr 01:02
f4c87a8

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New in 2.33.0

Known Issues

  • The TensorFlow backend no longer supports TensorFlow version 1.

  • Triton Inferentia guide is out of date. Some users have reported issues with running Triton on AWS Inferentia instances.

  • Some systems which implement malloc() may not release memory back to the operating system right away causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273.

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.

    The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.

    Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.33.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.33.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.33.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.14.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 11.8.0

  • cuDNN 8.8.1.3

  • TensorRT 8.5.3.1

Jetson Jetpack Support

Note
In order to build Jetson target from source code please refer to the "r23.04-jetson" branch for "python_backend".

A release of Triton for JetPack is provided in the attached tar file: tritonserver2.33.0-jetpack5.1.tgz.

  • This release supports TensorFlow 2.12.0, TensorRT 8.5.2.2, ONNX Runtime 1.14.1, PyTorch 2.0.0a0+8aa34602, and Python 3.8, as well as ensembles.
  • The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.33.0-py3-none-manylinux2014_aarch64.whl[all]

Release 2.32.0 corresponding to NGC container 23.03

28 Mar 22:16
17e971b

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New in 2.32.0

  • Added the Parameters Extension which allows an inference request to provide custom parameters that cannot be provided as inputs. These parameters can be used in the Python backend as described here (a client-side usage sketch follows this list).

  • Added support for models that use the decoupled API for Business Logic Scripting (BLS) in the Python backend. Examples can be found here.

  • The same model name can be used across different repositories if the --model-namespacing flag is set.

  • Triton’s Response Cache feature has been converted internally to a shared library implementation of the new TRITONCACHE APIs, similar to how backends and repo agents are used today. The default cache implementation is local_cache, which is equivalent to the fixed-size in-memory buffer implementation used before. The --response-cache-byte-size flag will continue to function in the same way, but the --cache-config flag will be the preferred method of cache configuration moving forward. For more information, see the cache documentation here.

  • Triton’s trace tool now supports tracing for request_id.

  • Refer to the 23.03 column of the Frameworks Support Matrix for container image versions on which the 23.03 inference server container is based.
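
As a hedged illustration of the Parameters Extension mentioned above, the Python clients accept a parameters argument on infer(). The model name, input, and parameter values below are placeholders, and the snippet assumes a tritonclient version that implements this extension.

import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")  # placeholder address

# Placeholder input; shape and datatype must match the target model.
inputs = [httpclient.InferInput("INPUT0", [1, 4], "FP32")]
inputs[0].set_data_from_numpy(np.zeros((1, 4), dtype=np.float32))

result = client.infer(
    model_name="my_python_model",  # placeholder model name
    inputs=inputs,
    # Custom request parameters that cannot be expressed as inputs; a Python
    # backend model can read them via json.loads(request.parameters()).
    parameters={"my_key": "my_value", "my_threshold": 5},
)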

Known Issues

  • Support for TensorFlow 1 will be removed starting in 23.04.

  • Triton Inferentia guide is out of date. Some users have reported issues with running Triton on AWS Inferentia instances.

  • Some systems which implement malloc() may not release memory back to the operating system right away causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.

    The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.

    Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.32.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.32.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.32.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.14.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 11.8.0

  • cuDNN 8.8.1.3

  • TensorRT 8.5.3.1

Jetson Jetpack Support

A release of Triton for JetPack is provided in the attached tar file: tritonserver2.32.0-jetpack5.1.tgz.

  • This release supports TensorFlow 2.11.0, TensorFlow 1.15.5, TensorRT 8.5.2.2, ONNX Runtime 1.14.1, PyTorch 2.0.0, and Python 3.8, as well as ensembles.
  • The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.32.0-py3-none-manylinux2014_aarch64.whl[all]

Release 2.31.0 corresponding to NGC container 23.02

01 Mar 05:21
60f7af6

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New in 2.31.0

  • Support for ensemble models in Model Analyzer.

  • Support for the gRPC standard health check protocol (a connectivity-check sketch follows this list).

  • Fixed intermittent hangs during model loading for Python backend.

  • Refer to the 23.02 column of the Frameworks Support Matrix for container image versions on which the 23.02 inference server container is based.
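
Since the standard gRPC health checking protocol is now supported, a generic gRPC health client can probe the server. This is a sketch under the assumption that the grpcio-health-checking package is installed and Triton is listening on the default gRPC port 8001.

import grpc
from grpc_health.v1 import health_pb2, health_pb2_grpc

channel = grpc.insecure_channel("localhost:8001")  # placeholder address
stub = health_pb2_grpc.HealthStub(channel)

# An empty service name asks for the overall server status.
response = stub.Check(health_pb2.HealthCheckRequest(service=""))
print(health_pb2.HealthCheckResponse.ServingStatus.Name(response.status))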

Known Issues

  • In some rare cases Triton might overwrite input tensors while they are still in use which leads to corrupt input data being used for inference with TensorRT models. If you encounter accuracy issues with your TensorRT model, you can work-around the issue by enabling the output_copy_stream option in your model's configuration.

  • Some systems which implement malloc() may not release memory back to the operating system right away causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD.

  • When using a custom operator for the PyTorch backend, the operator may not be loaded due to undefined Python library symbols. This can be worked around by specifying the Python library in LD_PRELOAD.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273

  • Perf Analyzer stability criteria have been changed, which may result in reporting instability for scenarios that were previously considered stable. This change has been made to improve the accuracy of Perf Analyzer results. If you observe this message, it can be resolved by increasing the --measurement-interval in the time windows mode or --measurement-request-count in the count windows mode.

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.

    The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.

    Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.31.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.31.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.31.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.13.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 11.8.0

  • cuDNN 8.7.0.84

  • TensorRT 8.5.1.7

Jetson Jetpack Support

A release of Triton for JetPack is provided in the attached tar file: tritonserver2.31.0-jetpack5.1.tgz.

  • This release supports TensorFlow 2.11.0, TensorFlow 1.15.5, TensorRT 8.5.2.2, ONNX Runtime 1.13.1, PyTorch 1.14.0, and Python 3.8, as well as ensembles.
  • The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.31.0-py3-none-manylinux2014_aarch64.whl[all]

Release 2.30.0 corresponding to NGC container 23.01

01 Feb 04:47

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New in 2.30.0

  • The dynamic batcher now accepts user-defined batching constraints, allowing users to specify custom batching strategies.
  • Relaxed Python client gRPC version requirement.
  • Refer to the 23.01 column of the Frameworks Support Matrix for container image versions on which the 23.01 inference server container is based.

Known Issues

  • In some rare cases Triton might overwrite input tensors while they are still in use which leads to corrupt input data being used for inference with TensorRT models. If you encounter accuracy issues with your TensorRT model, you can work-around the issue by enabling the output_copy_stream option in your model's configuration.

  • Some systems which implement malloc() may not release memory back to the operating system right away causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD.

  • When using a custom operator for the PyTorch backend, the operator may not be loaded due to undefined Python library symbols. This can be worked around by specifying the Python library in LD_PRELOAD.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273

  • Perf Analyzer stability criteria have been changed, which may result in reporting instability for scenarios that were previously considered stable. This change has been made to improve the accuracy of Perf Analyzer results. If you observe this message, it can be resolved by increasing the --measurement-interval in the time windows mode or --measurement-request-count in the count windows mode.

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.

    The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.

    Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.30.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.30.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.30.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.13.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 11.8.0

  • cuDNN 8.7.0.84

  • TensorRT 8.5.1.7

Jetson Jetpack Support

A release of Triton for JetPack is provided in the attached tar file: tritonserver2.30.0-jetpack5.1.tgz.

  • This release supports TensorFlow 2.11.0, TensorFlow 1.15.5, TensorRT 8.5.2.1, ONNX Runtime 1.13.1, PyTorch 1.14.0, and Python 3.8, as well as ensembles.
  • The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.30.0-py3-none-manylinux2014_aarch64.whl[all]

Release 2.29.0 corresponding to NGC container 22.12

20 Dec 19:59
2d77fd0

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New in 2.29.0

  • Improvements to container and non-container builds on Windows.

  • Refer to the 22.12 column of the Frameworks Support Matrix for container image versions on which the 22.12 inference server container is based.

Known Issues

  • In some rare cases Triton might overwrite input tensors while they are still in use which leads to corrupt input data being used for inference with TensorRT models. If you encounter accuracy issues with your TensorRT model, you can work-around the issue by enabling the output_copy_stream option in your model's configuration.

  • Some systems which implement malloc() may not release memory back to the operating system right away causing a false memory leak. This can be mitigated by using a different malloc implementation. Tcmalloc is installed in the Triton container and can be used by specifying the library in LD_PRELOAD.

  • When using a custom operator for the PyTorch backend, the operator may not be loaded due to undefined Python library symbols. This can be worked around by specifying the Python library in LD_PRELOAD.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273

  • Perf Analyzer stability criteria have been changed, which may result in reporting instability for scenarios that were previously considered stable. This change has been made to improve the accuracy of Perf Analyzer results. If you observe this message, it can be resolved by increasing the --measurement-interval in the time windows mode or --measurement-request-count in the count windows mode.

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.

    The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.

    Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.29.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.29.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.29.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.13.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 11.8.0

  • cuDNN 8.7.0.84

  • TensorRT 8.5.1.7

Jetson Jetpack Support

NOTE: There is no JetPack release for 22.12; the latest JetPack release is 22.10.

Release 2.28.0 corresponding to NGC container 22.11

22 Nov 21:26

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New in 2.28.0

  • Support for new TensorRT 8.5 features, including:

    • UINT8 I/O
    • "Data-dependent dynamic shapes" operators (i.e., ONNX NMS and NonZero operations)
  • Support for execution environment paths outside the model directory. This can be done via the EXECUTION_ENV_PATH parameter in config.pbtxt. Refer to the Python backend README for known limitations.

  • Refer to the 22.11 column of the Frameworks Support Matrix for container image versions on which the 22.11 inference server container is based.

Known Issues

  • In some rare cases Triton might overwrite input tensors while they are still in use which leads to corrupt input data being used for inference with TensorRT models. If you encounter accuracy issues with your TensorRT model, you can work-around the issue by enabling the output_copy_stream option in your model's configuration.

  • Triton's TensorRT support depends on the CUDA event synchronization. In some rare cases the events may be triggered earlier than expected, causing Triton to overwrite input tensors while they are still in use and leading to corrupt input data being used for inference. If you encounter accuracy issues with your TensorRT model, you can work-around the issue by enabling the output_copy_stream option in your model's configuration.

  • When using a custom operator for the PyTorch backend, the operator may not be loaded due to undefined Python library symbols. This can be worked around by specifying the Python library in LD_PRELOAD.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273

  • Perf Analyzer stability criteria have been changed, which may result in reporting instability for scenarios that were previously considered stable. This change has been made to improve the accuracy of Perf Analyzer results. If you observe this message, it can be resolved by increasing the --measurement-interval in the time windows mode or --measurement-request-count in the count windows mode.

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.

    The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.

    Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.28.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.28.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.28.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.13.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 11.8.0

  • cuDNN 8.7.0.80

  • TensorRT 8.5.1.7

Jetson Jetpack Support

NOTE: There is no JetPack release for 22.11; the latest JetPack release is 22.10.

Release 2.27.0 corresponding to NGC container 22.10

02 Nov 22:20

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New in 2.27.0

Known Issues

  • Triton's TensorRT support depends on the CUDA event synchronization. In some rare cases the events may be triggered earlier than expected, causing Triton to overwrite input tensors while they are still in use and leading to corrupt input data being used for inference. If you encounter accuracy issues with your TensorRT model, you can work-around the issue by enabling the output_copy_stream option in your model's configuration.

  • When using a custom operator for the PyTorch backend, the operator may not be loaded due to undefined Python library symbols. This can be worked around by specifying the Python library in LD_PRELOAD.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273

  • Perf Analyzer stability criteria have been changed, which may result in reporting instability for scenarios that were previously considered stable. This change has been made to improve the accuracy of Perf Analyzer results. If you observe this message, it can be resolved by increasing the --measurement-interval in the time windows mode or --measurement-request-count in the count windows mode.

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA.

    The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU.

    Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.27.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.27.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.27.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.13.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 11.8.0

  • cuDNN 8.6.0.163

  • TensorRT 8.5.0.12

Jetson Jetpack Support

A release of Triton for JetPack is provided in the attached tar file: tritonserver2.27.0-jetpack5.0.2.tgz.

  • This release supports TensorFlow 2.10.0, TensorFlow 1.15.5, TensorRT 8.4.1.5, ONNX Runtime 1.13.1, PyTorch 1.13.0, and Python 3.8, as well as ensembles.
  • The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.27.0-py3-none-manylinux2014_aarch64.whl[all]

Release 2.26.0 corresponding to NGC container 22.09

04 Oct 00:55

Triton Inference Server

The Triton Inference Server provides a cloud inferencing solution optimized for both CPUs and GPUs. The server provides an inference service via an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. For edge deployments, Triton Server is also available as a shared library with an API that allows the full functionality of the server to be included directly in an application.

What's New in 2.26.0

  • Added a developer tools GitHub repository that provides a simplified interface for users to interact with the Triton Core shared library. These developer tools are in beta and are subject to change.

  • Added CPU metrics reporting in Triton’s Prometheus metrics endpoint.

  • Added logging protocol extension for users to change logging configuration dynamically.

  • Users can specify custom plugins to be loaded for the TensorRT backend through a command line option in addition to LD_PRELOAD.

  • Enabled auto-completion for OpenVINO backend.

  • Enabled the Python backend to log messages through Triton’s logger (a model.py sketch follows this list).

  • Refer to the 22.09 column of the Frameworks Support Matrix for container image versions on which the 22.09 inference server container is based.

  • Added quick search algorithm to Model Analyzer to drastically reduce search time.

  • Added GPU metrics gathering to Perf Analyzer, which is also used by Model Analyzer to improve accuracy of those metrics.

  • NGC container release 22.09 supports CUDA compute capability 6.0 and later. This corresponds to GPUs in the NVIDIA Pascal, NVIDIA Volta™, NVIDIA Turing™, NVIDIA Ampere architecture, and NVIDIA Hopper™ architecture families.
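
A minimal sketch of logging through Triton's logger from the Python backend, as mentioned above; it assumes the code lives in a Python backend model.py where triton_python_backend_utils is available.

import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Messages go through Triton's logger and respect the server's log settings.
        pb_utils.Logger.log_info("Model initialized")
        pb_utils.Logger.log_warn("Example warning message")
        pb_utils.Logger.log_verbose("Only visible when verbose logging is enabled")

    def execute(self, requests):
        # A real model would compute output tensors here.
        return [pb_utils.InferenceResponse(output_tensors=[]) for _ in requests]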

Known Issues

  • In certain rare cases with specific backends, the Triton server may crash with a segmentation fault when exiting. Preliminary analysis shows that there might be a race condition in the cleanup of backend/model/instance state objects. The exact root cause is still unknown.

  • Triton's TensorRT support depends on the CUDA event synchronization. In some rare cases the events may be triggered earlier than expected, causing Triton to overwrite input tensors while they are still in use and leading to corrupt input data being used for inference. If you encounter accuracy issues with your TensorRT model, you can work-around the issue by enabling the output_copy_stream option in your model's configuration.

  • When using a custom operator for the PyTorch backend, the operator may not be loaded due to undefined Python library symbols. This can be worked around by specifying the Python library in LD_PRELOAD.

  • Auto-complete may cause an increase in server start time. To avoid a start time increase, users can provide the full model configuration and launch the server with --disable-auto-complete-config.

  • Auto-complete does not support PyTorch models due to lack of metadata in the model. It can only verify that the number of inputs and the input names match what is specified in the model configuration. There is no model metadata about the number of outputs and datatypes. Related PyTorch bug: pytorch/pytorch#38273

  • Perf Analyzer stability criteria have been changed, which may result in reporting instability for scenarios that were previously considered stable. This change has been made to improve the accuracy of Perf Analyzer results. If you observe this message, it can be resolved by increasing the --measurement-interval in the time windows mode or --measurement-request-count in the count windows mode.

  • Triton Client PIP wheels for ARM SBSA are not available from PyPI and pip will install an incorrect Jetson version of Triton Client library for Arm SBSA. The correct client wheel file can be pulled directly from the Arm SBSA SDK image and manually installed.

  • Traced models in PyTorch seem to create overflows when int8 tensor values are transformed to int32 on the GPU. Refer to pytorch/pytorch#66930 for more information.

  • Triton cannot retrieve GPU metrics with MIG-enabled GPU devices (A100 and A30).

  • Triton metrics might not work if the host machine is running a separate DCGM agent on bare-metal or in a container.

Client Libraries and Examples

Ubuntu 20.04 builds of the client libraries and examples are included in this release in the attached v2.26.0_ubuntu2004.clients.tar.gz file. The SDK is also available as an Ubuntu 20.04-based NGC container. The SDK container includes the client libraries and examples, Performance Analyzer, and Model Analyzer. Some components are also available in the tritonclient pip package. See Getting the Client Libraries for more information on each of these options.

For Windows, the client libraries and some examples are available in the attached tritonserver2.26.0-sdk-win.zip file.

Windows Support

A beta release of Triton for Windows is provided in the attached file: tritonserver2.26.0-win.zip. This is a beta release so functionality is limited and performance is not optimized. Additional features and improved performance will be provided in future releases. Specifically in this release:

  • HTTP/REST and GRPC endpoints are supported.

  • ONNX models are supported by the ONNX Runtime backend. The ONNX Runtime version is 1.12.1. The CPU, CUDA, and TensorRT execution providers are supported. The OpenVINO execution provider is not supported.

  • OpenVINO models are supported. The OpenVINO version is 2021.4.

  • Prometheus metrics endpoint is not supported.

  • System and CUDA shared memory are not supported.

To use the Windows version of Triton, you must install all the necessary dependencies on your Windows system. These dependencies are available in the Dockerfile.win10.min. The Dockerfile includes the following CUDA-related components:

  • CUDA 11.8.0

  • cuDNN 8.6.0.163

  • TensorRT 8.5.0.12

Jetson Jetpack Support

A release of Triton for JetPack is provided in the attached tar file: tritonserver2.26.0-jetpack5.0.2.tgz.

  • This release supports TensorFlow 2.9.1, TensorFlow 1.15.5, TensorRT 8.4.1.5, ONNX Runtime 1.12.0, PyTorch 1.13.0, and Python 3.8, as well as ensembles.
  • The ONNX Runtime backend does not support the OpenVINO and TensorRT execution providers. The CUDA execution provider is in beta.
  • System shared memory is supported on Jetson. CUDA shared memory is not supported.
  • GPU metrics, GCS storage, S3 storage and Azure storage are not supported.

The tar file contains the Triton server executable and shared libraries and also the C++ and Python client libraries and examples. For more information on how to install and use Triton on JetPack refer to jetson.md.

The wheel for the Python client library is present in the tar file and can be installed by running the following command:

python3 -m pip install --upgrade clients/python/tritonclient-2.26.0-py3-none-manylinux2014_aarch64.whl[all]