TensorRT Backend

The Triton backend for TensorRT. You can learn more about Triton backends in the backend repo. Ask questions or report problems on the issues page. This backend is designed to run serialized TensorRT engine models using the TensorRT C++ API.

Where can I ask general questions about Triton and Triton backends? Be sure to read all the information below as well as the general Triton documentation available in the main server repo. If you don't find your answer there you can ask questions on the main Triton issues page.
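For orientation, a TensorRT model is served by placing a serialized engine (a model.plan file) inside a version subdirectory of a Triton model repository. The sketch below is a minimal, hypothetical layout: the repository path /models, the model name my_trt_model, and the max_batch_size value are illustrative, while platform: "tensorrt_plan" is the standard setting that routes a model to this backend.

$ # create the versioned model directory and copy in the serialized engine
$ mkdir -p /models/my_trt_model/1
$ cp model.plan /models/my_trt_model/1/model.plan
$ # write a minimal model configuration
$ cat > /models/my_trt_model/config.pbtxt <<EOF
name: "my_trt_model"
platform: "tensorrt_plan"
max_batch_size: 8
EOF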

Command-line Options

The command-line options configure properties of the TensorRT backend that are then applied to all models that use the backend.

Below is an example of how to specify the backend config, followed by the full list of options; a complete server launch command is shown after the list.

--backend-config=tensorrt,coalesce-request-input=<boolean>,plugins="/path/plugin1.so;/path2/plugin2.so",version-compatible=true
  • The coalesce-request-input flag instructs TensorRT to consider the requests' inputs with the same name as one contiguous buffer if their memory addresses align with each other. This option should only be enabled if all requests' input tensors are allocated from the same memory region. The default value is false.

  • The execution-policy flag instructs the TensorRT backend to execute the model with a different Triton execution policy (see TRITONBACKEND_ExecutionPolicy for details). Currently the following values are accepted:

    • DEVICE_BLOCKING: corresponds to TRITONBACKEND_EXECUTION_DEVICE_BLOCKING; this option can be set to avoid possible CUDA contention from launching many kernels from multiple threads.
    • BLOCKING: corresponds to TRITONBACKEND_EXECUTION_BLOCKING; this option can be set to overlap the host thread workload between model instances.
  • The plugins flag provides a way to load any custom TensorRT plugins that your models rely on. If you have multiple plugins to load, use a semicolon as the delimiter.

  • The version-compatible flag enables the loading of version-compatible TensorRT models, where the version of TensorRT used to build the engine does not match the version of TensorRT used by Triton. You must trust the models loaded in this mode, as version-compatible models include a lean runtime which gets deserialized and executed by Triton. You can find more information in the TensorRT documentation here. The default value is false.
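Putting these options together, a server launch might look like the following sketch. The model repository path /models and the plugin path /opt/plugins/custom_plugin.so are hypothetical; the option names and values are those listed above.

$ # launch Triton with an execution policy and a custom plugin for the TensorRT backend
$ tritonserver --model-repository=/models \
      --backend-config=tensorrt,execution-policy=DEVICE_BLOCKING,plugins="/opt/plugins/custom_plugin.so"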

Build the TensorRT Backend

An appropriate version of TensorRT must be installed on the system. Check the support matrix to find the correct version of TensorRT to install.

$ mkdir build
$ cd build
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install ..
$ make install

The following required Triton repositories will be pulled and used in the build. By default, the "main" branch/tag is used for each repo, but the listed CMake argument can be used to override it (see the example after the list).

  • triton-inference-server/backend: -DTRITON_BACKEND_REPO_TAG=[tag]
  • triton-inference-server/core: -DTRITON_CORE_REPO_TAG=[tag]
  • triton-inference-server/common: -DTRITON_COMMON_REPO_TAG=[tag]
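For example, to pin all three repositories to a matching release branch instead of main, pass the tags at configure time. The tag r23.10 below is purely illustrative; substitute the release that matches your Triton build.

$ # configure the build with all Triton repositories pinned to one release tag
$ cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install \
      -DTRITON_BACKEND_REPO_TAG=r23.10 \
      -DTRITON_CORE_REPO_TAG=r23.10 \
      -DTRITON_COMMON_REPO_TAG=r23.10 ..
$ make install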