
Failed to build tensorrt engine with DLA enabled on Jetson Xavier NX #3847

Open
harishkool opened this issue May 7, 2024 · 4 comments
Labels
triaged Issue has been triaged by maintainers

Comments

@harishkool

Description

TensorRT engine build failed with the error: Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/cnn/cnn.0/Conv]}.)

Environment

TensorRT Version: 8.5.2

NVIDIA GPU: Volta GPU

CUDA Version: 11.4

CUDNN Version: 8.6

Operating System: Ubuntu 20

Platform : Jetson Xavier NX

Relevant Files

Model link: https://drive.google.com/file/d/1K5kQxR0IR-SGF6Ry1V44R-bmfwF4NPPx/view?usp=sharing

Steps To Reproduce

  1. Took the example model from https://github.com/NVIDIA-AI-IOT/jetson_dla_tutorial
  2. Exported the model to ONNX format.
  3. Tried building the engine with the command /usr/src/tensorrt/bin/trtexec --onnx=model_gn.onnx --shapes=input:32x3x32x32 --saveEngine=model_gn.engine --exportProfile=model_gn.json --int8 --useDLACore=0 --allowGPUFallback --useSpinWait --separateProfileRun
  4. The build failed with the error Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/cnn/cnn.0/Conv]}.)
  5. The complete log is available here: https://drive.google.com/file/d/1Ude0Pb3VOb_rzJhbzu_AXtlk8HUUNbzT/view?usp=drive_link
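The verbose build log linked above is long. A small, hypothetical helper script like the following (stdlib only, not part of trtexec or TensorRT) can pull out just the lines that explain DLA placement problems; the exact message wording differs between TensorRT versions, so treat the patterns as a starting point:

```python
# Hypothetical helper (not part of trtexec): scan a trtexec --verbose log
# for messages explaining why a layer could not be placed on the DLA.
import re

# Messages TensorRT's builder commonly emits around DLA placement failures
# or GPU fallback; exact wording varies by TensorRT version.
PATTERNS = [
    re.compile(r"not supported on DLA", re.IGNORECASE),
    re.compile(r"falling back to GPU", re.IGNORECASE),
    re.compile(r"Error Code 10.*ForeignNode\[(?P<node>[^\]]+)\]"),
]

def scan_log(text: str) -> list[str]:
    """Return the log lines that mention DLA placement problems."""
    hits = []
    for line in text.splitlines():
        if any(p.search(line) for p in PATTERNS):
            hits.append(line.strip())
    return hits

if __name__ == "__main__":
    with open("trtexec_verbose.log") as f:  # path is an example
        for hit in scan_log(f.read()):
            print(hit)
```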

Have you tried the latest release?: N/A

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt): Yes

@lix19937

Try:

/usr/src/tensorrt/bin/trtexec --onnx=model_gn.onnx --shapes=input:32x3x32x32 --saveEngine=model_gn.engine --exportProfile=model_gn.json --best --useDLACore=0 --allowGPUFallback --useSpinWait --separateProfileRun

@harishkool
Author

Same error:

[05/10/2024-12:14:41] [E] Error[10]: [optimizer.cpp::computeCosts::3728] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[/cnn/cnn.0/Conv]}.)
[05/10/2024-12:14:41] [E] Error[2]: [builder.cpp::buildSerializedNetwork::751] Error Code 2: Internal Error (Assertion engine != nullptr failed. )
[05/10/2024-12:14:41] [E] Engine could not be created from network
[05/10/2024-12:14:41] [E] Building engine failed
[05/10/2024-12:14:41] [E] Failed to create engine from model or file.
[05/10/2024-12:14:41] [E] Engine set up failed

You can find the verbose log here https://drive.google.com/file/d/17o5k7_1ZPEd_iNScTUOKKRsa167VWjWs/view?usp=drive_link.

@lix19937

lix19937 commented May 11, 2024

Check whether your conv layer matches the DLA conditions. For the supported layers and their restrictions when running on DLA, see https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#dla-lay-supp-rest

Alternatively, you can update to the latest version of TensorRT.
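As a rough illustration of what "matching the DLA conditions" means for a convolution, here is a minimal, hypothetical checker (not part of TensorRT). The numeric limits below (kernel 1–32, stride 1–8, dilation 1–32, padding 0–31 per spatial dimension) are taken from the convolution restrictions in the TensorRT developer guide's DLA section, but they change between releases, so verify them against the docs for your exact version:

```python
# Illustrative sketch only: check a convolution's hyper-parameters against
# the DLA convolution restrictions listed in the TensorRT developer guide.
# Limits below are from the TensorRT 8.x docs; verify for your version.
from dataclasses import dataclass

@dataclass
class ConvParams:
    kernel: tuple[int, int]    # (H, W) kernel size
    stride: tuple[int, int]
    dilation: tuple[int, int]
    padding: tuple[int, int]

def dla_conv_violations(p: ConvParams) -> list[str]:
    """Return human-readable reasons why this conv may be rejected by DLA."""
    problems = []
    if not all(1 <= k <= 32 for k in p.kernel):
        problems.append(f"kernel size {p.kernel} outside [1, 32]")
    if not all(1 <= s <= 8 for s in p.stride):
        problems.append(f"stride {p.stride} outside [1, 8]")
    if not all(1 <= d <= 32 for d in p.dilation):
        problems.append(f"dilation {p.dilation} outside [1, 32]")
    if not all(0 <= pad <= 31 for pad in p.padding):
        problems.append(f"padding {p.padding} outside [0, 31]")
    return problems

# A plain 3x3 conv like the tutorial model's passes these checks, which
# suggests the rejection comes from something else fused into the same
# ForeignNode (e.g. a normalization layer the DLA cannot run).
print(dla_conv_violations(ConvParams((3, 3), (1, 1), (1, 1), (1, 1))))
```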

@zerollzeng zerollzeng added the triaged Issue has been triaged by maintainers label May 12, 2024
@harishkool
Author

harishkool commented May 19, 2024

I took the example model from the Jetson DLA tutorial (https://github.com/NVIDIA-AI-IOT/jetson_dla_tutorial), so its layers should be supported.
