
Error with rembg node: TensorRT compatibility issue #356

Open
tristan22mc opened this issue Mar 8, 2024 · 1 comment
@tristan22mc

Description:
I am encountering an error when running the rembg node from the "was-node-suite-comfyui" custom node suite for ComfyUI. The error appears to stem from a compatibility issue between the ONNX model used by the rembg node and TensorRT.

Environment:

Python version: 3.10.11
Operating system: Windows
CUDA version: 11
rembg version: 2.0.30
onnxruntime version: 1.13.1

Steps to reproduce:

Install Python 3.10.11 on a Windows machine.
Install the CUDA toolkit version 11.
Install the "was-node-suite-comfyui" custom node suite for ComfyUI.
Install the required packages: pip install rembg==2.0.30 onnxruntime==1.13.1.
Run the ComfyUI workflow that includes the rembg node.
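Before re-running the workflow, it may help to confirm whether the TensorRT execution provider is registered at all, since the warnings below only appear when it is. A minimal diagnostic sketch (the helper function is mine, not part of rembg or ComfyUI):

```python
# Diagnostic sketch: check whether ONNX Runtime would consider the TensorRT
# execution provider. "TensorrtExecutionProvider" is the standard onnxruntime
# provider name; the helper itself is illustrative.

def tensorrt_registered(providers):
    """True if the TensorRT provider appears in an availability list."""
    return "TensorrtExecutionProvider" in providers

# Usage, assuming onnxruntime is installed as in the steps above:
# import onnxruntime as ort
# print(ort.get_available_providers())
# print("TensorRT registered:", tensorrt_registered(ort.get_available_providers()))
print(tensorrt_registered(["TensorrtExecutionProvider", "CPUExecutionProvider"]))  # → True
```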

Expected behavior:
The rembg node should successfully remove the background from the input image without any errors.

Actual behavior:
The workflow execution fails with the following error message:

2024-03-07 23:10:34.5957583 [W:onnxruntime:Default, tensorrt_execution_provider.h:83 onnxruntime::TensorrtLogger::log] [2024-03-08 04:10:34 WARNING] onnx2trt_utils.cpp:369: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

2024-03-07 23:11:13.8422513 [W:onnxruntime:Default, tensorrt_execution_provider.h:83 onnxruntime::TensorrtLogger::log] [2024-03-08 04:11:13 WARNING] The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.

Attempted solutions:

Set the environment variable ORT_TENSORRT_ENGINE_CACHE_ENABLE=0 in an attempt to disable TensorRT (note: this variable actually controls TensorRT engine caching, not whether the provider is used).
Install the onnxruntime-gpu package instead of onnxruntime.
Install the CPU-only onnxruntime build instead (note: the plain onnxruntime package is already the CPU build; there is no separate onnxruntime-cpu package).
Uninstall and reinstall the rembg and onnxruntime packages with specific versions (rembg==2.0.30 and onnxruntime==1.13.1).
None of the above solutions resolved the issue.
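A further workaround worth trying (a sketch only, not verified against rembg 2.0.30's API): create the ONNX Runtime session with TensorRT filtered out of the provider list, so the INT64 cast path is never taken:

```python
# Workaround sketch: exclude the TensorRT provider when creating an
# InferenceSession directly with onnxruntime. The model path below is
# hypothetical; rembg 2.0.30 may not expose a providers argument itself.

def without_tensorrt(providers):
    """Drop the TensorRT provider, keeping the CUDA/CPU preference order."""
    return [p for p in providers if p != "TensorrtExecutionProvider"]

# Usage, assuming onnxruntime is installed:
# import onnxruntime as ort
# providers = without_tensorrt(ort.get_available_providers())
# session = ort.InferenceSession("u2net.onnx", providers=providers)  # hypothetical model path
print(without_tensorrt(
    ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]
))  # → ['CUDAExecutionProvider', 'CPUExecutionProvider']
```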

Additional information:

The error message suggests that the ONNX model contains INT64 weights, which are not natively supported by TensorRT, and it is attempting to cast them down to INT32.
The issue persists even when using different execution providers (CPU, GPU) for ONNX Runtime.
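To confirm the INT64 diagnosis independently of TensorRT, the model's initializers can be inspected with the onnx package; a sketch (the model path is hypothetical):

```python
# Inspection sketch: list initializers stored as INT64, the dtype that
# triggers TensorRT's cast-to-INT32 warning. 7 is the TensorProto.INT64
# enum value in the ONNX data-type table.
ONNX_INT64 = 7

def int64_initializer_names(graph):
    """Names of graph initializers whose tensors are stored as INT64."""
    return [t.name for t in graph.initializer if t.data_type == ONNX_INT64]

# Usage, assuming the onnx package is installed:
# import onnx
# model = onnx.load("u2net.onnx")  # hypothetical path to the rembg model file
# print(int64_initializer_names(model.graph))
```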

I would greatly appreciate any assistance or guidance in resolving this compatibility issue between the rembg node and TensorRT. Please let me know if you need any further information or if there are specific steps I should take to troubleshoot the problem.

Thank you for your help!

@WASasquatch (Owner)

I'll have to dive into this later, but on the surface I would assume rembg/ONNX here is intended for PyTorch-native usage?
