❓ [Question] How to specify that aten operators must be run by LibTorch in C++? #2830
Comments
I came up with this solution. I use the code below to replace the `%` op:

```python
def TakeRemainder(x: int, y: int) -> int:
    return x - y * int(x / y)
```

And it works. However, I want to know why this setting doesn't take effect:

```cpp
compile_settings.torch_executed_ops.push_back("aten::floor_divide");
```
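As a side note, here is a minimal pure-Python check of this replacement (plain Python, not TorchScript-specific): the `int(x / y)` trick truncates toward zero, so it matches Python's `%` for non-negative operands but diverges when a sign is negative.

```python
def take_remainder(x: int, y: int) -> int:
    # Same identity as the workaround above: x - y * trunc(x / y).
    # int() truncates toward zero, so this is C-style remainder.
    return x - y * int(x / y)

# Matches Python's % for non-negative operands:
assert take_remainder(7, 3) == 7 % 3 == 1
assert take_remainder(10, 5) == 10 % 5 == 0
# C-style (truncated) remainder keeps the dividend's sign:
assert take_remainder(-7, 3) == -1   # Python's -7 % 3 would be 2
```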
Hi - thanks for the report. I think this may be related to the following lowering pass, where it's possible that both inputs are upcast integers, so we accidentally construct a schema which is no longer valid:

TensorRT/core/lowering/passes/remove_unnecessary_casts.cpp, Lines 135 to 141 in 4b993f8
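For background (a pure-Python illustration, not Torch-TensorRT code): integer `%` can be lowered through floor division, and floor division differs from C-style truncating division once a sign is negative.

```python
def floor_div(x: int, y: int) -> int:
    # Python's // floors toward negative infinity
    return x // y

def trunc_div(x: int, y: int) -> int:
    # C-style division truncates toward zero
    return int(x / y)

# The two agree for non-negative operands:
assert floor_div(7, 2) == trunc_div(7, 2) == 3
# ...but differ once a sign is negative:
assert floor_div(-7, 2) == -4
assert trunc_div(-7, 2) == -3
# The remainder identity x % y == x - y * (x // y) uses the floor form:
assert -7 - 2 * floor_div(-7, 2) == -7 % 2 == 1
```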
Regarding why
So this is a bug, right? Will you fix it in the future?
Yes, this appears to be a bug, and we can work on a fix. Do you have a reproducer script or model we could use to recreate the error?
This is the code:

```cpp
torch::Device* device_ = new torch::Device(torch::DeviceType::CUDA);
device_->set_index(0);
torch::jit::script::Module model = torch::jit::load(model_path);
model.to("cuda");
model.eval();
model.to(torch::kHalf);
std::vector<int64_t> input_dim{1, 3, 832, 1440};
auto input = torchtrt::Input(input_dim, torchtrt::DataType::kHalf);
size_t _1_GB = 1 << 30;
torchtrt::ts::CompileSpec compile_settings({ input });
compile_settings.enabled_precisions.insert(torchtrt::DataType::kHalf);
compile_settings.workspace_size = _1_GB;
compile_settings.truncate_long_and_double = true;
compile_settings.num_avg_timing_iters = 1;
torchtrt::ts::compile(model, compile_settings);
```

Additionally, I have shared the model via Google Drive.
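As a quick sanity check on the workspace size used above: `1 << 30` shifts 1 left by 30 bits, i.e. 2^30 bytes = 1 GiB (a trivial arithmetic check, shown here in Python):

```python
# 1 << 30 is 2**30 bytes, i.e. 1 GiB
ONE_GB = 1 << 30
assert ONE_GB == 2 ** 30 == 1_073_741_824
```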
Hello - thanks for the details. I am unable to access the model at that link; is it available elsewhere? Also, could you provide the full debug log, using the following logging level:
I have changed the access permissions on the model; the link is accessible now.
❓ Question
When I compile the SwinTransformer model using Torch-TensorRT, an error appears:
I checked out this link; this error occurs because Torch-TensorRT doesn't support the `%` op.
Fine, I can choose to run `floor_divide` using LibTorch instead.
It's strange that the setting does not take effect; the error still persists.
What can I do about this?
Furthermore, how do I specify that certain aten operators must be run by LibTorch in C++?
Environment
conda
,pip
,libtorch
, source):The text was updated successfully, but these errors were encountered: