There is a problem with Topk output #3854

Open
lcx1874000 opened this issue May 10, 2024 · 3 comments
Assignees
Labels
triaged Issue has been triaged by maintainers

Comments

@lcx1874000

Description

Environment

TensorRT Version: 8.6.1

NVIDIA GPU: 3080 Ti

NVIDIA Driver Version: 536.40

CUDA Version: 12.1

CUDNN Version: 6.5.0

Relevant Files

// Build a network with a single TopK (kMIN, k = 1) layer.
nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
nvinfer1::INetworkDefinition* network = builder->createNetworkV2(1); // explicit batch
nvinfer1::ITensor* input_data = network->addInput("input", nvinfer1::DataType::kFLOAT, nvinfer1::Dims4{ 1, 2, 1024, 1024 });

// reduceAxis is a bitmask: bit i selects dimension i of the input.
uint32_t reduceAxis = 0x1;
nvinfer1::ITopKLayer* topk = network->addTopK(*input_data, nvinfer1::TopKOperation::kMIN, 1, reduceAxis);
assert(topk != nullptr);
int k = topk->getK();

// Output 1 of the TopK layer holds the indices.
topk->getOutput(1)->setName("index");
topk->getOutput(1)->setType(nvinfer1::DataType::kINT32);
network->markOutput(*topk->getOutput(1));

// Serialize the engine and deserialize it for inference.
nvinfer1::IHostMemory* modelStream = builder->buildSerializedNetwork(*network, *config);
nvinfer1::IRuntime* runtime1 = nvinfer1::createInferRuntime(gLogger);
auto mEngine = std::shared_ptr<nvinfer1::ICudaEngine>(
    runtime1->deserializeCudaEngine(modelStream->data(), modelStream->size()), samplesCommon::InferDeleter());
//nvinfer1::ICudaEngine* engine1 = runtime1->deserializeCudaEngine(modelStream->data(), modelStream->size());
nvinfer1::IExecutionContext* context1 = mEngine->createExecutionContext();
assert(context1 != nullptr);

// Inspect the binding dimensions of the input and of the index output.
int32_t nbBindings1 = mEngine->getNbBindings();
nvinfer1::Dims input_topk = mEngine->getBindingDimensions(0);
nvinfer1::Dims output_topk = mEngine->getBindingDimensions(1);

Model link:

Steps To Reproduce

@lcx1874000
Author

Description:
I created a standalone TopK network layer to replace argmax in a segmentation model, but the output is wrong. The input dimensions are 1x2x1024x1024 and I only take the index output of the layer, yet the output dimensions are still 1x2x1024x1024 and the results are all zeros.
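
For context, the last argument of addTopK is a bitmask of reduction axes in which bit i selects dimension i, so reduceAxis = 0x1 reduces over dimension 0 (the batch dimension of size 1); that would leave the shape at 1x2x1024x1024 and make every index 0. Below is a minimal sketch of reducing over the channel dimension instead, reusing network and input_data from the snippet above (the axis choice is an assumption based on the goal of replacing a channel-wise argmax/argmin):

uint32_t channelAxis = 1u << 1; // bit 1 selects dimension 1, the channel axis of NCHW
nvinfer1::ITopKLayer* topkC = network->addTopK(*input_data, nvinfer1::TopKOperation::kMIN, 1, channelAxis);
assert(topkC != nullptr);
// Output 1 carries the indices (INT32); with k = 1 its shape should be 1x1x1024x1024.
nvinfer1::ITensor* indices = topkC->getOutput(1);
indices->setName("index");
network->markOutput(*indices);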

@zerollzeng
Collaborator

Your input is float but the output is int32?

zerollzeng self-assigned this May 12, 2024
zerollzeng added the triaged (Issue has been triaged by maintainers) label May 12, 2024
@zerollzeng
Collaborator

This looks more like a code issue to me. Perhaps you can create an ONNX model and use our ONNX parser instead.
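
For reference, a minimal sketch of that parser route, assuming the model has been exported to a file; "argmin.onnx" is a placeholder path and gLogger is the same logger used in the snippet above:

#include <NvInfer.h>
#include <NvOnnxParser.h>

// Build a serialized engine from an ONNX file instead of assembling the network by hand.
nvinfer1::IHostMemory* buildFromOnnx(nvinfer1::ILogger& gLogger)
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    const uint32_t flags = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    nvinfer1::INetworkDefinition* network = builder->createNetworkV2(flags);

    // Parse the ONNX model (containing the ArgMin/TopK node) into the network.
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);
    if (!parser->parseFromFile("argmin.onnx", static_cast<int32_t>(nvinfer1::ILogger::Severity::kWARNING)))
    {
        return nullptr; // inspect parser->getNbErrors() / parser->getError(i) for details
    }

    nvinfer1::IBuilderConfig* config = builder->createBuilderConfig();
    return builder->buildSerializedNetwork(*network, *config);
}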
