
Batch Matrix Multiplication using CuBLAS #313

Open
Darshcg opened this issue Feb 18, 2021 · 0 comments
Darshcg commented Feb 18, 2021

Hi @lebedov,

Thanks for your great work!

I am working on registering a TensorRT plugin for an operator (Einsum) that TensorRT does not currently support. Instead of implementing a CUDA kernel myself, I want to use the cuBLAS library for the batch matrix multiplication.

The equations I want to implement (from the Einsum operator) are:
"ntg,ncg->nct" and "nct,ncp->ntp" (both batch matrix multiplications)

Info about Einsum op: https://github.com/onnx/onnx/blob/master/docs/Operators.md#Einsum
I need guidance on using the cuBLAS library for batched matrix multiplication for the above two ops.

I have been reading the available reference (https://docs.nvidia.com/cuda/cublas/index.html#cublas-lt-t-gt-gemmbatched), but I cannot work out how to apply it to the above two equations.

Could you please assist me with this?

Thanks in Advance,
Darshan C G
