Understand oneDNN graph compiler #1797
Comments
Hi Aruna, the framework integration is WIP; you can check its status in the framework RFC. Currently, the recommended way to try the graph compiler with PyTorch is to use IPEX. The graph compiler is enabled by default in its INT8 path; for other data types such as float32 or bfloat16, you need to manually turn on oneDNN Graph by inserting the line below at the beginning of your model script:
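The exact snippet from this comment was lost in extraction. As a hedged reconstruction: in stock PyTorch, oneDNN Graph fusion for TorchScript is toggled with `torch.jit.enable_onednn_fusion`; whether this is the precise line the maintainer meant for IPEX is an assumption.

```python
# Hypothetical sketch, assuming the standard PyTorch oneDNN Graph toggle
# is what the comment referred to.
import torch

torch.jit.enable_onednn_fusion(True)      # opt in to oneDNN Graph fusion
print(torch.jit.onednn_fusion_enabled())  # → True
```

With fusion enabled, subsequently scripted or traced models can have eligible subgraphs dispatched to oneDNN Graph.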
@ZhennanQin Thank you.
@ZhennanQin, I was going through https://arxiv.org/abs/2301.01333. After TensorIR, how does it call the batch-reduce kernel? I also want to understand where exactly the microkernel file is located in the oneDNN repository.
The brgemm interface is located at https://github.com/oneapi-src/oneDNN/blob/main/src/cpu/x64/brgemm/brgemm.hpp
@ZhennanQin, does the graph compiler call the microkernel through the primitive APIs?
The graph compiler shares the same microkernel implementation with the primitives. The brgemm abstraction in the graph compiler can be found at https://github.com/oneapi-src/oneDNN/blob/main/src/graph/backend/graph_compiler/core/src/runtime/microkernel/cpu/microkernel.hpp
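For context on what that shared microkernel computes: batch-reduce GEMM (BRGEMM) accumulates a whole batch of small matrix products into a single C tile, C += Σ_i A_i · B_i. Below is a minimal pure-Python sketch of those semantics, for illustration only; it is not the oneDNN brgemm API, and the function name is made up.

```python
def brgemm_ref(a_batch, b_batch, c):
    """Reference batch-reduce GEMM semantics: C += sum_i A_i @ B_i.

    a_batch: list of MxK matrices, b_batch: list of KxN matrices,
    c: MxN accumulator; all are nested Python lists.
    """
    for a, b in zip(a_batch, b_batch):
        for m in range(len(c)):
            for n in range(len(c[0])):
                c[m][n] += sum(a[m][k] * b[k][n] for k in range(len(b)))
    return c

# Two 2x2 products accumulated into one output tile.
a_batch = [[[1, 0], [0, 1]], [[2, 0], [0, 2]]]  # A_0 = I, A_1 = 2*I
b_batch = [[[1, 2], [3, 4]], [[1, 2], [3, 4]]]  # B_0 = B_1 = B
c = [[0, 0], [0, 0]]
brgemm_ref(a_batch, b_batch, c)  # C = I@B + 2I@B = 3B
# → [[3, 6], [9, 12]]
```

The real microkernel JIT-compiles this reduction loop into a single AVX-512/AMX kernel so the accumulator tile stays in registers across the whole batch, which is the key to its efficiency.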
@ZhennanQin, thank you.
Hi,
I wanted to understand how we can enable the graph compiler from a framework like TensorFlow or PyTorch.
cc: @ZhennanQin
Thank you