
feat(webgpu): add tensor operations #904

Draft
wants to merge 5 commits into main

Conversation


@DonIsaac DonIsaac commented Jan 2, 2024

What This PR Does

Starts a general binary op implementation for WebGPU. This code is based on an [old, in-progress WGPU branch](https://github.com/DonIsaac/dfdx/tree/don/feat/wgpu2) I made a while ago.

I've (mostly) got the forward pass down, but I think I'll need some help with the backward pass.

Todo

  • Implement BinaryOpWebgpuKernel.forward() (rough sketch after this list)
  • Implement UnaryOpWebgpuKernel.forward() (may require push constants)
  • Implement binary add
  • Implement to_dtype
  • Pipeline caching (may require std)
  • Support f16 via shader-f16 feature
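
For reference, here's a rough sketch of what the generic forward pass could look like. Everything here is illustrative, not dfdx's actual API: `binary_forward` and `ADD_WGSL` are hypothetical names, the WGSL is a plain elementwise add, and the calls assume a wgpu 0.18-era API. A real kernel would also pull the pipeline from the cache mentioned above instead of rebuilding it per call.

```rust
// Hypothetical sketch of an elementwise binary-op forward pass on wgpu.
const ADD_WGSL: &str = r#"
@group(0) @binding(0) var<storage, read> lhs: array<f32>;
@group(0) @binding(1) var<storage, read> rhs: array<f32>;
@group(0) @binding(2) var<storage, read_write> out: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    let i = id.x;
    if (i < arrayLength(&out)) {
        out[i] = lhs[i] + rhs[i];
    }
}
"#;

fn binary_forward(
    device: &wgpu::Device,
    queue: &wgpu::Queue,
    lhs: &wgpu::Buffer,
    rhs: &wgpu::Buffer,
    out: &wgpu::Buffer,
    len: u32,
) {
    let module = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: Some("binary_add"),
        source: wgpu::ShaderSource::Wgsl(ADD_WGSL.into()),
    });
    // `layout: None` lets wgpu infer the bind group layout from the shader.
    let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
        label: Some("binary_add"),
        layout: None,
        module: &module,
        entry_point: "main",
    });
    let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: None,
        layout: &pipeline.get_bind_group_layout(0),
        entries: &[
            wgpu::BindGroupEntry { binding: 0, resource: lhs.as_entire_binding() },
            wgpu::BindGroupEntry { binding: 1, resource: rhs.as_entire_binding() },
            wgpu::BindGroupEntry { binding: 2, resource: out.as_entire_binding() },
        ],
    });
    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    {
        let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
        pass.set_pipeline(&pipeline);
        pass.set_bind_group(0, &bind_group, &[]);
        // One invocation per element, rounded up to the workgroup size.
        pass.dispatch_workgroups((len + 63) / 64, 1, 1);
    }
    queue.submit(Some(encoder.finish()));
}
```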

Other Notes

  • WebGPU does not support f64 (refer to this issue). If we want to support it, we'll need to use a polyfill (e.g. this one).
  • We may need to consider buffer usage flags when caching tensors. Right now they're all COPY_SRC and COPY_DST, which leaves them un-mappable without an intermediate MAP_READ buffer (see the readback sketch below).
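
A minimal sketch of that intermediate-buffer readback dance, under the same wgpu API assumptions as above (`read_back` is a hypothetical helper, and a real implementation would be async rather than blocking on `poll`):

```rust
// Hypothetical readback path: copy a COPY_SRC storage buffer into an
// intermediate MAP_READ staging buffer, then map the staging buffer on the CPU.
fn read_back(device: &wgpu::Device, queue: &wgpu::Queue, storage: &wgpu::Buffer) -> Vec<f32> {
    let size = storage.size();
    let staging = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("staging"),
        size,
        usage: wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST,
        mapped_at_creation: false,
    });
    let mut encoder =
        device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    encoder.copy_buffer_to_buffer(storage, 0, &staging, 0, size);
    queue.submit(Some(encoder.finish()));

    let slice = staging.slice(..);
    slice.map_async(wgpu::MapMode::Read, |res| res.unwrap());
    // Block until the map completes.
    device.poll(wgpu::Maintain::Wait);
    let bytes = slice.get_mapped_range();
    let out: Vec<f32> = bytes
        .chunks_exact(4)
        .map(|b| f32::from_le_bytes(b.try_into().unwrap()))
        .collect();
    drop(bytes);
    staging.unmap();
    out
}
```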


DonIsaac commented Jan 2, 2024

Note: I've commented out the todo implementations for all BinaryKernel and UnaryKernel traits, as well as the top-level exports for Webgpu, while I write code. This is so I can build without getting 100+ compilation errors; I'll re-add these later as needed.
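
For anyone following along, the stubs in question look roughly like this. The trait shape is a simplified stand-in, not dfdx's real BinaryKernel signature:

```rust
// Simplified illustration of the temporarily-stubbed kernels described above.
trait BinaryKernel<Op> {
    fn forward(&self, lhs: &[f32], rhs: &[f32]) -> Vec<f32>;
}

struct Webgpu;
struct BinaryAdd;

impl BinaryKernel<BinaryAdd> for Webgpu {
    fn forward(&self, _lhs: &[f32], _rhs: &[f32]) -> Vec<f32> {
        // Stub: panics if called, but lets the crate type-check while the
        // real WebGPU kernels are written.
        todo!("WebGPU binary add forward")
    }
}
```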
