Issues: intel/intel-xpu-backend-for-triton
Add daily E2E accuracy runs on LTS driver to CI (hugging_face-training-float32)
Labels: ci, enhancement, tests: e2e

Add build and test check (integration tests) on LTS driver to each PR
Labels: ci, enhancement, tests: tutorials, tests: ut

AllenaiLongformerBase HF Train Float32 fails under LTS Driver
#1336 opened Jun 12, 2024 by alexbaden

There is a correctness issue when using the TritonGen subgroup Max/Min reduce for integer
#1332 opened Jun 12, 2024 by chengjunlu

New LTS failures introduced between d4fdf8e3212923c003ded706a809b7406d4de4f1 and HEAD
Labels: bug, tests: ut
#1329 opened Jun 11, 2024 by alexbaden

Remove redundant attributes specifying the number of threads per warp
Labels: enhancement

[GEN] Use OCL builtin for 2D block prefetch available on driver 881.12
Labels: dependencies, enhancement

Port python/test/unit/instrumentation/test_gpuhello.py to XPU
Labels: enhancement, tests: ut, upstream: rebase

[GEMM] Fix functional issues with bfloat16 accumulation
Labels: bug, codegen: mlir, tests: tutorials

[PyTorch Upstream] HuggingFace two model E2E training accuracy regression
Labels: accuracy, bug, tests: e2e, upstream: pytorch