Issues: pytorch/xla

[RFC] PyTorch/XLA Auto-Sharding API
#6322 opened Jan 18, 2024 by yeounoh
Issues list

DDP Hangs on TPU v3-8
#7109 opened May 24, 2024 by vivekjoshy
Cannot Import _XLAC
#7070 opened May 16, 2024 by DarkenStar
Export nn.Module.forward with kwargs to StableHLO [label: stablehlo — StableHLO related work]
#7056 opened May 13, 2024 by johnmatter
torchdynamo + XLA crash
#7053 opened May 13, 2024 by pritamdamania87
Migrate PyTorch/XLA's gradient checkpointing to upstream one [label: nostale — Do not consider for staleness]
#7024 opened May 3, 2024 by JackCaoG
Zero-copy between CUDA and XLA
#6971 opened Apr 25, 2024 by vanbasten23