Enabling PyTorch on XLA Devices (e.g. Google TPU)
Updated May 25, 2024 - C++
JAX - A curated list of resources https://github.com/google/jax
The purpose of this repo is to make it easy to get started with JAX, Flax, and Haiku. It contains my "Machine Learning with JAX" series of tutorials (YouTube videos and Jupyter Notebooks) as well as the content I found useful while learning about the JAX ecosystem.
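The JAX ecosystem those tutorials cover centers on composable function transformations backed by XLA compilation. A minimal sketch of the style of code involved (the quadratic loss function here is an illustrative example, not taken from the repo):

```python
import jax
import jax.numpy as jnp

# JAX composes function transformations: grad for automatic
# differentiation, jit for XLA compilation.
def loss(w):
    # Simple quadratic loss: (w - 3)^2
    return (w - 3.0) ** 2

# jax.grad builds the derivative; jax.jit compiles it with XLA.
grad_loss = jax.jit(jax.grad(loss))

print(grad_loss(0.0))  # d/dw (w-3)^2 at w=0 is -6.0
```

Flax and Haiku layer neural-network abstractions on top of these same transformations.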
ALBERT model pretraining and fine-tuning using TF 2.0
Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python ⚡
Julia on TPUs
S + Autograd + XLA :: S-parameter based frequency domain circuit simulations and optimizations using JAX.
Official scala pool repository
Pretrained-model loading based on TensorFlow 1.x, supporting single-machine multi-GPU training, gradient accumulation, XLA acceleration, and mixed precision. Flexible training, validation, and prediction.
XLA integration of Open Neural Network Exchange (ONNX)
TensorFlow wheels built for the latest CUDA/cuDNN with performance flags enabled: SSE, AVX, FMA, XLA
Simple and efficient RevNet-Library for PyTorch with XLA and DeepSpeed support and parameter offload
Provides code to serialize the different models involved in Stable Diffusion as SavedModels and to compile them with XLA.
Automated provisioner of a Google Cloud TPU environment for training in PyTorch
TensorFlow 2 training code with JIT compilation on multiple GPUs.
Presents comprehensive benchmarks of XLA-compatible pre-trained models in Keras.
GoMLX -- Accelerated ML Libraries for Go