Tensors and Dynamic neural networks in Python with strong GPU acceleration
Burn is a comprehensive dynamic deep learning framework built in Rust, with flexibility, compute efficiency, and portability as its primary goals.
A Machine Learning framework from scratch in Pure Mojo 🔥
⚡️Optimizing einsum functions in NumPy, Tensorflow, Dask, and more with contraction order optimization.
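Contraction-order optimization matters because the cost of a multi-tensor einsum depends heavily on which pair of tensors is contracted first. A minimal sketch of the idea using NumPy's built-in `optimize` flag (the library above provides a more general version of this across backends; the shapes here are illustrative):

```python
import numpy as np

# For the chain A(10x1000) @ B(1000x5) @ C(5x1000), contracting (A@B)
# first costs far fewer flops than contracting (B@C) first.
rng = np.random.default_rng(0)
A = rng.random((10, 1000))
B = rng.random((1000, 5))
C = rng.random((5, 1000))

# optimize=True asks NumPy to search for a cheap contraction order
# before evaluating the expression; the result is unchanged.
naive = np.einsum('ij,jk,kl->il', A, B, C)
fast = np.einsum('ij,jk,kl->il', A, B, C, optimize=True)

print(np.allclose(naive, fast))  # True: same result, different cost
```

`np.einsum_path` can be used to inspect the chosen contraction order and its estimated flop savings before running the contraction.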
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
Library for manipulating neural network models
Pytorch_Dart is a Dart wrapper for Libtorch, striving to provide an experience identical to PyTorch. You can use it as an alternative to NumPy in your Dart/Flutter projects.
A fast, ergonomic and portable tensor library in Nim with a deep learning focus for CPU, GPU and embedded devices via OpenMP, Cuda and OpenCL backends
PyTorch but for GigaChads, GigaTorch.
A generic, composable multi-dimensional array library.
On-device AI across mobile, embedded and edge for PyTorch
Novigrad is an automatic differentiation engine with a forward mode and a backward mode. It aims to be a minimalistic neural network framework written in Rust. It's a work-in-progress.
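Forward-mode automatic differentiation, one of the two modes mentioned above, can be sketched with dual numbers: each value carries its derivative alongside it, and arithmetic operations propagate both via the usual calculus rules. This is a generic illustration in Python, not code from any of the listed frameworks:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # the value
    dot: float  # the derivative, propagated in lockstep (forward mode)

    def __add__(self, other):
        # sum rule: (f + g)' = f' + g'
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        # product rule: (f * g)' = f * g' + f' * g
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)

# d/dx of f(x) = x*x + x at x = 3 is 2*3 + 1 = 7
x = Dual(3.0, 1.0)  # seed the input's derivative with 1
f = x * x + x
print(f.val, f.dot)  # 12.0 7.0
```

Reverse mode (backpropagation) instead records the computation graph and propagates derivatives from outputs back to inputs, which is cheaper when a function has many inputs and few outputs, as in neural network training.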
Flexible and powerful tensor operations for readable and reliable code (for PyTorch, JAX, TensorFlow, and others)
PHP extension for efficient scientific computing and array manipulation with GPU support
Tensor-based Multiple Canonical Correlation Analysis
Automatic differentiation for tensor operations