🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
PyTorch is an open source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook's AI Research lab.
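To make the description concrete, here is a minimal hypothetical sketch of PyTorch's core workflow (tensors, autograd, and an optimizer) fitting y = 2x with a single linear layer; the data, layer sizes, and hyperparameters are illustrative assumptions, not taken from any repository above.

```python
# Toy PyTorch sketch: learn y = 2x with one linear layer via autograd.
# All values here are illustrative assumptions.
import torch

torch.manual_seed(0)
x = torch.tensor([[1.0], [2.0], [3.0]])
y = 2 * x

model = torch.nn.Linear(1, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()  # autograd computes gradients for weight and bias
    opt.step()

print(model.weight.item())  # should approach 2.0
```

The same loop structure (zero gradients, forward, backward, step) underlies most PyTorch training code, from this toy example up to the large-scale frameworks listed below.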
A personal research and development (R&D) lab that facilitates the sharing of knowledge.
Several types of attention modules written in PyTorch
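As a point of reference for what such modules compute, here is a minimal scaled dot-product attention sketch in PyTorch; it is an illustrative implementation of the standard formula, not code from the linked repository.

```python
# Minimal scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V.
# Illustrative sketch; shapes and names are assumptions.
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_model)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)  # rows sum to 1
    return weights @ v

q = torch.randn(1, 4, 8)
out = scaled_dot_product_attention(q, q, q)  # self-attention
print(out.shape)  # torch.Size([1, 4, 8])
```

Variants in such libraries (multi-head, sparse, linear attention) are elaborations of this same core computation.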
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
A retargetable MLIR-based machine learning compiler and runtime toolkit.
Deep Learning for humans
Visualizer for neural network, deep learning and machine learning models
ncnn is a high-performance neural network inference framework optimized for the mobile platform
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
A library so simple you can learn it within an hour
Burn is a new comprehensive dynamic Deep Learning Framework built using Rust with extreme flexibility, compute efficiency and portability as its primary goals.
A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models. | A speech recognition toolkit with a rich set of high-performance open-source pretrained models, supporting speech recognition, voice activity detection, text post-processing, and more, with service deployment capability.
Official implementation of "Time Evidence Fusion Network: Multi-source View in Long-Term Time Series Forecasting"
Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.
A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms
A high-throughput and memory-efficient inference and serving engine for LLMs
Code I used for my YouTube videos
Interested in MLOps? Here's your encyclopedic repo 👊
Lottery prediction with Transformer / LSTM models in PyTorch
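For the LSTM side of such a model, a minimal PyTorch sketch might look like the following; the input width (a 7-number draw), hidden size, and window length are illustrative assumptions, not details from the repository.

```python
# Minimal LSTM sequence-model sketch in PyTorch.
# Sizes are illustrative assumptions: 7 numbers per draw, 10-draw window.
import torch

lstm = torch.nn.LSTM(input_size=7, hidden_size=32, batch_first=True)
head = torch.nn.Linear(32, 7)  # map last hidden state to next-draw prediction

seq = torch.randn(1, 10, 7)    # batch of 1: the last 10 draws
out, _ = lstm(seq)             # out: (1, 10, 32), one hidden state per step
pred = head(out[:, -1, :])     # predict from the final time step
print(pred.shape)  # torch.Size([1, 7])
```

A Transformer variant would replace the LSTM with self-attention layers over the same windowed input; the input/output shapes stay the same.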
Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, Baichuan, Mixtral, Gemma, etc.) on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max). A PyTorch LLM library that seamlessly integrates with llama.cpp, Ollama, HuggingFace, LangChain, LlamaIndex, DeepSpeed, vLLM, FastChat, etc.
Created by Facebook's AI Research lab (FAIR)
Released September 2016