Streamlit Dockerized Computer Vision App with Triton Inference Server and PostgreSQL database
Serving a YOLOv5 segmentation model on Amazon EC2 Inf1 instances
Triton Inference Server with a Python backend and transformers
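In repositories of this kind, the heart of a Triton Python backend is a model.py exposing a TritonPythonModel class. A minimal sketch, assuming a Hugging Face sentiment-analysis pipeline and hypothetical tensor names TEXT and SCORE (not taken from the repository above):

```python
# model.py -- minimal sketch of a Triton Python backend wrapping a
# transformers pipeline. Tensor names are illustrative assumptions.
import numpy as np
import triton_python_backend_utils as pb_utils
from transformers import pipeline

class TritonPythonModel:
    def initialize(self, args):
        # Load the transformers pipeline once, when Triton loads the model.
        self.classifier = pipeline("sentiment-analysis")

    def execute(self, requests):
        responses = []
        for request in requests:
            # Input text arrives as a BYTES tensor named "TEXT" (assumed name).
            text = pb_utils.get_input_tensor_by_name(request, "TEXT")
            strings = [t.decode("utf-8") for t in text.as_numpy().flatten()]
            results = self.classifier(strings)
            scores = np.array([[r["score"]] for r in results], dtype=np.float32)
            out = pb_utils.Tensor("SCORE", scores)
            responses.append(pb_utils.InferenceResponse(output_tensors=[out]))
        return responses
```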
This repository is a code sample to serve Large Language Models (LLM) on a Google Kubernetes Engine (GKE) cluster with GPUs running NVIDIA Triton Inference Server with FasterTransformer backend.
Microservices over HTTP with Triton Inference Server, FastAPI, and Docker Compose
QuickStart for Deploying a Basic Model on the Triton Inference Server
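A Triton quickstart of this kind usually reduces to a model repository: one directory per model, holding a config.pbtxt and numbered version subdirectories. A sketch of the layout and configuration, with a hypothetical ONNX image classifier named my_model:

```
model_repository/
└── my_model/              # hypothetical model name
    ├── config.pbtxt
    └── 1/                 # version directory
        └── model.onnx
```

```
# config.pbtxt (illustrative values)
name: "my_model"
backend: "onnxruntime"
max_batch_size: 8
input [
  { name: "input", data_type: TYPE_FP32, dims: [ 3, 224, 224 ] }
]
output [
  { name: "output", data_type: TYPE_FP32, dims: [ 1000 ] }
]
```

The server is then typically started from the NGC container, e.g. docker run --gpus all -p 8000:8000 -p 8001:8001 -p 8002:8002 -v $PWD/model_repository:/models nvcr.io/nvidia/tritonserver:<tag>-py3 tritonserver --model-repository=/models.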
Heterogeneous System ML Pipeline Scheduling Framework with Triton Inference Server as Backend
Cassandra plugin for NVIDIA DALI
A proof-of-concept implementation of computer vision systems for industry. The project explores scalability and performance within the NVIDIA ecosystem, aiming to provide an example scaffold for building a system accessible to non-technical users.
A library for interfacing with Triton.
An image-to-text model/pipeline using ViT and Transformers, deployed with NVIDIA's PyTriton and a Streamlit app.
A search engine for Shopee supporting image search, full-text search, and auto-complete
Training and edge deployment of a custom YOLOv8x-cls model to classify trash vs. recycling.
The Sumen model integrates with Triton Inference Server
A complete containerized setup for Triton Inference Server and its Python client, using a realistic pre-trained XGBoost classifier model.
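On the client side, such a setup is typically exercised with the tritonclient package over HTTP. A sketch, where the model and tensor names (xgb_classifier, input__0, output__0) are assumptions for illustration:

```python
# client.py -- sketch of a tritonclient HTTP inference call.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One row of four float32 features, shaped [batch, features].
features = np.array([[5.1, 3.5, 1.4, 0.2]], dtype=np.float32)
inp = httpclient.InferInput("input__0", list(features.shape), "FP32")
inp.set_data_from_numpy(features)
out = httpclient.InferRequestedOutput("output__0")

result = client.infer(model_name="xgb_classifier", inputs=[inp], outputs=[out])
print(result.as_numpy("output__0"))  # predicted class scores
```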
Example string processing pipeline on Triton Inference Server
Repository for the NVIDIA DLI workshop "Building Transformer-Based Natural Language Processing Applications"
Run Multiple Models on the Same GPU with Amazon SageMaker Multi-Model Endpoints Powered by NVIDIA Triton Inference Server. A Java client is also provided.
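With SageMaker multi-model endpoints, the caller selects which hosted model serves each request via the TargetModel parameter of invoke_endpoint. A minimal Python sketch; the endpoint name, model archive, and tensor names are assumptions:

```python
# Sketch: invoking one of several Triton-hosted models on a SageMaker
# multi-model endpoint.
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

# KServe-v2-style JSON inference request body, as Triton expects.
payload = json.dumps({
    "inputs": [{
        "name": "input__0",          # assumed tensor name
        "shape": [1, 4],
        "datatype": "FP32",
        "data": [0.1, 0.2, 0.3, 0.4],
    }]
})

response = runtime.invoke_endpoint(
    EndpointName="triton-mme-endpoint",  # assumed endpoint name
    TargetModel="model_a.tar.gz",        # picks the model within the endpoint
    ContentType="application/json",
    Body=payload,
)
print(json.loads(response["Body"].read()))
```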
A simple classification implementation explaining how Triton works