A REST API for Caffe using Docker and Go
Run your own production inference code with SageMaker
An AI-powered mobile crop advisory app for farmers and gardeners that provides information about a crop from an image taken by the user. It supports 10 crops and 37 kinds of crop diseases. The AI model is a ResNet fine-tuned on crop images collected by web-scraping Google Images and from the PlantVillage dataset.
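The description amounts to a standard transfer-learning recipe: swap the classifier head and fine-tune on the scraped images. A minimal sketch with a torchvision ResNet and the 37 disease classes from the description (the dataset path, model size, and hyperparameters are illustrative assumptions, not taken from the repo):

```python
# Minimal transfer-learning sketch: fine-tune a torchvision ResNet head.
# Dataset path, model size, and hyperparameters are illustrative.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical dataset layout: one folder per disease class.
dataset = datasets.ImageFolder("data/crop_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 37)  # 37 disease classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```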
Inference Server Implementation from Scratch for Machine Learning Models
Session Based Real-time Hotel Recommendation Web Application
Orkhon: ML Inference Framework and Server Runtime
Serve PyTorch inference requests with Redis-backed batching for higher throughput.
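The batching pattern described here usually works like this: clients push requests onto a Redis list, and a worker drains them in groups so one forward pass serves many requests. A minimal sketch, assuming a TorchScript model and illustrative queue/key names:

```python
# Minimal batched-inference worker: clients LPUSH JSON requests onto a
# Redis list; the worker drains up to BATCH_SIZE and runs one forward pass.
# Queue name, key scheme, and model file are illustrative.
import json
import redis
import torch

BATCH_SIZE = 32
r = redis.Redis()
model = torch.jit.load("model.pt")  # hypothetical TorchScript model
model.eval()

while True:
    # Block until at least one request arrives, then drain a batch.
    _, first = r.blpop("inference:queue")
    rest = (r.lpop("inference:queue") for _ in range(BATCH_SIZE - 1))
    items = [first] + [x for x in rest if x is not None]

    batch = torch.tensor([json.loads(i)["input"] for i in items])
    with torch.no_grad():
        outputs = model(batch)

    # Write each result back under the request's id.
    for item, out in zip(items, outputs):
        req = json.loads(item)
        r.set(f"inference:result:{req['id']}", json.dumps(out.tolist()))
```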
An advanced inference pipeline using NVIDIA Triton Inference Server for CRAFT text detection (PyTorch). Includes a PyTorch -> ONNX -> TensorRT converter and inference pipelines (TensorRT, multi-format Triton server). Supported model formats for Triton inference: TensorRT engine, TorchScript, ONNX.
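The first stage of such a pipeline is the PyTorch -> ONNX export. A minimal sketch with a stand-in network (the real pipeline exports CRAFT; the model, input shape, and file names here are assumptions); the resulting ONNX file can then be built into a TensorRT engine (e.g. with trtexec) and placed in Triton's model repository:

```python
# Minimal PyTorch -> ONNX export sketch. The tiny Sequential model is a
# stand-in for the actual CRAFT detector; shapes and names are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
model.eval()

dummy = torch.randn(1, 3, 768, 768)  # example input resolution

torch.onnx.export(
    model,
    dummy,
    "craft.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
    opset_version=13,
)
# Next step (outside Python): build a TensorRT engine, e.g.
#   trtexec --onnx=craft.onnx --saveEngine=craft.plan
```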
A bundle of repositories that power the crop prediction applications.
K3ai is a lightweight, fully automated, AI-infrastructure-in-a-box solution that lets anyone experiment quickly with Kubeflow pipelines. K3ai fits anything from edge devices to laptops.
This is a repository for a no-code object detection inference API using YOLOv3 and YOLOv4 with the Darknet framework.
This is a repository for a no-code object detection inference API using YOLOv4 and YOLOv3 with OpenCV.
This is a repository for an object detection inference API using the TensorFlow framework.
An example of using Redis + RedisAI in a microservice that predicts consumer loan probabilities, with Redis as the feature and model store and RedisAI as the inference server.
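The core of this pattern is that both the model and the per-request features live in Redis, so inference runs next to the data inside the RedisAI module. A minimal sketch using RedisAI's classic commands via redis-py (key names, the ONNX file, and the feature vector are illustrative assumptions):

```python
# Minimal Redis + RedisAI sketch: store an ONNX model once, then per request
# write a feature tensor, run the model in-database, and read the score.
# Key names, model file, and features are illustrative.
import redis

r = redis.Redis()

# Store the model once (ONNX backend, CPU); model bytes come from disk.
with open("loan_model.onnx", "rb") as f:
    r.execute_command("AI.MODELSET", "loan:model", "ONNX", "CPU",
                      "BLOB", f.read())

# Per request: write features, run the model, read the result.
features = [0.12, 0.7, 1.0, 35.0]  # hypothetical applicant features
r.execute_command("AI.TENSORSET", "loan:in", "FLOAT", 1, len(features),
                  "VALUES", *features)
r.execute_command("AI.MODELRUN", "loan:model",
                  "INPUTS", "loan:in", "OUTPUTS", "loan:out")
score = r.execute_command("AI.TENSORGET", "loan:out", "VALUES")
print(score)
```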
A client/server system to perform distributed inference on high-load systems.
Python + Inference: a model deployment library in Python, aiming to be the simplest model inference server possible.
A networked inference server for Whisper, so you don't have to keep waiting for the audio model to reload for the hundredth time.
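The point of keeping the server networked is to pay the model-load cost once at startup rather than on every invocation. A minimal sketch, assuming FastAPI and the openai-whisper package (the endpoint name, model size, and temp-file handling are illustrative):

```python
# Minimal long-lived Whisper server: the model is loaded once at startup
# and reused for every request. Endpoint and model size are illustrative.
import whisper
from fastapi import FastAPI, UploadFile

app = FastAPI()
model = whisper.load_model("base")  # loaded once, not per request

@app.post("/transcribe")
async def transcribe(file: UploadFile):
    # Persist the upload, since model.transcribe expects a file path.
    path = f"/tmp/{file.filename}"
    with open(path, "wb") as f:
        f.write(await file.read())
    result = model.transcribe(path)
    return {"text": result["text"]}

# Run with: uvicorn server:app
```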