This is a demo of a triton-inference-server custom backend. The backend receives image paths over http/grpc and returns resized, normalized tensors. For the resize and normalization step you can choose between an OpenCV implementation and a CUDA kernel.
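
To make the resize-and-normalize step concrete, a rough Python equivalent of what the backend computes might look like the sketch below. This is an illustration only; the actual target size and normalization constants live in the C++ source, and the values here are placeholders.

import cv2
import numpy as np

# Sketch of the backend's preprocessing. Size, mean, and std are
# illustrative placeholders, not the values used in image_process.cc.
def preprocess(image_path, size=(224, 224), mean=0.5, std=0.5):
    img = cv2.imread(image_path)              # HWC, BGR, uint8
    img = cv2.resize(img, size)               # resize to the model input size
    img = img.astype(np.float32) / 255.0      # scale to [0, 1]
    img = (img - mean) / std                  # normalize
    return np.transpose(img, (2, 0, 1))       # HWC -> CHW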

1. Build the custom backend
sudo docker build -t serving-triton .
sudo docker run --gpus all -it -v /home/:/home/ -p 8000:8000 -p 8001:8001 -p 8002:8002 serving-triton  /bin/bash

CPU version:

# inside the container, from the image_process directory
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install ..
make install -j8

CUDA version. To use the CUDA kernel, the CUDA version inside the Docker container should be lower than your host machine's:

# inside the container, from the image_process directory
# swap in the CUDA implementation of the preprocessing source
mv image_process.cc image_process.cc.bak
mv image_process_cuda.cc  image_process.cc
mkdir build
cd build
cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install ..
make install -j8

2. Start the custom backend server
# inside the container
cd /opt/tritonserver
cp /home/XXX/backend_demo/image_process/build/libimageprocess.so /opt/tritonserver/backends/image_process/
# Triton looks for backend shared libraries named libtriton_<backend_name>.so
mv /opt/tritonserver/backends/image_process/libimageprocess.so /opt/tritonserver/backends/image_process/libtriton_image_process.so
./bin/tritonserver --model-repository=/models
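
Once the server is running, you can sanity-check it from the host with the tritonclient Python package (pip install tritonclient[http]). A minimal sketch, assuming the HTTP port 8000 mapped in the docker run command above:

import tritonclient.http as httpclient

# connect over the HTTP port published by the docker run command above
client = httpclient.InferenceServerClient(url="localhost:8000")
print("live:", client.is_server_live())
print("ready:", client.is_server_ready())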

3. Run the client outside Docker
# use absolute paths for the test images
python client.py
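
For reference, a request like the one client.py sends might look like the sketch below: the image path goes in as a single BYTES element and the preprocessed tensor comes back. The model name "image_process" and tensor names "INPUT0"/"OUTPUT0" are assumptions for illustration; check client.py and the model's config.pbtxt for the real ones.

import numpy as np
import tritonclient.http as httpclient

# NOTE: model and tensor names below are hypothetical placeholders;
# the actual names are defined in this repo's client.py / config.pbtxt.
client = httpclient.InferenceServerClient(url="localhost:8000")

# the backend expects an image *path*, sent as a single BYTES element
path = np.array(["/absolute/path/to/test.jpg".encode("utf-8")], dtype=np.object_)
inp = httpclient.InferInput("INPUT0", [1], "BYTES")
inp.set_data_from_numpy(path)
out = httpclient.InferRequestedOutput("OUTPUT0")

result = client.infer(model_name="image_process", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT0").shape)  # the resized, normalized tensor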