Deep Learning API and Server in C++14, with support for Caffe, PyTorch, TensorRT, Dlib, NCNN, TensorFlow, XGBoost and TSNE
Updated Jun 4, 2024 · C++
FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
BEVDet implemented with TensorRT in C++, achieving real-time performance on Orin
Deploy a Stable Diffusion model with ONNX/TensorRT and Triton Inference Server
NVIDIA-accelerated DNN model inference ROS 2 packages using NVIDIA Triton/TensorRT, for both Jetson and x86_64 with a CUDA-capable GPU
YOLOv5 TensorRT implementations
A TensorRT version of UNet, inspired by tensorrtx
Use DBNet to detect words or barcodes; knowledge distillation and Python TensorRT inference are also provided
Using TensorRT for Inference Model Deployment.
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate
ViTPose without MMCV dependencies
C++ inference code for the SMOKE 3D object detection model
Export a TensorRT engine from ONNX and run inference with C++
Convert YOLO models to ONNX and TensorRT, with NMSBatched added
ComfyUI Depth Anything Tensorrt Custom Node (up to 5x faster), licensed under CC BY-NC-SA 4.0
The real-time instance segmentation algorithm SparseInst running on TensorRT and ONNX
C++ TensorRT Implementation of NanoSAM
Improved inference performance using TensorRT for CRAFT text detection. Includes modules to convert PyTorch -> ONNX -> TensorRT, with dynamic-shape (multi-size input) inference
This repo contains model compression (using TensorRT) and documentation for running various deep learning models on NVIDIA Jetson Orin and Nano (aarch64 architectures)