Run YOLOv8 models as TensorRT engines natively for maximum performance 🏎️💨
Updated Apr 29, 2024 - Python
A miniature model of a self-driving car using deep learning
Based on TensorRT v8.2, a hand-built network for YOLOv5 v5.0 that speeds up YOLOv5 v5.0 inference
A lightweight C++ implementation of YOLOv8 running on NVIDIA's TensorRT engine
An MNIST example of how to convert a .pt file to .onnx, then convert the .onnx file to a .trt file.
A search engine for Shopee that applies image search, full-text search, and auto-complete
Conveniently convert a pretrained CRAFT text-detection PyTorch model directly into a TensorRT engine, without an intermediate ONNX step
Based on TensorRT 8.2.4, compares inference speed across different TensorRT APIs.
MagFace Triton Inference Server using TensorRT
C++ TensorRT Implementation of NanoSAM
Transform any wall to an intelligent whiteboard
Deploying YOLOv5 with TensorRT and DeepStream on Jetson Nano
Using TensorRT for Inference Model Deployment.
A PyTorch implementation of siamese networks using backbone from torchvision.models, with support for TensorRT inference.
This YOLOv5🚀😊 GUI road sign system uses MySQL💽, PyQt5🎨, PyTorch, CSS🌈. It has modules for login🔑, YOLOv5 setup📋, sign recognition🔍, database💾, and image processing🖼️. It supports diverse inputs, model switching, and enhancements like mosaic and mixup📈.
🔥🔥🔥🔥🔥🔥 Docker, NVIDIA Docker2, YOLOv5, YOLOX, YOLO, DeepSORT, TensorRT, ROS, DeepStream, Jetson Nano, TX2, and NX for high-performance deployment