32 GB SD card image for the Jetson Nano, based on Ubuntu 20 and a compatible Ultralytics YOLOv8 library
C++ implementation of "An Improved Association Pipeline for Multi-Person Tracking"
C++/C TensorRT inference example for models created with PyTorch/JAX/TF
Model conversion and inference code for different backends
Real-time human tracking and 3D pose estimation with TensorRT (for Windows)
YOLOX TensorRT object detection
Rust gRPC server for face recognition, face detection, and face alignment using TensorRT and CUDA on the JetPack SDK (Jetson Nano, Jetson Xavier NX)
Convert ONNX models to TensorRT engines and run inference in containerized environments
This project is a notebook for learning TensorRT.
A cross-lingual toxicity detection model that works for over 100 languages. Powered by the mighty XLM-R model, its performance is state of the art.
Inference code for `ogata-lab/eipl`: control robots with machine learning models on an edge computer.
An MNIST example of how to convert a .pt file to .onnx, then convert the .onnx file to a .trt file.
A lightweight, high-performance deep learning inference tool.
In this work we applied the multilingual zero-shot transfer concept to the task of toxic comment detection. This approach allows a model trained on a single-language dataset to work in an arbitrary language, even a low-resource one.
Dolphin is a Python toolkit meant to speed up TensorRT inference by providing CUDA-accelerated processing.
Generating a TensorRT model from ONNX
Getting started with TensorRT-LLM using BLOOM as a case study
Export a TensorRT engine (from ONNX) and run inference with it in C++.
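Several of the entries above (the MNIST .pt → .onnx → .trt example, the ONNX-to-engine converters) follow the same basic conversion flow. The sketch below is a minimal illustration of that flow in Python, assuming a recent TensorRT 8.x Python API with its ONNX parser; the toy model, tensor names, and file names are placeholders for illustration and are not taken from any of the listed projects.

```python
# Minimal sketch: export a toy PyTorch model to ONNX, then build a
# serialized TensorRT engine from it. Assumes PyTorch and TensorRT 8.x.
import torch
import tensorrt as trt

# 1. Export a placeholder trained model to ONNX.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
model.eval()
dummy = torch.randn(1, 1, 28, 28)
torch.onnx.export(model, dummy, "mnist.onnx",
                  input_names=["input"], output_names=["logits"])

# 2. Parse the ONNX file and build a serialized TensorRT engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("mnist.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
engine_bytes = builder.build_serialized_network(network, config)

# 3. Save the engine; it can later be deserialized for inference.
with open("mnist.trt", "wb") as f:
    f.write(engine_bytes)
```

At inference time the saved engine would be deserialized with a TensorRT runtime and executed through an execution context; the C++ projects listed above do the same steps through the C++ API.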