ros_deep_learning

This repo contains deep learning inference nodes for ROS with support for Jetson Nano/TX1/TX2/Xavier and TensorRT.

The nodes use the image recognition, object detection, and semantic segmentation DNNs from the jetson-inference library and NVIDIA Hello AI World tutorial, which come with several built-in pretrained networks for classification, detection, and segmentation, and the ability to load custom user-trained models.

ROS Melodic (JetPack 4.2, Ubuntu 18.04) is recommended and is supported on Nano/TX1/TX2/Xavier; ROS Kinetic (JetPack 3.3, Ubuntu 16.04) should also work, but only on TX1/TX2.

Table of Contents

  • Installation (jetson-inference, ROS Core, Catkin Workspace, ros_deep_learning)
  • Testing (imageNet Node, detectNet Node)

Installation

First, install the latest JetPack on your Jetson (JetPack 4.2.2 for ROS Melodic or JetPack 3.3 for ROS Kinetic on TX1/TX2).

Then, follow the installation steps below to install the needed components on your Jetson:

jetson-inference

These ROS nodes use the DNN objects from the jetson-inference project (aka Hello AI World). To build and install it, see this page or run the commands below:

$ cd ~
$ sudo apt-get install git cmake
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make
$ sudo make install

Before proceeding, it's worthwhile to test that jetson-inference is working properly on your system by following the image classification step of the Hello AI World tutorial.
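For example, a quick sanity check is to run the classification sample that jetson-inference builds. The sample name and paths below follow the Hello AI World tutorial of that era and may differ between jetson-inference releases:

$ cd ~/jetson-inference/build/aarch64/bin
$ ./imagenet-console orange_0.jpg output_0.jpg   # classify a test image and save the annotated result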

ROS Core

Install the ros-melodic-ros-base or ros-kinetic-ros-base package on your Jetson, following the installation directions on the ROS wiki.
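For reference, the usual steps from the ROS wiki look roughly like this (Melodic shown; the package repository key and keyserver can change over time, so prefer the wiki's current instructions):

$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
$ sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654
$ sudo apt-get update
$ sudo apt-get install ros-melodic-ros-base
$ echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
$ source ~/.bashrc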

Depending on which version of ROS you're using, install some additional dependencies:

ROS Melodic

$ sudo apt-get install ros-melodic-image-transport
$ sudo apt-get install ros-melodic-image-publisher
$ sudo apt-get install ros-melodic-vision-msgs

ROS Kinetic

$ sudo apt-get install ros-kinetic-image-transport
$ sudo apt-get install ros-kinetic-image-publisher
$ sudo apt-get install ros-kinetic-vision-msgs

Catkin Workspace

Then, create a Catkin workspace (~/catkin_ws) using these steps:
http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment#Create_a_ROS_Workspace
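If that page is unavailable, a condensed version of those steps (assuming your ROS environment is already sourced) is:

$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/
$ catkin_make
$ source devel/setup.bash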

ros_deep_learning

Next, navigate into your Catkin workspace and clone and build ros_deep_learning:

$ cd ~/catkin_ws/src
$ git clone https://github.com/dusty-nv/ros_deep_learning
$ cd ../
$ catkin_make

The inferencing nodes should now be built and ready to use.
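As a quick check that the package built and is visible on your ROS package path, you can source the workspace and look it up with rospack:

$ source ~/catkin_ws/devel/setup.bash
$ rospack find ros_deep_learning   # should print ~/catkin_ws/src/ros_deep_learning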

Testing

Before proceeding, make sure that roscore is running first:

$ roscore

imageNet Node

First, to stream some image data for the inferencing node to process, open another terminal and start an image_publisher, which loads a specified image from disk. We tell it to load one of the test images that come with jetson-inference, but you can substitute your own images here as well:

$ rosrun image_publisher image_publisher __name:=image_publisher ~/jetson-inference/data/images/orange_0.jpg

Next, open a new terminal, overlay your Catkin workspace, and start the imagenet node:

$ source ~/catkin_ws/devel/setup.bash
$ rosrun ros_deep_learning imagenet /imagenet/image_in:=/image_publisher/image_raw _model_name:=googlenet

Here, we remap imagenet's image_in input topic to the output of the image_publisher, and tell it to load the GoogleNet model using the node's model_name parameter. See this table for other classification models that you can download and substitute for model_name.
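For example, assuming the alexnet model was downloaded during the jetson-inference build, swapping models is just a matter of changing that parameter:

$ rosrun ros_deep_learning imagenet /imagenet/image_in:=/image_publisher/image_raw _model_name:=alexnet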

In another terminal, you should be able to verify the vision_msgs/Classification2D message output of the node, which is published to the imagenet/classification topic:

$ rostopic echo /imagenet/classification
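If you want to see the message layout itself, rosmsg can print the definition from the vision_msgs package:

$ rosmsg show vision_msgs/Classification2D   # header, results (ObjectHypothesis[] with id and score), source_img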

detectNet Node

Kill the other nodes you launched above, and start publishing a new image with people in it for the detectnet node to process:

$ rosrun image_publisher image_publisher __name:=image_publisher ~/jetson-inference/data/images/peds-004.jpg 
$ rosrun ros_deep_learning detectnet /detectnet/image_in:=/image_publisher/image_raw _model_name:=pednet

See this table for the built-in detection models available. Here's an example of launching with the model that detects dogs:

$ rosrun image_publisher image_publisher __name:=image_publisher ~/jetson-inference/data/images/dog_0.jpg
$ rosrun ros_deep_learning detectnet /detectnet/image_in:=/image_publisher/image_raw _model_name:=coco-dog

To inspect the vision_msgs/Detection2DArray message output of the node, subscribe to the detectnet/detections topic:

$ rostopic echo /detectnet/detections
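Likewise, the detection message layout can be inspected with rosmsg:

$ rosmsg show vision_msgs/Detection2DArray   # header plus Detection2D[] detections, each with a bbox and hypothesis results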
