Back | Next | Contents
System Setup

Setting up Jetson with JetPack

Note: if your Jetson has already been flashed with the JetPack SD card image or with SDK Manager, you can skip this step and continue to Running the Docker Container or Building the Project.

NVIDIA JetPack is a comprehensive SDK for Jetson for both developing and deploying AI and computer vision applications. JetPack simplifies installation of the OS and drivers and includes the L4T Linux kernel, CUDA Toolkit, cuDNN, TensorRT, and more.

Before attempting to use the Docker container or build the repo, make sure that your Jetson has been set up with the latest version of JetPack.
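If you want to check which version is already installed, the L4T release file and the nvidia-jetpack apt package can be inspected on the device itself. A minimal sketch, assuming a JetPack 4.x-or-newer image with NVIDIA's apt repositories configured (as they are on the stock images):

```bash
# Print the L4T (Linux for Tegra) release the board was flashed with
cat /etc/nv_tegra_release

# On JetPack 4.3 and newer, query the JetPack meta-package version
apt-cache show nvidia-jetpack | grep Version
```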

Jetson Nano, Orin Nano, and Xavier NX

The recommended install method for the Jetson developer kits with removable microSD storage is to flash the latest SD card image.

The SD card image comes pre-populated with the JetPack components already installed and can be flashed from a Windows, Mac, or Linux PC. If you haven't already, follow the Getting Started guide for your respective Jetson to flash the SD card image and set up your device.
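The Getting Started guides walk through flashing with a GUI tool such as Etcher; on a Linux PC the image can also be written from the command line. A hedged sketch, where the zip filename and /dev/sdX are placeholders for your actual download and SD card device:

```bash
# Identify the SD card device first -- writing to the wrong device destroys its data
lsblk

# Stream the .img out of the downloaded zip straight onto the card
# (replace the zip name and /dev/sdX with your own values)
unzip -p jetson-nano-sd-card-image.zip | sudo dd of=/dev/sdX bs=1M status=progress conv=fsync
```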

Jetson TX1/TX2, AGX Xavier, and AGX Orin

Other Jetson devices should be flashed by downloading the NVIDIA SDK Manager to a host PC running Ubuntu x86_64. Connect the Micro-USB or USB-C port to your host PC and put the device into Recovery Mode before proceeding.
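One way to verify the connection before launching SDK Manager is to confirm that the Jetson enumerates as an NVIDIA USB device on the host while it is in Recovery Mode:

```bash
# With the Jetson connected over Micro-USB/USB-C and held in Recovery Mode,
# it should show up on the host as an "NVIDIA Corp." device
lsusb | grep -i nvidia
```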

For more details, please refer to the NVIDIA SDK Manager Documentation and Install Jetson Software page.

Getting the Project

There are two ways to use the jetson-inference project: running the pre-built Docker container, or building the project from source.

Using the container is recommended initially to get up and running as fast as possible (it already comes with PyTorch installed). However, if you are more comfortable with native development, compiling the project yourself is not complicated either.
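For reference, the container route boils down to cloning the repository and launching its run script; a rough sketch, assuming the repository's docker/run.sh launcher (the next pages cover the authoritative steps for both routes):

```bash
# Container route (see "Running the Docker Container")
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
docker/run.sh    # pulls and launches the pre-built container for your JetPack version
```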

Next | Building the Project from Source
Back | Overview

© 2016-2019 NVIDIA | Table of Contents