Contributors MIT License PR Welcome Author LinkedIn


Logo

Robot Navigation

This repo implements the core technologies of a self-driving car, covering basic concepts such as path tracking, path planning, and SLAM, as well as deep learning techniques such as computer vision and reinforcement learning. Finally, we put everything into practice on an NVIDIA JetBot, both in a simulator and in the real world.
View Demo · Report Bug · Request Feature

Table of Contents

  • About
  • Lab
  • JetBot
  • Contributing
  • License
  • Contact

About

This project collects the coursework for Robotic Navigation and Exploration (CS562000) at National Cheng Kung University. The goal is to understand how self-driving cars work and, ultimately, to control an NVIDIA JetBot using computer vision. The project is organized into 6 hands-on labs, followed by the final NVIDIA JetBot models for both simulation and the real world.

The key features of Robotic Navigation:

  • Kinematic Model (WMR Model, Bicycle Model)
  • Path Tracking (PID Control, Pure-Pursuit Control, Stanley Control)
  • Path Planning (A* Algorithm, RRT Algorithm, RRT* Algorithm)
  • SLAM (Fast-SLAM, ORB-SLAM)
  • Semantic Segmentation (Encoder-Decoder, FCN, UNet, PSPNet)
  • Reinforcement Learning (DDPG)
Built With
  • Python 3
  • OpenCV 2
  • NumPy
  • PyTorch

Lab

Lab 1 - Kinematic Model & Path Tracking Control

In Lab 1 we implement the update code for two kinematic models: the Bicycle Model and the WMR (wheeled mobile robot) Model. On top of these two models, we then implement three path tracking algorithms: PID Control, Pure Pursuit Control, and Stanley Control.

WMR Model (WASD Control) WMR PID WMR Pure Pursuit
Bicycle Model (WASD Control) Bicycle Pure Pursuit Bicycle Stanley
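To give a rough feel for what the Lab 1 code involves, here is a minimal sketch of a bicycle-model update combined with a pure-pursuit steering law. The wheelbase, look-ahead distance, and function names are illustrative assumptions, not the lab's actual interface.

```python
import numpy as np

def bicycle_update(x, y, yaw, v, delta, L=0.5, dt=0.1):
    """One step of the bicycle kinematic model.
    x, y: rear-axle position; yaw: heading [rad]; v: speed [m/s];
    delta: steering angle [rad]; L: wheelbase [m]; dt: time step [s]."""
    x += v * np.cos(yaw) * dt
    y += v * np.sin(yaw) * dt
    yaw += v / L * np.tan(delta) * dt
    return x, y, yaw

def pure_pursuit_steer(x, y, yaw, path, lookahead=1.0, L=0.5):
    """Steer toward the first waypoint at least `lookahead` metres away."""
    px, py = path[-1]                          # fall back to the last waypoint
    for wx, wy in path:
        if np.hypot(wx - x, wy - y) >= lookahead:
            px, py = wx, wy
            break
    alpha = np.arctan2(py - y, px - x) - yaw   # heading error to the target point
    return np.arctan2(2.0 * L * np.sin(alpha), lookahead)  # pure-pursuit steering law

if __name__ == "__main__":
    path = [(i * 0.2, 1.0) for i in range(100)]     # straight reference line at y = 1
    x, y, yaw, v = 0.0, 0.0, 0.0, 1.0
    for _ in range(100):
        delta = pure_pursuit_steer(x, y, yaw, path)
        x, y, yaw = bicycle_update(x, y, yaw, v, delta)
    print(f"final pose: x={x:.2f}, y={y:.2f}, yaw={np.degrees(yaw):.1f} deg")
```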

Lab 2 - Path Planning

In Lab 2 we implement path planning algorithms, whose goal is to find the best path between a start point and a goal point. There are three algorithms to implement: the A* algorithm, the RRT algorithm, and the RRT* algorithm.

A* algorithm RRT algorithm RRT* algorithm
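For reference, here is a minimal A* sketch on a 2-D occupancy grid. The grid format and the 4-connected moves are assumptions for illustration and may differ from the lab's map representation.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """grid: list of rows with 0 = free, 1 = obstacle; start, goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()                    # tie-breaker so the heap never compares nodes
    open_set = [(h(start), next(tie), 0, start, None)]
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:                  # already expanded
            continue
        came_from[node] = parent
        if node == goal:                       # walk parents back to the start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), next(tie), ng, (nr, nc), node))
    return None                                # no path exists

if __name__ == "__main__":
    grid = [[0, 0, 0, 0],
            [1, 1, 0, 1],
            [0, 0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))
```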

Lab 3 - SLAM

In Lab 3 we implement Fast-SLAM, a variant of SLAM (simultaneous localization and mapping). SLAM uses landmark features detected by the vehicle's sensors to estimate the vehicle's own position and state, achieving localization and map building at the same time.

  • Assignment description: lab3.pdf
  • Full code: lab3/program/
  • Results (click a screenshot to view the code):
Fast SLAM
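The heart of Fast-SLAM is a particle filter in which every particle carries its own pose hypothesis plus a small EKF per landmark. The skeleton below illustrates that structure only; the noise values, the range/bearing sensor model, and the data layout are illustrative assumptions, not the lab's implementation.

```python
import numpy as np

N_PARTICLES = 50
R = np.diag([0.1, np.deg2rad(5.0)]) ** 2           # assumed range/bearing measurement noise

def predict(particles, v, w, dt=0.1):
    """Sample a noisy motion update for every particle (WMR-style model)."""
    for p in particles:
        nv = v + np.random.randn() * 0.05
        nw = w + np.random.randn() * np.deg2rad(2.0)
        p["pose"][0] += nv * np.cos(p["pose"][2]) * dt
        p["pose"][1] += nv * np.sin(p["pose"][2]) * dt
        p["pose"][2] += nw * dt

def update(particles, z, landmark_id):
    """z = (range, bearing) to one landmark; run a per-particle EKF update."""
    for p in particles:
        x, y, theta = p["pose"]
        if landmark_id not in p["landmarks"]:       # first sighting: initialize from z
            lx = x + z[0] * np.cos(theta + z[1])
            ly = y + z[0] * np.sin(theta + z[1])
            p["landmarks"][landmark_id] = (np.array([lx, ly]), np.eye(2))
            continue
        mu, sigma = p["landmarks"][landmark_id]
        dx, dy = mu[0] - x, mu[1] - y
        q = dx * dx + dy * dy
        z_hat = np.array([np.sqrt(q), np.arctan2(dy, dx) - theta])
        H = np.array([[dx / np.sqrt(q), dy / np.sqrt(q)],   # Jacobian w.r.t. landmark position
                      [-dy / q,          dx / q]])
        S = H @ sigma @ H.T + R
        K = sigma @ H.T @ np.linalg.inv(S)
        innov = np.array(z) - z_hat
        innov[1] = (innov[1] + np.pi) % (2 * np.pi) - np.pi  # wrap the bearing error
        p["landmarks"][landmark_id] = (mu + K @ innov, (np.eye(2) - K @ H) @ sigma)
        p["weight"] *= np.exp(-0.5 * innov @ np.linalg.inv(S) @ innov)

def resample(particles):
    """Draw particles proportionally to weight, then reset the weights."""
    w = np.array([p["weight"] for p in particles])
    w /= w.sum()
    idx = np.random.choice(len(particles), len(particles), p=w)
    particles[:] = [{"pose": particles[i]["pose"].copy(),
                     "landmarks": dict(particles[i]["landmarks"]),
                     "weight": 1.0} for i in idx]

if __name__ == "__main__":
    particles = [{"pose": np.zeros(3), "landmarks": {}, "weight": 1.0}
                 for _ in range(N_PARTICLES)]
    predict(particles, v=1.0, w=0.1)
    update(particles, z=(2.0, 0.3), landmark_id=0)   # first sighting initializes the landmark
    predict(particles, v=1.0, w=0.1)
    update(particles, z=(1.9, 0.25), landmark_id=0)  # second sighting runs the EKF and reweights
    print("best particle pose:", max(particles, key=lambda p: p["weight"])["pose"])
    resample(particles)
```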

Lab 4 - ORB-SLAM on JetBot

In Lab 4 we run ORB-SLAM 2 on the JetBot, using the JetBot's camera to detect landmark coordinates in the real world. To do this, we first bind the ORB-SLAM library (C++) to the Python environment on the JetBot, then collect chessboard photos with the JetBot's camera to perform camera calibration.

  • Assignment description: lab4.pdf
  • Full code: lab4/program/
  • Results (click a screenshot to view the code):
ORB-SLAM on JetBot
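The calibration step follows the standard OpenCV chessboard workflow. The sketch below shows that workflow under assumed settings (a 9x6 inner-corner board and a hypothetical calib_images/ folder); it is not the lab's exact script.

```python
import glob
import cv2
import numpy as np

BOARD = (9, 6)                                    # inner corners per row and column (assumed)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)   # planar board, z = 0

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.jpg"):     # hypothetical folder of chessboard photos
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD, None)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

if not obj_points:
    raise SystemExit("no chessboard corners found in calib_images/")

ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("camera matrix:\n", K)
print("distortion coefficients:", dist.ravel())   # these values go into the ORB-SLAM settings file
```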

Lab 5 - Semantic Segmentation

In Lab 5 we use PyTorch to implement semantic segmentation, a form of image segmentation that classifies every pixel in an image. We implement it with four common models: Encoder-Decoder, Fully Convolutional Network (FCN), UNet, and PSPNet.

  • Assignment description: lab5.pdf
  • Full code: lab5/program/
  • Results (click a screenshot to view the code):
Encoder-Decoder FCN
UNet PSPNet
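To give a flavour of the PyTorch side, here is a minimal encoder-decoder segmentation sketch trained with pixel-wise cross-entropy. The channel sizes and number of classes are placeholders, not the architectures used in the lab.

```python
import torch
import torch.nn as nn

class TinyEncoderDecoder(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.encoder = nn.Sequential(                  # downsample 4x
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(                  # upsample back to the input size
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))           # per-pixel class logits

if __name__ == "__main__":
    model = TinyEncoderDecoder(n_classes=3)
    images = torch.randn(2, 3, 64, 64)                 # dummy batch
    labels = torch.randint(0, 3, (2, 64, 64))          # dummy per-pixel labels
    logits = model(images)                             # shape (2, 3, 64, 64)
    loss = nn.CrossEntropyLoss()(logits, labels)       # standard pixel-wise loss
    loss.backward()
    print(logits.shape, float(loss))
```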

Lab 6 - Model-Free Reinforcement Learning for Mapless Navigation

In Lab 6 we use reinforcement learning (RL) to replace everything done in Lab 1, Lab 2, and Lab 3. Driving the car from an arbitrary start to an arbitrary goal originally requires building a map with an algorithm such as SLAM, then running path planning and path tracking; reinforcement learning can skip these steps and instead learn to reach the goal from reward signals. The RL algorithm used in Lab 6 is DDPG (Deep Deterministic Policy Gradient).

  • Assignment description: lab6.pdf
  • Full code: lab6/program/
  • Results (click a screenshot to view the code):
Training Loop 50 Training Loop 450 Training Loop 750
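For orientation, the sketch below shows the core DDPG update: critic regression to a Bellman target, the deterministic policy gradient, and Polyak-averaged target networks. The network sizes, hyperparameters, and state/action dimensions are illustrative placeholders, not the lab's configuration.

```python
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 10, 2        # placeholder sizes
GAMMA, TAU = 0.99, 0.005             # discount factor, soft-update rate

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())       # deterministic policy, actions in [-1, 1]
critic = mlp(STATE_DIM + ACTION_DIM, 1)             # Q(s, a)
actor_target = mlp(STATE_DIM, ACTION_DIM, nn.Tanh())
critic_target = mlp(STATE_DIM + ACTION_DIM, 1)
actor_target.load_state_dict(actor.state_dict())
critic_target.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def train_step(s, a, r, s_next, done):
    """One DDPG update on a batch sampled from a replay buffer."""
    with torch.no_grad():                           # Bellman target from the target networks
        a_next = actor_target(s_next)
        q_target = r + GAMMA * (1 - done) * critic_target(torch.cat([s_next, a_next], dim=1))
    q = critic(torch.cat([s, a], dim=1))
    critic_loss = nn.functional.mse_loss(q, q_target)
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()   # ascend Q w.r.t. the policy
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()

    for net, target in ((actor, actor_target), (critic, critic_target)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1 - TAU).add_(TAU * p.data)               # Polyak soft update

if __name__ == "__main__":
    n = 32                                          # dummy replay-buffer batch
    train_step(torch.randn(n, STATE_DIM), torch.rand(n, ACTION_DIM) * 2 - 1,
               torch.randn(n, 1), torch.randn(n, STATE_DIM), torch.zeros(n, 1))
    print("one DDPG update completed")
```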

JetBot

The JetBot work is split by environment into a Unity simulation and the real world. In both environments we need to accomplish three tasks: following the red line (Tracking), avoiding obstacles (Avoidance), and stopping at the finish line (Parking).
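One common way to approach the tracking task is to threshold the camera image for red in HSV space and steer toward the line's centroid. The sketch below illustrates that idea; the HSV ranges and steering gain are guesses for illustration, not the values actually tuned for the JetBot.

```python
import cv2
import numpy as np

def steering_from_frame(frame_bgr, gain=1.0):
    """Return a steering command in [-1, 1] from the red line's horizontal offset."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # red wraps around hue 0, so combine two hue ranges
    mask = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 100), (180, 255, 255))
    m = cv2.moments(mask)
    if m["m00"] == 0:                      # no red pixels: go straight (or stop)
        return 0.0
    cx = m["m10"] / m["m00"]               # centroid column of the red line
    offset = (cx - frame_bgr.shape[1] / 2) / (frame_bgr.shape[1] / 2)
    return float(np.clip(gain * offset, -1.0, 1.0))

if __name__ == "__main__":
    frame = np.zeros((224, 224, 3), np.uint8)
    frame[:, 150:170] = (0, 0, 255)        # synthetic red stripe right of center (BGR)
    print(steering_from_frame(frame))      # positive value -> steer right
```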

Simulation

Simulation - Tracking Simulation - Avoidance

Real World

Real World - Tracking Real World - Avoidance

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Reach out to the maintainer at one of the following places:
