Formula Student Technion Driverless - Implementation

This is the repository for the real-world implementation of the paper Explorations and Lessons Learned in Building an Autonomous Formula SAE Car from Simulations (SIMULTECH 2019 conference).

This repository describes the procedure for deploying a model, trained with FSTDriverless/AirSim using Keras, on an Nvidia Jetson TX2.

This procedure is composed of a few steps:

  1. Freeze a trained Keras model (TensorFlow backend).
  2. Save the frozen graph as a TensorFlow (.pb) model.
  3. Load the TensorFlow model and perform inference using an IDS camera with the TX2.

Our purpose was to send the model's output to an Arduino module. If you don't need this option, you can remove any dependency on serial-port communication.
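
To illustrate that last part, here is a minimal sketch of the serial-port send, assuming pyserial; the port name, baud rate, and one-byte encoding are placeholders and should match your Arduino sketch and wiring:

# Minimal sketch: send a steering prediction (0-255) to an Arduino over serial.
# The port name, baud rate, and one-byte encoding are assumptions for illustration.
import serial

ser = serial.Serial("/dev/ttyUSB0", baudrate=9600, timeout=1)  # assumed port and baud rate

def send_steering(prediction):
    # Clamp to [0, 255] and send as a single byte.
    value = int(max(0, min(255, round(prediction))))
    ser.write(bytes([value]))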


Prerequisites

If you already have a frozen TensorFlow model, you only need the Jetson prerequisites.

x86 PC contains:

  • Operating system: Windows 10
  • GPU: Nvidia GTX 1080 or higher (recommended)
  • Development: CUDA 9.0 and Python 3.5.
  • Python libraries: Keras 2.1.2, TensorFlow 1.6.0.
  • Note: newer versions of Keras or TensorFlow may work, but can cause syntax errors.

Jetson TX2 contains:

What's inside

What's new:

  • We now share our frozen TensorFlow trained model. It can be found in the Models folder.

This repository contains two code files:

freezing_keras_to_tf.py
This script loads a trained Keras model, freezes it, and saves it as a TensorFlow model.
The code relies on an adjacent folder named "models" that contains a Keras model named "model.h5". The output model will be stored in this folder as well.

Note: you'll need to identify your model's output node. In our case, it was the last activation function in the model, so our output node was "output/Sigmoid".
To print the list of your model's nodes in Keras, add the following command:

[print(n.name) for n in tf.get_default_graph().as_graph_def().node]  
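
For reference, this is a minimal sketch of the freezing step, assuming Keras 2.x on a TensorFlow 1.x backend (the versions listed in the prerequisites). The paths follow the repository's convention ("models/model.h5" in, "models/model_tf.pb" out); "output/Sigmoid" is the node from our case and should be replaced with the name printed by the command above:

# Minimal freezing sketch (Keras 2.x on a TensorFlow 1.x backend).
# Paths follow the repository's convention; replace "output/Sigmoid"
# with the output node name of your own model.
from keras import backend as K
from keras.models import load_model
from tensorflow.python.framework import graph_util, graph_io

K.set_learning_phase(0)                    # inference mode: no dropout / BN updates
model = load_model("models/model.h5")      # the trained Keras model

sess = K.get_session()
frozen = graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), ["output/Sigmoid"])
graph_io.write_graph(frozen, "models", "model_tf.pb", as_text=False)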

inference.py
Using a frozen TensorFlow model, this script grabs images from the IDS camera, predicts the corresponding steering angle, and sends it to the Arduino module.
The code relies on a model named "model_tf.pb" located in the same folder.
In our case, the output is a prediction in the range [0, 255].

The code implements inference for the PilotNet architecture, so the input images are adjusted to fit the network. It's recommended to read the code carefully.
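
As a rough illustration (not the exact script), here is a sketch of loading the frozen graph and predicting one frame with the TensorFlow 1.x API. The input tensor name, the 200x66 PilotNet input size, and the scaling of the sigmoid output to [0, 255] are assumptions; the real script also grabs and crops the frames from the IDS camera before resizing:

# Minimal inference sketch (TensorFlow 1.x API). Tensor names, the input size,
# and the output scaling are assumptions for illustration only.
import cv2
import numpy as np
import tensorflow as tf

with tf.gfile.GFile("model_tf.pb", "rb") as f:      # the frozen model
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")

sess = tf.Session(graph=graph)
x = graph.get_tensor_by_name("input_1:0")           # assumed input tensor name
y = graph.get_tensor_by_name("output/Sigmoid:0")    # output node from the freezing step

def predict_steering(frame):
    # Resize a BGR frame to the PilotNet input size (assumed 200x66) and normalize.
    img = cv2.resize(frame, (200, 66)).astype(np.float32) / 255.0
    pred = sess.run(y, feed_dict={x: img[np.newaxis, ...]})
    return float(pred.squeeze()) * 255.0             # assumed mapping to [0, 255]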

Citing

If this repository helped you in your research, please consider citing:

@article{zadok2019explorations,
  title={Explorations and Lessons Learned in Building an Autonomous Formula SAE Car from Simulations},
  author={Zadok, Dean and Hirshberg, Tom and Biran, Amir and Radinsky, Kira and Kapoor, Ashish},
  journal={arXiv preprint arXiv:1905.05940},
  year={2019}
}

Formula Student Technion team

Tom Hirshberg and Dean Zadok.

Acknowledgments

We would like to thank our advisors: Dr. Kira Radinsky, Dr. Ashish Kapoor, Boaz Sternfeld and David Dovrat.
Thanks to the Intelligent Systems Lab (ISL) at the Technion for their support.
