Self Driving Simulation

A self-driving car in a simulated environment. Explore various state-of-the-art methods for autonomous driving in a fun, visual format.

  • Built with Unity3D (a free game-development engine).
  • Easily add new tracks and tweak prebuilt scripts, such as gravitational acceleration.

Jungle Track

Download links: Linux, macOS, Windows

Table of Contents

  • Setup
  • Usage
  • About the Model
  • Credits
  • Contribution

Setup

All required dependencies are neatly packed in the requirements.txt file.

NOTE: This project was developed with Python 3.6.5, so use the appropriate interpreter (i.e. python3 instead of python, which may still point to Python 2.7, and pip3 instead of pip).

> pip install --upgrade pip

Next, get the project onto your local machine. Start by cloning it from GitHub:

> git clone https://github.com/victor-iyiola/self-driving-simulation.git
> cd self-driving-simulation

or:

You can download the project as a .zip archive here, extract it, and change into the project directory:

> cd /path/to/self-driving-simulation

Then install the requirements:

> pip install --upgrade -r requirements.txt

Usage

How the Simulator works

  • Records images from the center, left, and right cameras, along with the associated steering angle, speed, throttle, and brake.
  • Saves the recordings to driving_log.csv (see the sketch after this list).
  • Ideally you would drive with a joystick, but the keyboard works too.
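
As a rough sketch of working with that log (the column layout below is an assumption: three image paths followed by steering, throttle, brake, and speed), you could load it with pandas:

```python
import pandas as pd

# Assumed column layout of the simulator's recording format: three
# camera image paths followed by the recorded control/telemetry values.
COLUMNS = ['center', 'left', 'right', 'steering', 'throttle', 'brake', 'speed']

log = pd.read_csv('data/driving_log.csv', names=COLUMNS)

# The camera frames are the inputs; the steering angle is the label.
image_paths = log[['center', 'left', 'right']].values
steering = log['steering'].values
```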

Run the pre-trained model

To drive, launch the simulator in Autonomous Mode (click Autonomous Mode in the main menu), then run drive.py as follows:

> python drive.py model-005.h5
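
For a picture of what happens under the hood, here is a minimal sketch of the inference step (the frame shape and preprocessing are assumptions; the real drive.py also handles communication with the simulator):

```python
import numpy as np
from keras.models import load_model

# Load the pre-trained network saved during training.
model = load_model('model-005.h5')

def predict_steering(frame: np.ndarray) -> float:
    """Predict a steering angle from a single preprocessed camera frame."""
    # Keras models expect a batch axis: (1, height, width, channels).
    batch = np.expand_dims(frame, axis=0)
    return float(model.predict(batch)[0][0])
```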

Train the model

To train, first generate training data with the simulator (press R while in Training Mode) and save the recordings to path/to/self_driving_car/data/. Then run:

> python3 model.py

This will generate a file model-{epoch}.h5 whenever the epoch's performance beats the previous best. For example, the first epoch will generate a file called model-000.h5.
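
The numbered checkpoints follow the standard Keras pattern; a sketch, assuming model.py monitors validation loss:

```python
from keras.callbacks import ModelCheckpoint

# Write model-{epoch}.h5 only when the validation loss improves on the
# previous best, matching the behaviour described above.
checkpoint = ModelCheckpoint('model-{epoch:03d}.h5',
                             monitor='val_loss',
                             save_best_only=True,
                             mode='auto')

# Later passed to training: model.fit(..., callbacks=[checkpoint])
```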

About the Model

Training Mode - Behavioral cloning

A 9-layer convolutional network, based on NVIDIA's End-to-End Learning for Self-Driving Cars paper. In that work, 72 hours of driving data were collected from human drivers in all sorts of conditions.
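
A sketch of that NVIDIA-style architecture in Keras: a normalization layer, five convolutional layers, and three fully connected layers leading to a single steering output. The 66x200 RGB input follows the paper; the ELU activations and dropout are common additions in behavioral-cloning replications and are not guaranteed to match this project's model.py exactly:

```python
from keras.models import Sequential
from keras.layers import Lambda, Conv2D, Dropout, Flatten, Dense

INPUT_SHAPE = (66, 200, 3)  # height, width, channels, as in the paper

model = Sequential([
    # Normalize pixel values to [-1, 1].
    Lambda(lambda x: x / 127.5 - 1.0, input_shape=INPUT_SHAPE),
    Conv2D(24, (5, 5), strides=(2, 2), activation='elu'),
    Conv2D(36, (5, 5), strides=(2, 2), activation='elu'),
    Conv2D(48, (5, 5), strides=(2, 2), activation='elu'),
    Conv2D(64, (3, 3), activation='elu'),
    Conv2D(64, (3, 3), activation='elu'),
    Dropout(0.5),
    Flatten(),
    Dense(100, activation='elu'),
    Dense(50, activation='elu'),
    Dense(10, activation='elu'),
    Dense(1),  # steering command (1/r)
])
```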

Hardware design

  • Three cameras.
  • The steering command is obtained by tapping into the vehicle's Controller Area Network (CAN) bus.
  • NVIDIA's DRIVE PX onboard computer with GPUs.

In order to make the system independent of the car geometry, the steering command is 1/r, where r is the turning radius in meters. 1/r was used instead of r to prevent a singularity when driving straight (the turning radius for driving straight is infinity). 1/r smoothly transitions through zero from left turns (negative values) to right turns (positive values).
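
For illustration (a hypothetical helper, not from the project's code):

```python
def steering_command(turning_radius_m: float) -> float:
    """Steering command as the inverse turning radius (1/r), in 1/meters."""
    return 1.0 / turning_radius_m

steering_command(-20.0)         # left turn  -> -0.05
steering_command(20.0)          # right turn -> +0.05
steering_command(float('inf'))  # straight   ->  0.0 (no singularity)
```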

Software Design (supervised learning)

Images are fed into a CNN, which computes a proposed steering command. The proposed command is compared to the desired command for that image, and the weights of the CNN are adjusted to bring the output closer to the desired one. The weight adjustment is accomplished using backpropagation.
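
Continuing the sketches above, the comparison and weight update amount to minimizing the mean squared error between predicted and recorded steering angles (X_train, y_train, and the validation arrays are hypothetical placeholders):

```python
# Mean squared error between proposed and desired steering commands;
# backpropagation (via the Adam optimizer) adjusts the CNN's weights.
model.compile(loss='mse', optimizer='adam')
model.fit(X_train, y_train,
          validation_data=(X_valid, y_valid),
          epochs=10,
          callbacks=[checkpoint])
```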

Eventually, the trained network generates steering commands using just the single center camera.

Credits

Contribution

This project is released under the MIT License.
