
🚗 Self-Driving Car Project 🚗

Demo GIF



Table of Contents

  1. Introduction
  2. Prerequisites
  3. Data Preparation
  4. Model Architecture
  5. Usage


Introduction

This deep learning project uses a Convolutional Neural Network (CNN) to predict steering angle and speed for the Udacity Self-Driving Car Simulator. The model takes input from the car's three cameras (left, center, and right) and predicts the car's movement.

  • Car Movement: The car can move left (←), right (→), accelerate (↑), and decelerate (↓).
  • Camera Setup: The car is equipped with three cameras (left, center, right).


Prerequisites

  • Python 3 with pip
  • The Udacity Self-Driving Car Simulator
  • The Python packages listed in requirements.txt (installation is covered in the Usage section)

Data Preparation

Manual Data Collection:

  1. Open the simulator and select Training Mode.
  2. Click Record, choose a folder to save the data, and drive the car for about 10 minutes.
    • Driving for 10 minutes will yield around 18,000 images (6,000 images from each camera).

Download Pre-recorded and Preprocessed Data:


After data collection, you will have a driving_log.csv file, which contains information about the collected data:

Center Camera Image Path | Left Camera Image Path | Right Camera Image Path | Steering Angle | Throttle | Brake
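
For a quick sanity check of the log, something like the following works. The path and column names are assumptions based on the table above; the simulator may also append a speed column, and a manually recorded file usually has no header row.

import pandas as pd

# Inspect the collected driving log (path is an example).
cols = ["center", "left", "right", "steering", "throttle", "brake"]  # add "speed" if your log has a 7th column
log = pd.read_csv("data/track1/driving_log.csv", header=None, names=cols)
print(log.head())
print(f"{len(log)} samples, steering range [{log['steering'].min():.3f}, {log['steering'].max():.3f}]")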


Data Preprocessing

This step is needed for manually collected data only.

Follow the Data Preprocessing guide.
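
The exact steps live in that guide. As a rough illustration only, a typical preprocessing pass for simulator frames (crop, resize, normalize) looks like this, assuming OpenCV is available:

import cv2
import numpy as np

def preprocess(frame):
    """Illustrative only -- not necessarily this project's exact pipeline."""
    cropped = frame[60:-25, :, :]              # drop the sky (top) and the car hood (bottom)
    resized = cv2.resize(cropped, (200, 66))   # a common input size for PilotNet-style networks
    return resized.astype(np.float32) / 255.0  # scale pixel values to [0, 1]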



Model Architecture
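
The details live in the project docs. As a rough orientation only, here is a minimal Keras sketch of the PilotNet-style CNN commonly paired with this simulator; the layer sizes and the two-value output ([steering angle, speed], matching the Introduction) are illustrative assumptions, not this repository's exact network:

from tensorflow.keras import layers, models

def build_model(input_shape=(66, 200, 3)):
    # Illustrative PilotNet-style network, not the repository's actual architecture.
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(24, 5, strides=2, activation="elu"),
        layers.Conv2D(36, 5, strides=2, activation="elu"),
        layers.Conv2D(48, 5, strides=2, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Flatten(),
        layers.Dense(100, activation="elu"),
        layers.Dense(50, activation="elu"),
        layers.Dense(10, activation="elu"),
        layers.Dense(2),  # [steering angle, speed], as described in the Introduction
    ])
    model.compile(optimizer="adam", loss="mse")
    return model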



Usage

Setup Environment

  • Create a virtual environment:
python -m venv .venv
source .venv/bin/activate  # On Windows use .venv\Scripts\activate
  • Install required packages from requirements.txt:
pip install -r requirements.txt

Training Model

  • Set data_dir in training.py to the folder holding both the preprocessed driving_log.csv and the IMG folder. For example:
data_dir = 'data/processed/track1'
  • Run training.py:
python training.py

Autonomous Run

Once the model is trained, you can use it to predict the car's movements based on the camera input in the simulator.

  • First, launch the Simulator App and select Autonomous mode; at this point, the car will remain stationary.

  • Next, run the following command to connect to the Simulator (without recording):

python main.py <model path>

Or (with recording enabled):

python main.py <model path> <image folder>
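
main.py handles this connection for you. For background, the simulator's autonomous mode talks to a small Socket.IO server on port 4567; a minimal sketch of that loop (predicting steering only, with a fixed throttle, and assuming python-socketio, eventlet, Flask, Pillow, and a Keras model file) looks like this:

import base64
from io import BytesIO

import eventlet
import numpy as np
import socketio
from flask import Flask
from PIL import Image
from tensorflow.keras.models import load_model

sio = socketio.Server()
app = Flask(__name__)
model = load_model("model.h5")  # example model path

@sio.on("telemetry")
def telemetry(sid, data):
    # The simulator streams the center-camera frame as a base64-encoded JPEG.
    frame = np.asarray(Image.open(BytesIO(base64.b64decode(data["image"]))), dtype=np.float32)
    # In practice, apply the same preprocessing used during training before predicting.
    steering = float(model.predict(frame[None, ...], verbose=0)[0][0])
    sio.emit("steer", data={"steering_angle": str(steering), "throttle": "0.2"})

if __name__ == "__main__":
    eventlet.wsgi.server(eventlet.listen(("", 4567)), socketio.WSGIApp(sio, app))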

Image to Video Conversion

  • To convert the captured images into a video, run the following command:
python video.py <image folder> [--fps <value>]

The --fps argument defaults to 60 if not specified.
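
video.py wraps this step. Conceptually, stitching a folder of frames into a video with OpenCV (an assumption; the script may use a different library) looks like this:

import glob
import cv2

def images_to_video(image_folder, output_path="output.mp4", fps=60):
    # Frames are written in filename order; recorded frames are usually timestamped.
    frames = sorted(glob.glob(f"{image_folder}/*.jpg"))
    height, width = cv2.imread(frames[0]).shape[:2]
    writer = cv2.VideoWriter(output_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))
    for path in frames:
        writer.write(cv2.imread(path))
    writer.release()

images_to_video("run1")  # "run1" is an example image folder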
