Semantic Segmentation on KITTI Road Dataset

Deep Learning for Computer Vision


The Project

In this project, a Fully Convolutional Network (FCN) is used to label the pixels of a road in images and video.

Since we want to preserve the spatial dimensions of the image, we use an FCN rather than a standard deep convolutional network. The latter excels at extracting meaningful features from the input but does not preserve the original spatial dimensions.

The FCN takes the output of a VGG16 encoder, applies 1x1 convolutions, and up-samples it with transposed convolutions. Skip connections are added to make up for the information lost during the encoder's down-sampling, so the network can combine information from multiple resolutions.

Overall, the FCN uses three techniques (a code sketch follows the list):

  1. 1 x 1 Convolutions
  2. Transposed Convolutions
  3. Skip connections
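A minimal sketch of such a decoder, assuming TensorFlow 1.x and the VGG16 encoder tensors (layer 3, layer 4, and layer 7 outputs) exposed by the Udacity starter code; the function and argument names here are illustrative, not necessarily those used in main.py:

import tensorflow as tf

def decoder(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
    # 1x1 convolutions reduce each encoder output to num_classes channels
    l7 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1, padding='same')
    l4 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1, padding='same')
    l3 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1, padding='same')

    # Transposed convolution up-samples layer 7 by 2x to match layer 4
    up1 = tf.layers.conv2d_transpose(l7, num_classes, 4, strides=2, padding='same')
    skip1 = tf.add(up1, l4)   # skip connection from layer 4

    # Up-sample by 2x again to match layer 3 and add its skip connection
    up2 = tf.layers.conv2d_transpose(skip1, num_classes, 4, strides=2, padding='same')
    skip2 = tf.add(up2, l3)   # skip connection from layer 3

    # Final 8x up-sampling restores the original image resolution
    return tf.layers.conv2d_transpose(skip2, num_classes, 16, strides=8, padding='same')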

The results can be seen in the video above.

Parameters

After tuning, the following parameters gave the best results (see the training sketch after this list):

  • Epochs : 15
  • Batch Size : 8
  • Learning Rate : 0.00005
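A sketch of how these hyperparameters might be wired into the loss and optimizer, assuming the TensorFlow 1.x setup typical of this project; the variable and function names are illustrative:

import tensorflow as tf

EPOCHS = 15
BATCH_SIZE = 8
LEARNING_RATE = 0.00005

def optimize(nn_last_layer, correct_label, learning_rate, num_classes):
    # Flatten predictions and labels so each row corresponds to one pixel
    logits = tf.reshape(nn_last_layer, (-1, num_classes))
    labels = tf.reshape(correct_label, (-1, num_classes))

    # Per-pixel softmax cross-entropy loss
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

    # Adam optimizer with the tuned learning rate
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(loss)
    return logits, train_op, loss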

The model was trained on an AWS g3.4x instance.

Requirements

Frameworks and Packages

The following packages are required:

Dataset

Download the KITTI Road dataset from here. Extract the dataset into the data folder. This will create the folder data_road with all of the training and test images.
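Assuming the standard KITTI Road archive, the extracted files should end up laid out roughly as follows (only the folders the project needs are shown):

data/
  data_road/
    training/
      image_2/      (training images)
      gt_image_2/   (ground-truth road annotations)
    testing/
      image_2/      (test images)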

Use

Run

Run the project with the following command:

python main.py

Note: if running this in a Jupyter Notebook, system messages, such as those regarding test status, may appear in the terminal rather than in the notebook.

This project is part of the Self-Driving Car Engineer Nanodegree Program; the starter code was provided by Udacity.
