Enhancing LiDAR-based 3D Object Detection through Simulation

This repository contains the Master's thesis in Electrical Engineering and Information Technology, carried out at the Institute for Measurement Systems and Sensor Technology, Technische Universität München.

This Master's thesis investigates how LiDAR-based 3D object detection for autonomous vehicles can be enhanced with synthetic point cloud data generated through the Ansys AVxcelerate CarMaker Co-Simulation process. The study focuses on integrating and aligning synthetic and real-world data and on applying fine-tuning techniques to the PointPillars network. The experiments reveal challenges in ensuring that models generalize across data types, especially when detecting complex classes such as pedestrians. A balanced combination of synthetic and real-world data yields promising results, and a hybrid training approach, consisting of initial pre-training on synthetic data followed by fine-tuning on real-world data, shows particular potential when real-world data is scarce. The study thus provides insights to guide future improvements in the training and testing methodologies for autonomous driving systems.

Problem Statement

Despite the accuracy of depth perception provided by LiDAR technology, training deep learning algorithms for LiDAR-based object detection poses a significant challenge due to the scarcity of large-scale annotated data.

Synthetic data generation through simulation software is a potential solution, but it often fails to accurately mimic real-world sensor data because of its reliance on handcrafted 3D assets and simplified physics, creating a 'synthetic-to-real gap'. Furthermore, models trained solely on synthetic data may not perform well in real-world scenarios because of differences in data distribution.

This research investigates how to bridge this gap using the Ansys AVxcelerate Sensors Simulator (AVX), which provides a virtual testing environment for the sensors used in autonomous vehicles. However, how accurately the simulator replicates real-world data, and how this affects detector performance, still needs to be critically evaluated.

Objectives

The main objectives are outlined below.

  • Generate a replica of the renowned KITTI dataset using synthetic data from the Velodyne HDL-64E LiDAR model in Ansys AVxcelerate Sensors Simulator (AVX), in co-simulation with CarMaker software.
  • Apply bounding box extraction algorithms to the synthetic point clouds and create KITTI-compatible labels (see the label-format sketch after this list).
  • Investigate the potential of synthetic data in enhancing the performance of object detection algorithms.
  • Compare performance metrics of models trained on diverse data types (synthetic versus real-world).
  • Evaluate how varying the ratio of synthetic to real-world training data influences detection performance.
  • Assess the viability and efficacy of a hybrid training strategy involving pre-training on synthetic data with subsequent fine-tuning on real-world data (see the training sketch below).
  • Analyze the impact of pre-training duration on the optimization of model parameters.
  • Conduct a detailed qualitative analysis of the trained networks.
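
The bounding-box extraction step above writes each simulated object into the 15-field KITTI object label format (type, truncation, occlusion, alpha, 2D image box, 3D dimensions, 3D location in camera coordinates, and yaw). The Python sketch below only illustrates that field layout; the function name and the example values are placeholders and not the repository's actual extraction script.

```python
def kitti_label_line(obj_type, bbox_2d, dims_hwl, loc_cam, rotation_y,
                     truncated=0.0, occluded=0, alpha=0.0):
    """Return one line in the KITTI object label format (15 fields).

    obj_type   -- 'Car', 'Pedestrian', 'Cyclist', ...
    bbox_2d    -- (left, top, right, bottom) in image pixels
    dims_hwl   -- (height, width, length) in metres
    loc_cam    -- (x, y, z) of the box in camera coordinates, metres
    rotation_y -- yaw around the camera Y axis, in radians
    """
    fields = [
        obj_type,
        f"{truncated:.2f}", str(occluded), f"{alpha:.2f}",
        *(f"{v:.2f}" for v in bbox_2d),
        *(f"{v:.2f}" for v in dims_hwl),
        *(f"{v:.2f}" for v in loc_cam),
        f"{rotation_y:.2f}",
    ]
    return " ".join(fields)

# Illustrative values only: one object written to a KITTI-style label file.
with open("000000.txt", "w") as f:
    f.write(kitti_label_line("Car", (712.4, 143.0, 810.7, 307.9),
                             (1.56, 1.62, 3.89), (2.0, 1.6, 15.0), -1.57) + "\n")
```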

Through these objectives, I aim to provide valuable insights into the benefits and challenges of using synthetic data in training object detection algorithms for autonomous vehicles.
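
As context for the hybrid-training objective, the underlying pattern is simply pre-training followed by warm-started fine-tuning. The sketch below shows that pattern in plain PyTorch under stated assumptions: the actual experiments use OpenPCDet and the PointPillars network, while the tiny model and random tensors here are stand-ins that keep the example self-contained and runnable.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in model and data; in the thesis these would be a PointPillars-style
# detector and synthetic (AVX) or real (KITTI) point-cloud samples.
model = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

def make_loader(n):
    # Placeholder data so the pattern itself is runnable.
    x, y = torch.randn(n, 4), torch.randint(0, 2, (n,))
    return DataLoader(TensorDataset(x, y), batch_size=16, shuffle=True)

synthetic_loader, real_loader = make_loader(256), make_loader(64)

def train(loader, epochs, lr):
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            optim.zero_grad()
            loss_fn(model(x), y).backward()
            optim.step()

# Stage 1: pre-train on (here: mock) synthetic data and keep the weights.
train(synthetic_loader, epochs=5, lr=1e-3)
torch.save(model.state_dict(), "pretrained_synthetic.pth")

# Stage 2: fine-tune on (here: mock) real-world data from the pre-trained
# weights, typically with a smaller learning rate and fewer epochs.
model.load_state_dict(torch.load("pretrained_synthetic.pth"))
train(real_loader, epochs=2, lr=1e-4)
```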

Contents

  • Thesis: This is my Master's thesis PDF document.
  • Methodology: This section outlines the research methodology, emphasizing the LiDAR sensor modeling. It provides a detailed explanation of the Ansys AVxcelerate CarMaker Co-Simulation process, the processing of simulation outputs, and how simulated scenarios are scaled.
  • Experimental_Design: This section describes the experimental design, specifying the datasets used, network settings, evaluation metrics, and the adaptation of KITTI difficulty levels for synthetic dataset evaluation. It also presents the different experiments carried out.
  • Results: This section presents the results of the experiments: a quantitative analysis of each experiment, an assessment of how pre-training and training duration affect the Average Precision for 3D object detection (AP 3D) scores, and a qualitative analysis on the AVX and KITTI test sets.
  • Python_scripts: These are the Python scripts required to process the synthetic point clouds and create the KITTI labels, calibration files, etc. (a minimal conversion sketch follows this list). See the README.md for usage instructions.
  • VM_scripts: These are the scripts required for training, evaluation, data preparation, and point cloud visualization; they need to be transferred to the virtual machine. Refer to the README.md for usage instructions.
  • cfgs: These are configuration files required for training and evaluation according to OpenPCDet.
  • kitti_models: These are the PointPillars network models required for training and evaluation, according to OpenPCDet.
  • docs: These are supporting documents used by the other READMEs in this repository.
  • RUN: This README explains how to run the whole framework: creating the synthetic point clouds and their labels, preparing them for training, running training and evaluation, and visualizing the results.
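
For reference, KITTI stores each LiDAR scan as a flat binary file of float32 values in x, y, z, intensity order, which is also the layout OpenPCDet's KITTI dataloader reads back. The snippet below is a minimal, hypothetical conversion example, not one of the repository's Python_scripts; it assumes the synthetic AVX cloud is already available as an N x 4 array.

```python
import numpy as np

def save_kitti_bin(points, out_path):
    """Write an (N, 4) array of x, y, z, intensity to a KITTI-style .bin file."""
    pts = np.asarray(points, dtype=np.float32).reshape(-1, 4)
    pts.tofile(out_path)

# Hypothetical usage with placeholder data standing in for a synthetic cloud.
cloud = np.random.rand(1000, 4).astype(np.float32)
save_kitti_bin(cloud, "000000.bin")

# Round-trip check mirroring how KITTI velodyne files are normally read back.
restored = np.fromfile("000000.bin", dtype=np.float32).reshape(-1, 4)
assert np.allclose(cloud, restored)
```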
