
Optimal Trajectory Generation in the Duckietown Environment

Introduction

The goal of this project is to implement a trajectory generation scheme for the Duckietown environment, a simulator providing customizable city-like scenarios with lanes and intersections. The approach reduces the search space by optimizing specific cost functions that yield polynomial trajectories, coupled with a feedback linearization control. The vehicle is controlled using information from the monocular camera mounted on top of it, which is responsible for recognizing the center lane defining the local Frenet frame. The simulations run in the Duckietown simulator (based on OpenAI Gym) using Python.
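As a reference for the optimization just described, the usual Frenet-frame formulation (in the style of Werling et al., which planners of this kind typically follow; the cost below is illustrative and not taken from this repository) scores each candidate lateral trajectory d(t) with a jerk-plus-time-plus-deviation cost:

```math
C_d = k_j \int_{t_0}^{t_1} \dddot{d}(t)^2 \, dt \; + \; k_t \,(t_1 - t_0) \; + \; k_d \, d(t_1)^2
```

Minimizing the jerk integral under endpoint constraints yields quintic polynomials in t, which is why the search space reduces to a family of polynomial trajectories.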


Installation

```bash
conda create -n gym-duckietown python=3.8.5
conda activate gym-duckietown
pip install -r requirements.txt
```

Python Simulator Environment

We simulated an environment with a trajectory similar to the Duckietown circuit and tested the planner, the controller, and the unicycle in three cases: without obstacles, with fixed obstacles, and with moving obstacles. Since no camera is available in this setting, we assumed a sensor that detects obstacles whenever they fall within the visual range.
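Such a range-limited sensor can be sketched in a few lines (a hypothetical helper for illustration; the names `visible_obstacles`, `robot_pose`, and `visual_range` are not taken from the repository):

```python
import numpy as np

def visible_obstacles(robot_pose, obstacles, visual_range=2.0):
    """Return the obstacles within `visual_range` of the robot position.

    robot_pose: (x, y, theta); obstacles: iterable of (x, y) positions.
    """
    px, py, _ = robot_pose
    # An obstacle is "seen" only if its Euclidean distance to the robot
    # is at most the visual range.
    return [obs for obs in obstacles
            if np.hypot(obs[0] - px, obs[1] - py) <= visual_range]
```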

Run

Run without obstacles

```bash
python tester.py -t planner_full -p --config ./config/config_circle2D.json
```

Run with fixed obstacles

```bash
python tester.py -t planner_obstacles -p --config ./config/config_circle2D.json
```

Run with moving obstacles

```bash
python tester.py -t planner_moving_obstacles -p --config ./config/config_circle2D.json
```

Duckietown

In Duckietown no information about the environment is known in advance; everything is learned through a monocular camera placed on the robot. In our work we extracted this information using two filters, CentralFilter() and LateralFilter() (implementation here), for the central yellow line and the lateral white lines respectively.
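As an illustration of what such color filters typically do, here is a minimal sketch assuming HSV thresholding with OpenCV (the threshold values are common Duckietown guesses, not the repository's tuned ones):

```python
import cv2
import numpy as np

def yellow_mask(bgr_image):
    # Threshold in HSV space around typical Duckietown yellow.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array([20, 100, 100]), np.array([30, 255, 255]))

def white_mask(bgr_image):
    # White: low saturation, high value, any hue.
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, np.array([0, 0, 180]), np.array([180, 40, 255]))
```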

The filtered images are passed through a Segmentator() (implementation here), which processes each filter mask and finds the boundaries of its regions.
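A minimal sketch of contour-based boundary extraction of this kind, operating on the masks from the filters above (the use of cv2.findContours and the `min_area` cutoff are assumptions, not the repository's exact method):

```python
import cv2

def find_boundaries(mask, min_area=50.0):
    # External contours of the binary mask are the region boundaries.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Drop tiny regions that are likely noise.
    return [c for c in contours if cv2.contourArea(c) >= min_area]
```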

Everything is then passed to a SemanticMapper() (implementation here), which assigns a type to each filtered region. For every object identified by a filter, the area, the center with respect to the robot's position, and an inertia tensor are computed; these features provide the thresholds used to type the object. In this way we determine whether an object detected, for example, by the yellow filter is a yellow line, a duck, or unknown. A list of the objects and the polynomial fits of the yellow and white lines is returned.
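The feature computation can be sketched with OpenCV image moments (the labels and thresholds below are placeholders for illustration; the repository's typing rules may differ):

```python
import cv2
import numpy as np

def classify_contour(contour):
    m = cv2.moments(contour)
    if m["m00"] == 0:
        return None
    area = m["m00"]
    # Centroid of the region (to be expressed relative to the robot pose).
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    # Central second moments play the role of a 2D inertia tensor.
    inertia = np.array([[m["mu20"], m["mu11"]],
                        [m["mu11"], m["mu02"]]]) / area
    principal = np.linalg.eigvalsh(inertia)  # ascending principal moments
    if principal[1] > 10.0 * max(principal[0], 1e-9):
        label = "line"       # long and thin region
    elif area > 500.0:
        label = "duck"       # large compact blob (placeholder threshold)
    else:
        label = "unknown"
    return {"label": label, "area": area, "center": (cx, cy)}
```

For the returned line fits, something like `np.polyfit(xs, ys, deg=2)` over a line contour's pixels would produce the polynomial coefficients mentioned above.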

The objects are then given as input to the ObstacleTracker() (implementation here), which establishes, based on their type, whether they are obstacles or simply lines.
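In its simplest form, this split can be sketched as follows (a hypothetical helper reusing the placeholder labels from the sketch above):

```python
def split_objects(objects):
    # Lines are part of the lane geometry; everything else is treated
    # as a potential obstacle to track.
    lines = [o for o in objects if o["label"] == "line"]
    obstacles = [o for o in objects if o["label"] != "line"]
    return obstacles, lines
```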

Run Duckietown

We analyzed two scenarios: an environment without obstacles and one with obstacles.

Run without obstacles

The Duckietown environment is loop_empty.

```bash
python tester.py -t dt_mapper_semantic
```

Run with obstacles

The Duckietown environment is a modified version of loop_obstacles.

```bash
python tester.py -t dt_obstacles
```
