Lane Following Using Reinforcement Learning
Paper: here
Source code: here
Documentation: here
We propose the use of Duckietown, an open-source platform for autonomy education and research, as the hardware implementation for our reinforcement learning lane-following task. Duckietown includes autonomous vehicles called "Duckiebots" that are equipped with onboard computers, such as the Raspberry Pi, and a variety of sensors, including cameras and odometry sensors. The Duckiebots are capable of performing complex single-robot and multi-robot behaviors, making them an ideal platform for autonomy education and research. We start with the most basic task of the Duckiebot: lane following.
This task is implemented using a realistic computer vision pipeline with the following steps:
- Capture an image from the onboard camera
- Filter noise (thresholding/masking)
- Detect the lane lines
- Compute the lane centroid and deduce the error
- Feed these calculations into the reinforcement learning algorithm, which learns to take the next best action
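The pipeline above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name, threshold value, and synthetic image are assumptions, not the project's actual code): a grayscale frame is thresholded to isolate the bright lane markings, the centroid of the remaining pixels is computed, and the error is the centroid's signed offset from the image center.

```python
import numpy as np

def lane_error(gray, threshold=200):
    """Hypothetical sketch of the error step: threshold the frame,
    find the lane centroid, and return its offset from the image centre."""
    # Filter noise: keep only bright pixels (the painted lane markings)
    mask = gray > threshold
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no lane detected in this frame
    centroid_x = xs.mean()
    # Signed offset from the image centre, in pixels:
    # negative = lane is to the left, positive = to the right
    return centroid_x - gray.shape[1] / 2.0

# Synthetic 10x100 frame with a bright stripe at columns 70-79
img = np.zeros((10, 100), dtype=np.uint8)
img[:, 70:80] = 255
print(lane_error(img))  # centroid at x=74.5, centre at 50 -> error 24.5
```

In the real pipeline the threshold/mask step would be done on the camera image (e.g. with OpenCV), but the centroid-and-offset logic stays the same.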
We solve this problem using reinforcement learning. OpenAI ROS provides a way to build a reinforcement learning algorithm, using ROS to control the agent (i.e., the car). To make the agent train by itself, we will build the following parts:
- Gazebo environment
- Robot environment
- Task environment
- Learning algorithm (i.e., the training script)
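OpenAI ROS exposes these environment layers behind the standard Gym-style `reset`/`step` interface, so the training script only sees observations, rewards, and actions. The sketch below shows the shape of such a training script, assuming a tabular Q-learning agent and a toy stand-in environment (both hypothetical; the real task environment would drive the Gazebo simulation and compute the reward from the lane error):

```python
import random
from collections import defaultdict

class StubLaneEnv:
    """Toy stand-in for the OpenAI ROS task environment (hypothetical).
    State: discretised lane error in {-2..2}; actions: left/straight/right."""
    def reset(self):
        self.state = random.randint(-2, 2)
        return self.state

    def step(self, action):
        # action 0 = steer left (-1), 1 = straight (0), 2 = steer right (+1)
        self.state = max(-2, min(2, self.state + (action - 1)))
        # Reward centring the lane; penalise large errors
        reward = 1.0 if self.state == 0 else -abs(self.state)
        return self.state, reward, False, {}

def train(env, episodes=200, steps=20, alpha=0.5, gamma=0.9, eps=0.1):
    """Minimal epsilon-greedy Q-learning loop over the Gym-style interface."""
    q = defaultdict(lambda: [0.0, 0.0, 0.0])
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            # Epsilon-greedy action selection
            if random.random() < eps:
                a = random.randrange(3)
            else:
                a = max(range(3), key=lambda i: q[s][i])
            s2, r, done, _ = env.step(a)
            # Q-learning update toward the bootstrapped target
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train(StubLaneEnv())
```

After training, the greedy policy steers back toward the lane centre: from a leftward error it prefers steering right, and vice versa. In the full system the same loop would run against the Gazebo/robot/task environment stack instead of the stub.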