In this project, we use the Summit XL robot to navigate the Willow Garage environment, implementing all functionality from scratch in ROS. The project fulfills the following requirements:
- Being able to control the robot using the keyboard.
- Incorporating and fusing the two laser scanners and the odometry readings.
- Mapping the environment using known poses.
- Mapping the environment using unknown poses (SLAM).
roslaunch summit_xl_sim_bringup summit_xls_complete.launch
roslaunch rb/code/src/launch/project.launch
We have used the following packages:
- ira_laser_tools to merge the two laser scanners into a single scan.
- tty and termios to read keyboard input from the terminal.
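As a sketch of how tty and termios can be used for keyboard control, with key bindings that are our assumption rather than the project's actual ones:

```python
import sys
import termios
import tty

# Hypothetical key bindings: key -> (linear.x, angular.z).
# The actual bindings in the control node may differ.
KEY_BINDINGS = {
    'w': (0.5, 0.0),    # forward
    's': (-0.5, 0.0),   # backward
    'a': (0.0, 0.5),    # rotate left
    'd': (0.0, -0.5),   # rotate right
    ' ': (0.0, 0.0),    # stop
}

def get_key():
    """Read one character from the terminal in raw mode, then restore settings."""
    fd = sys.stdin.fileno()
    old_settings = termios.tcgetattr(fd)
    try:
        tty.setraw(fd)
        return sys.stdin.read(1)
    finally:
        termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)

def key_to_velocity(key):
    """Map a key press to a (linear, angular) velocity pair, or None if unbound."""
    return KEY_BINDINGS.get(key)
```

In the node, the returned pair would be packed into a geometry_msgs/Twist and published on the velocity topic.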
For this task we use an efficient implementation of the reflection sensor model, based on counting hits and misses.
We keep two replicas of the map, one counting hits and one counting misses, and compute the probability that a cell is occupied as the number of hits divided by the total number of observations (hits + misses).
We multiply this probability by 100 because it lies between 0 and 1, while the occupancy grid expects values between 0 and 100.
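The per-cell computation can be sketched as follows (function and variable names are ours, not the project's), using NumPy arrays of per-cell counts:

```python
import numpy as np

def occupancy_grid(hits, misses):
    """Reflection-model occupancy: per cell, P(occ) = hits / (hits + misses),
    scaled to 0-100 for the occupancy grid; unobserved cells are marked -1."""
    total = hits + misses
    grid = np.full(hits.shape, -1, dtype=np.int8)   # -1 = unknown
    seen = total > 0
    grid[seen] = np.round(100.0 * hits[seen] / total[seen]).astype(np.int8)
    return grid
```

Marking never-observed cells as -1 matches the unknown-cell convention of ROS occupancy grids.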
So, how do we get the hits and misses? We use the laser scanner readings to update the map: we take the robot's position, compute the end point of each laser beam, and run Bresenham's line algorithm (in a more efficient, vectorized implementation) to find all the cells the beam passes through. We then update the two maps accordingly.
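A vectorized traversal in this spirit can be sketched as follows; it samples each beam densely with NumPy instead of stepping cell by cell (a DDA-style approximation of Bresenham, not the project's exact code):

```python
import numpy as np

def ray_cells(x0, y0, x1, y1):
    """Return the grid cells crossed by a beam from (x0, y0) to (x1, y1),
    in order along the ray, using vectorized sampling instead of a loop."""
    # One sample per unit step along the longer axis guarantees no cell is skipped.
    n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    cells = np.stack([xs, ys], axis=1)
    # Drop duplicate cells while preserving the order along the ray.
    _, idx = np.unique(cells, axis=0, return_index=True)
    return cells[np.sort(idx)]
```

In the update, every cell along the beam except the last would count as a miss, and the end-point cell as a hit.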
For this task we use the same mapping function as for the known poses, but we estimate the robot's pose with a Kalman filter, using the laser scanner readings as measurements. The pose is estimated as follows:
Get the robot's velocity to use it in the prediction step:
We can ignore the
Prediction step:
Correction step:
Where
Kalman gain:
We have used k = 0.6 to model the uncertainty in the laser scanner readings.
Since the motion and measurement equations are linear, we did not need the Jacobian matrix or an extended Kalman filter.
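The steps above can be written in standard Kalman-filter form. The notation below is our assumption, consistent with the description (a linear, velocity-driven motion model and a direct pose measurement), not the project's exact equations:

```latex
\begin{aligned}
&\text{Prediction:} & \bar{\mu}_t &= \mu_{t-1} + v_t \,\Delta t, &
  \bar{\Sigma}_t &= \Sigma_{t-1} + Q_t \\
&\text{Kalman gain:} & K_t &= \bar{\Sigma}_t \,(\bar{\Sigma}_t + R_t)^{-1}, &
  R_t &= k\,I \quad (k = 0.6) \\
&\text{Correction:} & \mu_t &= \bar{\mu}_t + K_t\,(z_t - \bar{\mu}_t), &
  \Sigma_t &= (I - K_t)\,\bar{\Sigma}_t
\end{aligned}
```

where $\mu_t$ is the pose estimate, $v_t$ the velocity used in the prediction step, $z_t$ the pose measurement derived from the laser scans, and $Q_t$, $R_t$ the motion and measurement noise covariances.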
- control: This node is responsible for controlling the robot's movement using the keyboard.
- combiner: This node is responsible for fusing the robot's odometry and laser scanner readings and publishing them to the map nodes.
- map_known: This node is responsible for mapping the environment using known poses.
- map_unknown: This node is responsible for mapping the environment using unknown poses (SLAM).
- /cmd_vel: used to control the robot's movement.
- /robot/robotnik_base_control/odom: used to publish the robot's odometry readings.
- /sensor_multi: used to publish the robot's merged laser scanner readings.
- /sensors: used to publish the robot's combined odometry and laser scanner readings.
- /occupancy_map: used to publish the map of the environment.
- CombinedSensor: This message is used to publish the robot's odometry and laser scanner readings.
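A plausible definition of such a message, with field names that are our assumption rather than the project's actual file:

```
# CombinedSensor.msg (hypothetical field layout)
std_msgs/Header header
nav_msgs/Odometry odom
sensor_msgs/LaserScan scan
```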
roslaunch summit_xl_sim_bringup summit_xls_complete.launch
rostopic list | grep cmd_vel
Note: you should be running the simulator, since that is what publishes to this topic.
rosrun teleop_twist_keyboard teleop_twist_keyboard.py /cmd_vel:=/robot/robotnik_base_control/cmd_vel
rosrun <package_name> <script_name>.py
example:
rosrun sensor sensor.py
catkin_create_pkg <package_name> [depend1] [depend2] [depend3]
mkdir -p <package_name>/scripts
touch <package_name>/scripts/<script_name>.py
example:
catkin_create_pkg sensor rospy
mkdir -p sensor/scripts
touch sensor/scripts/sensor.py
roslaunch ira_laser_tools laserscan_multi_merger.launch
Follow the instructions in the ROS wiki