
Robotics

Model of the world we are working with:

image

KM&EKF - Kalman Filter & Extended Kalman Filter

  • This folder contains various Kalman filter implementations for a TurtleBot, simulated in the ROS environment. To try the programs, the "kalman_simulation" package is needed and should be placed as a ROS package.

-> KFBasicMotion.py; This program uses a Kalman filter for a basic TurtleBot whose motion is estimated from the relative position between the beacon and the robot frame. The robot uses spherical wheels and only moves in a straight line. Result:

image

To try this, launch exercise_1_spherical_wheels.launch in ROS.

->KFWheelDiameter.py; Given the previous model, now the wheel diameter is unknown and we want to estimate it with a KF. Result:

image

To try this, launch exercise_2_wheel_diameter.launch in ROS.

->EKF_For_Turtlebot.py; Given the first model, we change the wheels so that instead of being spherical they are circular. This enables two types of motion, rotation and translation, which is a realistic model of the TurtleBot. Result:

image

To try this, launch exercise_4_normal_wheels.launch in ROS.

->EKF-Distance&Direction.py; This model only uses the simple translation as in the first model. However, now it is not based on beacon positions but on measured distance and direction, which adds more noise. Result:

image

To try this, launch exercise_5_sensor_model.launch in ROS.

SLAM - Simultaneous Localization and Mapping

  • This folder contains various SLAM implementations for a TurtleBot, simulated in the ROS environment. To try the programs, the "slam_simulation" and "turtlebot3_costum" packages are needed and should be placed as ROS packages.

->EKF_dead_reckoning.py; EKF with dead reckoning, i.e. without sensor data, so the robot only predicts its own state. This makes the uncertainty increase over time. Result:

image

To try this, launch demo_1_env.launch in ROS.

->SLAM+KF.py; The robot estimates its relative position to the beacons without knowing the beacon positions in advance; this is estimated using a KF. Result:

image

To try this, launch exercise_2_env.launch in ROS.

->SLAM+PF.py; The same problem as in the previous program, but solved by applying a particle filter without resampling. Result:

image

To try this, launch exercise_3_env.launch in ROS.

->SLAM+KF_loop_clousure.py; A more realistic approximation where the robot does not always see all the beacons: it only sees beacons within a 65º field of view in front of it. Implemented using a KF. The measurement noise increases with distance. Result:

image

To try this, launch demo_2_env.launch in ROS.

->SLAM+KF_landmark_association.py; Building on the previous model, now when the robot detects a beacon it does not know which one it is. This implementation adds a new beacon estimate for every measurement (using a KF); beacons are then merged once they are decided to be the same one. Result:

image *Without merging beacons.

image image *When merging beacons.

To try this, launch exercise_5_env.launch in ROS.

->PF+EKF_Fast_SLAM.py; This implementation uses the same model as before, but now the PF is only used to sample the robot's trajectory. Then, for each sampled trajectory, the beacon positions are estimated with a KF. Result:

image

To try this, launch exercise_6a_env.launch in ROS.

->Fast_SLAM+l_a+resampling.py; Same implementation as before, with the addition of landmark association and resampling. Result:

image

To try this, launch exercise_6bc_env.launch in ROS.

DP - Dynamic Programming

  • Now we work with a model of a maze where the robot needs to find its way to the reward (the star). The robot can only move N, S, E, W. All the solutions are implemented with dynamic programming. This was simulated using the ROS environment. To try the programs, the "dp_simulation" and "turtlebot3_costum" packages are needed and should be placed as ROS packages.

image

->DP_Simple_Maze.py; Simplest model, where the robot can change direction without any cost. Result:

image

To try this, launch exercise_1_env.launch in ROS.

->DP_head&inertia.py; In this model the reward function changes so that turning is more costly than going forward, because more force needs to be applied for the robot to turn. Result:

image

To try this, launch exercise_2a_env.launch in ROS.

->DP_Simple_Maze+Uncertainty.py; This model is based on the simple one, but uncertainty is added to the movement. In practice the terrain is not perfect and one wheel may spin faster than the other, so even though we think we are moving in a straight line, we may be drifting in another direction. Result:

image

To try this, launch exercise_2b_env.launch in ROS.

->DP_Simple_Maze+Various_Rewards.py; In this case, instead of considering a single reward, there is more than one. Result:

image

To try this, launch exercise_2b_env.launch in ROS.

Kinematics of Robotic Arms

  • Jupyter notebook on direct and inverse robot kinematics using the Denavit-Hartenberg framework.

About

Overview of algorithms used in robotics
