Demos for the Fetch platform at the RAIL Lab @ Georgia Tech
- Building a navigation map
- Quickly launching the Fetch navigation stack
- Automatically navigating to the Fetch charging station and docking
- Pointing the Fetch's head at the nearest detected face
- Following a user around using a combination of face tracking (RGBD camera) and leg tracking (laser scan)
Prerequisites:
- gmapping:
sudo apt install ros-indigo-gmapping
Running:
roslaunch fetch_demos build_map.launch
When done mapping, save the map with:
rosrun map_server map_saver -f <map_directory/map_name>
The map saver will create two files in the specified map_directory. The directory must already exist. The two files are map_name.pgm and map_name.yaml. The first is the map in a .pgm image format, and the second is a YAML file that specifies metadata for the image.
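For reference, a typical map_name.yaml written by map_saver looks like the following (the values shown are common map_server defaults, not taken from this repo):

```yaml
image: map_name.pgm          # map image; a relative path is resolved against this file's directory
resolution: 0.050000         # meters per pixel
origin: [-10.0, -10.0, 0.0]  # [x, y, yaw] of the lower-left map pixel in the map frame
negate: 0                    # if 1, invert the free/occupied color semantics
occupied_thresh: 0.65        # pixels with occupancy probability above this are occupied
free_thresh: 0.196           # pixels with occupancy probability below this are free
```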
The fetch_navigation
package provides its own map-building launch file, but it uses a SLAM library that is prone to crashing. The map-building launch file here uses a different SLAM library, gmapping
, which avoids this issue.
Running:
roslaunch fetch_demos navigation.launch
Brings up essential services and begins publishing the map from maps/
, the laser scanner data, and other topics. This is a prerequisite for most Fetch tasks. Ensure that the map image path given in map.yaml
is correct, or else the map server will exit early without an error message.
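That silent early exit usually means the map server could not open the image: a relative image path in the YAML is resolved against the directory containing the YAML file itself. A minimal sketch of that resolution rule, for checking a map before launch (resolve_map_image and image_exists are illustrative helpers, not part of this repo):

```python
import os

def resolve_map_image(yaml_path, image_field):
    """Resolve the 'image' field of a map YAML the way the map server does:
    absolute paths pass through; relative paths are taken relative to the
    directory that contains the YAML file."""
    if os.path.isabs(image_field):
        return image_field
    return os.path.join(os.path.dirname(os.path.abspath(yaml_path)), image_field)

def image_exists(yaml_path, image_field):
    """True if the map image the YAML points at actually exists on disk."""
    return os.path.isfile(resolve_map_image(yaml_path, image_field))
```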
Prerequisites:
sudo apt-get install ros-indigo-fetch-auto-dock
roslaunch fetch_auto_dock auto_dock.launch
- If using a new map file, be sure to set the correct dock position statically within
src/dock.py
- For more information, see the Fetch documentation
Running:
rosrun fetch_demos dock.py
- Navigates to the dock position, begins the auto-dock procedure, and then exits
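One common pattern for this kind of docking step, sketched here as an assumption rather than the repo's actual code, is to navigate to a standoff pose in front of the statically configured dock pose, facing the dock, and let the auto-dock controller close the final gap (approach_pose is a hypothetical helper):

```python
import math

def approach_pose(dock_x, dock_y, dock_yaw, standoff=0.5):
    """Return an (x, y, yaw) pose `standoff` meters in front of the dock,
    oriented to face back toward it. dock_yaw is the direction the dock
    faces; the standoff distance here is an illustrative guess."""
    x = dock_x + standoff * math.cos(dock_yaw)
    y = dock_y + standoff * math.sin(dock_yaw)
    # Face the opposite direction, i.e. toward the dock (wrapped to [-pi, pi]).
    yaw = math.atan2(-math.sin(dock_yaw), -math.cos(dock_yaw))
    return x, y, yaw
```

The resulting pose would then be sent as a navigation goal (e.g. via move_base) before triggering auto-docking.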
Prerequisites:
- rail_people_detector
. We use the Willow Garage face detector by default.
Running:
roslaunch fetch_demos track_face.launch
- Runs the face detection library and continuously points the Fetch's head at the nearest detected face in frame
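Geometrically, pointing the head at a face reduces to a pan and a tilt angle toward the detected 3D point. A minimal sketch of that computation, assuming a head-centered frame with x forward, y left, z up (the actual demo most likely sends goals to the Fetch head controller rather than computing joint angles directly; pan_tilt_to_point is an illustrative helper):

```python
import math

def pan_tilt_to_point(x, y, z):
    """Pan/tilt angles (radians) that aim the head at point (x, y, z) in a
    head-centered frame: x forward, y left, z up. Positive tilt looks down."""
    pan = math.atan2(y, x)
    tilt = math.atan2(-z, math.hypot(x, y))
    return pan, tilt
```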
Prerequisites:
- Fork of wg-perception's people library: the common ROS packages installed with
sudo apt install ros-indigo-people
are missing the launch file that we use on the robot.
- Fork of the leg_tracker library on the fetch branch: download to the robot's catkin workspace and build with catkin_make
- To just track legs without following people around, run
roslaunch leg_tracker joint_leg_tracker.launch
- Documentation of leg_tracker: https://github.com/petschekr/leg_tracker
- This fork configures the leg tracking software for the Fetch's laser scanner and supports OpenCV 2, which is required by other packages on the Fetch platform. The upstream code requires OpenCV 3, which ROS Indigo on the Fetch does not currently support.
The above prerequisites can be obtained by cloning rail_people_detection
and initializing the submodules with git submodule init
and git submodule update
.
Running:
roslaunch fetch_demos follow.launch
The follow demo tracks faces using the RGBD camera, matches the nearest detected face to a pair of legs, and then tracks those legs and moves the robot to follow them. The laser scanner's wider field of view allows for reliable person tracking even with multiple people in the same area or with quick movement. The leg tracking library will also predict leg positions when a person passes behind an obstacle temporarily. The follow code implements filtered collision detection that differentiates between obstacles in the robot's path and legs that should be tracked.
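The follow behavior itself can be approximated by a simple proportional controller on the tracked person's position in the robot's base frame, stopping short of the person to keep a gap. A sketch under that assumption (follow_cmd, its gains, and its distances are illustrative, not the repo's actual code):

```python
import math

def follow_cmd(px, py, stop_dist=1.0, k_lin=0.5, k_ang=1.0,
               max_lin=0.8, max_ang=1.0):
    """Return a (linear, angular) velocity command toward a tracked person
    at (px, py) in the robot's base frame (x forward, y left). The robot
    slows to a stop once it is within stop_dist of the person."""
    dist = math.hypot(px, py)
    heading = math.atan2(py, px)
    # Proportional control with saturation; no reverse motion.
    lin = max(0.0, min(max_lin, k_lin * (dist - stop_dist)))
    ang = max(-max_ang, min(max_ang, k_ang * heading))
    return lin, ang
```

In the real demo these commands would be gated by the collision filter described above before being published to the base.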