# Person Follower ROS

A ROS-based person detection and following system for TurtleBot3 robots, using YOLO object detection and laser-scanner-based obstacle avoidance.
This project implements an autonomous person-following robot that can:
- Detect and track persons using YOLOv8 computer vision
- Follow detected persons while maintaining safe distance
- Avoid obstacles using laser scanner data
- Navigate around obstacles and return to following behavior
- Search for persons when none are detected
## Hardware Requirements

- TurtleBot3 Waffle Pi robot
- Raspberry Pi camera module
- LiDAR sensor (for obstacle detection)
- Network connection between the robot and the control computer
## Software Requirements

- Ubuntu 20.04 (recommended)
- ROS Noetic
- Python 3.8+
## Installation

1. Install the required ROS and Python dependencies:

```bash
# ROS packages
sudo apt install ros-noetic-image-transport ros-noetic-cv-bridge ros-noetic-vision-opencv
sudo apt install python3-opencv libopencv-dev ros-noetic-image-proc

# Python packages
sudo apt install python-is-python3
sudo apt install python3-pip
pip install ultralytics
```
2. Clone this repository into your catkin workspace and make the scripts executable:

```bash
cd ~/catkin_ws/src/
git clone https://github.com/C-H-E-N-Zhihao/Person_Follower_ROS
mv Person_Follower_ROS my_following_person_package
cd my_following_person_package/scripts
chmod +x *.py
```

3. Build the workspace:

```bash
cd ~/catkin_ws
catkin_make
source devel/setup.bash
```

4. Set the TurtleBot3 model environment variable:

```bash
echo "export TURTLEBOT3_MODEL=waffle_pi" >> ~/.bashrc
source ~/.bashrc
```
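To confirm the `ultralytics` install before wiring it into ROS, a quick standalone check (this downloads the stock `yolov8n.pt` nano weights on first use):

```python
# Quick check that the YOLOv8 dependency works; "yolov8n.pt" is the stock
# nano model, fetched automatically by ultralytics on first use.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
print(model.names[0])  # should print "person" (COCO class 0)
```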
## Usage

The system requires multiple terminals. Follow these steps in order:

1. On the control computer, start the ROS master:

```bash
roscore
```

2. SSH into the robot and launch the robot base:

```bash
ssh <robot_ip>
roslaunch turtlebot3_bringup turtlebot3_robot.launch
```

3. In a new terminal on the robot (or suspend the previous launch with Ctrl+Z and resume it in the background with `bg`), launch the camera:

```bash
roslaunch turtlebot3_bringup turtlebot3_rpicamera.launch
```

4. On the control computer, launch the remote bringup:

```bash
roslaunch turtlebot3_bringup turtlebot3_remote.launch
```

5. In another terminal on the control computer, start the person-follower node:

```bash
rosrun my_following_person_package main_person_follower.py
```
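Before starting the follower, it can help to verify that the camera feed is actually arriving. The snippet below is a hypothetical helper for that check, not part of the package:

```python
# Hypothetical sanity check: confirm the camera topic is publishing
# before launching the follower node.
import rospy
from sensor_msgs.msg import Image

rospy.init_node("camera_check")
msg = rospy.wait_for_message("/camera/image", Image, timeout=10)
print("camera OK: %dx%d" % (msg.width, msg.height))
```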
## Components

**PersonDetector** (`person_detector.py`)
- Uses YOLOv8 for real-time person detection
- Processes the camera feed and provides person location data (see the sketch below)
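A minimal sketch of how such a detector node can look, assuming the `/camera/image` topic from this README and the stock `yolov8n.pt` weights; class and variable names are illustrative, not the actual `person_detector.py`:

```python
#!/usr/bin/env python3
# Sketch of a YOLOv8-based person detector node. Topic name and overall
# structure follow this README; details may differ from person_detector.py.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge
from ultralytics import YOLO

class PersonDetectorSketch:
    def __init__(self):
        self.bridge = CvBridge()
        self.model = YOLO("yolov8n.pt")  # smallest YOLOv8 model
        rospy.Subscriber("/camera/image", Image, self.image_cb, queue_size=1)

    def image_cb(self, msg):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        results = self.model(frame, verbose=False)[0]
        for box in results.boxes:
            if int(box.cls) == 0:  # COCO class 0 is "person"
                x1, y1, x2, y2 = box.xyxy[0].tolist()
                center_x = (x1 + x2) / 2.0  # horizontal offset used for steering
                rospy.loginfo("person at x=%.0f", center_x)

if __name__ == "__main__":
    rospy.init_node("person_detector_sketch")
    PersonDetectorSketch()
    rospy.spin()
```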
**ObstacleDetector** (`obstacle_detector.py`)
- Processes LiDAR data for obstacle detection
- Monitors a front sector (345°-15°) and a left sector (75°-105°), as sketched below
- Configurable detection distances
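A sketch of the sector logic, assuming one LaserScan reading per degree with index 0 at the robot's front (the TurtleBot3 LDS default); the threshold and names are illustrative:

```python
# Sector-based obstacle checks on a LaserScan. Note that the front
# sector wraps around the 0° index.
import rospy
from sensor_msgs.msg import LaserScan

FRONT = list(range(345, 360)) + list(range(0, 16))  # 345°-15°, wraps past 0°
LEFT = list(range(75, 106))                         # 75°-105°
OBSTACLE_DIST = 0.4  # metres; configurable in the real node

def scan_cb(scan):
    def sector_min(indices):
        # Ignore invalid readings (inf, or outside the sensor's range limits).
        vals = [scan.ranges[i] for i in indices
                if scan.range_min < scan.ranges[i] < scan.range_max]
        return min(vals) if vals else float("inf")

    front_blocked = sector_min(FRONT) < OBSTACLE_DIST
    left_blocked = sector_min(LEFT) < OBSTACLE_DIST
    rospy.loginfo("front blocked: %s, left blocked: %s", front_blocked, left_blocked)

rospy.init_node("obstacle_detector_sketch")
rospy.Subscriber("/scan", LaserScan, scan_cb, queue_size=1)
rospy.spin()
```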
**RobotController** (`robot_controller.py`)
- State machine implementation with four states (transition logic sketched below):
  - SEARCHING: rotating to find a person
  - FOLLOWING: moving toward the detected person
  - AVOIDING: executing an obstacle avoidance maneuver
  - WAITING: stationary when the person is centered and an obstacle is ahead
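A hedged sketch of the transition logic implied by the state descriptions above; the actual conditions in `robot_controller.py` may differ:

```python
# Four-state controller transitions, as a pure function of the sensor flags.
from enum import Enum

class State(Enum):
    SEARCHING = 1
    FOLLOWING = 2
    AVOIDING = 3
    WAITING = 4

def next_state(person_visible, person_centered, obstacle_ahead):
    if not person_visible:
        return State.SEARCHING
    if obstacle_ahead:
        # Wait if the person is straight ahead, otherwise go around.
        return State.WAITING if person_centered else State.AVOIDING
    return State.FOLLOWING
```

Keeping the transitions in one pure function like this makes the controller easy to unit-test without a robot.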
**OdomHelper** (`odom_helper.py`)
- Provides odometry-based rotation control
- Enables precise angular movements for obstacle avoidance (see the sketch below)
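A minimal sketch of odometry-based rotation, suitable for turns up to 180°: it extracts yaw from `/odom` and publishes to `/cmd_vel` until the target angle is reached. Names and speeds are illustrative, not the actual `odom_helper.py` API:

```python
# Odometry-based rotation sketch: turn until the yaw from /odom has
# changed by the requested angle, then stop.
import math
import rospy
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Twist
from tf.transformations import euler_from_quaternion

class OdomRotateSketch:
    def __init__(self):
        self.yaw = None
        self.pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/odom", Odometry, self.odom_cb, queue_size=1)

    def odom_cb(self, msg):
        q = msg.pose.pose.orientation
        _, _, self.yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])

    def rotate(self, angle, speed=0.5):
        """Rotate by `angle` radians (|angle| <= pi for this simple unwrap)."""
        while self.yaw is None and not rospy.is_shutdown():
            rospy.sleep(0.05)  # wait for the first odometry message
        start, turned = self.yaw, 0.0
        rate = rospy.Rate(20)
        twist = Twist()
        twist.angular.z = speed if angle > 0 else -speed
        while abs(turned) < abs(angle) and not rospy.is_shutdown():
            self.pub.publish(twist)
            # Unwrap the yaw difference into [-pi, pi)
            turned = math.atan2(math.sin(self.yaw - start), math.cos(self.yaw - start))
            rate.sleep()
        self.pub.publish(Twist())  # stop

if __name__ == "__main__":
    rospy.init_node("odom_rotate_sketch")
    OdomRotateSketch().rotate(math.pi / 2)  # the 90° turn used when avoiding
```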
**MoveKobuki** (`move_robot.py`)
- Low-level movement interface
- Publishes velocity commands to the `/cmd_vel` topic, as sketched below
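A sketch of what this low-level interface can look like; the proportional-steering gain and the normalized offset input are assumptions for illustration, not the actual `move_robot.py` API:

```python
# Low-level /cmd_vel interface sketch: drive forward while steering
# proportionally to the person's horizontal offset in the image.
import rospy
from geometry_msgs.msg import Twist

class MoveSketch:
    def __init__(self):
        self.pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)

    def follow(self, offset, forward_speed=0.15, turn_gain=1.0):
        """offset: person's horizontal position in [-1, 1], 0 = centered."""
        twist = Twist()
        twist.linear.x = forward_speed
        twist.angular.z = -turn_gain * offset  # steer to re-center the person
        self.pub.publish(twist)

    def stop(self):
        self.pub.publish(Twist())  # zero velocities

if __name__ == "__main__":
    rospy.init_node("move_sketch")
    MoveSketch().stop()
```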
## States

- SEARCHING: the robot rotates in place looking for a person
- FOLLOWING: the robot moves forward while adjusting its direction to keep the person centered
- AVOIDING: the robot executes a 90° rotation, moves forward, then rotates back
- WAITING: the robot stops when the person is centered but an obstacle is detected ahead
## Topics

Subscribed:
- `/camera/image` (sensor_msgs/Image): camera feed for person detection
- `/scan` (sensor_msgs/LaserScan): LiDAR data for obstacle detection
- `/odom` (nav_msgs/Odometry): robot odometry for precise rotations

Published:
- `/cmd_vel` (geometry_msgs/Twist): robot velocity commands
## License

This project is licensed under the MIT License; see the LICENSE file for details.