
Recycle Sorting Baxter

ME495: Embedded Systems Final Project (Fall 2020).
Take a look at my portfolio post for more information about the project.

Group Members

  • Chris Aretakis
  • Jake Ketchum
  • Yael Ben Shalom
  • Kailey Smith
  • Mingqing Yuan

Project Overview

In this project, we programmed a Rethink Baxter robot to sort the bottles and cans placed in front of it and drop them into separate recycling bins. We used computer vision to detect and locate the randomly placed bottles and cans, and MoveIt to control the robot.

The Baxter Robot (Scott's Bot) in action:

View the full demo here.

A Google slides presentation summarizing the project can also be viewed here.

User Guide

Dependencies Installation

  1. Install the Intel RealSense packages. For installation instructions, visit the librealsense page.

  2. Install the Sphinx documentation packages:

sudo apt install python3-sphinx
sudo apt install ros-noetic-rosdoc-lite

Quickstart Guide

  • In the /src directory of your catkin workspace, download can_sort.rosinstall
  • While still in the /src directory, run wstool init to initialize the workspace
  • To merge the workspace with the rosinstall file, run wstool merge can_sort.rosinstall
  • To ensure you have the latest version of all packages, run wstool update
  • Source and catkin_make your workspace
  • To use the Baxter, plug its Ethernet cord into your computer
  • To connect to the Baxter, move to your workspace directory and run source src/can_sort/Baxter_setup.bash
    • To ensure you have connected successfully, run ping baxter.local
  • Enable the robot using rosrun baxter_tools enable_robot.py -e
    • If you are having issues connecting to Baxter, please follow the full instructions outlined here.
  • To start sorting, run rosrun baxter_interface joint_trajectory_action_server.py &
  • Then run roslaunch can_sort baxter_move.launch object_detection:=true
  • Watch in awe as the Baxter sorts and recycles cans and bottles!

System Architecture and High-Level Concepts

Nodes

object_detection.py - Object detection node. The node uses the pyrealsense2 library to get an image from the RealSense depth camera, and uses the computer vision library OpenCV to detect and classify the objects in the image. This node classifies 3 different types of objects:

  1. Calibration points - 2 points at known positions, used to calibrate the image and convert the units from pixels to meters (painted in green).
  2. Cans - Unknown number of cans in unknown positions (painted in red).
  3. Bottles - Unknown number of bottles in unknown positions (painted in blue).

After the classification process, the node returns a painted image (with circles of different radii and colors for each object), and saves the locations and classifications of the cans and bottles to the Board.srv service (returned when the service is called).

To call the Board.srv service, open a new terminal and run rosservice call /board_state "use_real: true" when the object_detection node is running.
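
As a rough illustration of the approach, the color-based classification amounts to HSV thresholding followed by contour detection. This is only a minimal sketch; the color bounds, area threshold, and function name here are illustrative and are not the node's actual values:

import cv2
import numpy as np

# Illustrative HSV ranges for the three painted colors (not the node's real thresholds).
COLOR_BOUNDS = {
    "calibration": (np.array([40, 80, 80]), np.array([80, 255, 255])),    # green
    "can": (np.array([0, 120, 80]), np.array([10, 255, 255])),            # red
    "bottle": (np.array([100, 120, 80]), np.array([130, 255, 255])),      # blue
}

def detect_objects(bgr_image):
    """Return (label, pixel center) pairs and paint a circle around each detection."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    detections = []
    for label, (lower, upper) in COLOR_BOUNDS.items():
        mask = cv2.inRange(hsv, lower, upper)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for contour in contours:
            if cv2.contourArea(contour) < 200:  # ignore small speckles
                continue
            (x, y), radius = cv2.minEnclosingCircle(contour)
            detections.append((label, (x, y)))
            cv2.circle(bgr_image, (int(x), int(y)), int(radius), (255, 255, 255), 2)
    return detections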

The painted image:

recycle.py - Robot operation node. This node uses ROS's MoveIt! library for motion planning and manipulation (mainly the compute_cartesian_path command). After initializing the move group commander for Baxter's right arm, the node adds a table to the planning scene to ensure that the robot does not collide with the table. A proxy for the DisplayImage.srv service is also created. The arm then moves to a predetermined position out of the camera's field of view and calls the Board.srv service, which returns a list with the position and classification of every bottle and can in the camera's view. In the last portion of the setup, the robot moves to a predetermined orientation to ensure smooth and predictable motion of the arm (this configuration was determined through testing).
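
A condensed sketch of that setup using the moveit_commander Python interface is shown below; the node name, table dimensions, and pose values are illustrative, not the values used in recycle.py:

import sys
import copy
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("recycle_sketch")

scene = moveit_commander.PlanningSceneInterface()
group = moveit_commander.MoveGroupCommander("right_arm")

# Add the table to the planning scene so planned paths avoid it.
table = PoseStamped()
table.header.frame_id = group.get_planning_frame()
table.pose.position.x = 0.8     # illustrative table location
table.pose.position.z = -0.3
table.pose.orientation.w = 1.0
scene.add_box("table", table, size=(1.2, 2.0, 0.05))

# Plan a straight-line (cartesian) move, e.g. descending toward a perch height.
waypoints = []
target = copy.deepcopy(group.get_current_pose().pose)
target.position.z -= 0.15
waypoints.append(target)
plan, fraction = group.compute_cartesian_path(waypoints, 0.01, 0.0)
if fraction > 0.9:              # execute only if most of the requested path was planned
    group.execute(plan, wait=True)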

With the objects' locations and classifications known, the robot then works through the list of objects in a loop (a condensed sketch follows the list). The loop functions as follows:

  1. Move to the home position where the robot is safely above all objects.
  2. For the current item in the list, display either the can image or the bottle image, depending on the classification.
  3. Next, move to the object's (x,y) coordinate at a safe z height away. This is the same height for bottles and cans.
  4. Then move down to the appropriate perch height, depending on classification. (For example, the robot arm is positioned farther from the table for bottles, since they are taller than cans.)
  5. Once safely at the perch height, move down so that the object is in between the grippers.
  6. Grasp the object.
  7. Move back up to the "safe" position from step 3.
  8. Move back to the home position. This step was added to ensure predictable behavior of the robot arm.
  9. Depending on the object's classification, move to the appropriate bin. Also, display the recycling image.
  10. Once over the bin, open the grippers and drop the object. Show that the object has been recycled with the bin image.
  11. Repeat for all objects found.
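
Condensed into code, the loop looks roughly like the sketch below. The helper callables (move_to, set_gripper, display) and the object fields are illustrative stand-ins for the node's actual MoveIt moves and service calls:

def sort_objects(objects, home_pose, bins, safe_z, move_to, set_gripper, display):
    """Sketch of the sorting loop; move_to, set_gripper and display are injected helpers."""
    for obj in objects:                                   # list returned by Board.srv
        move_to(home_pose)                                # 1. safe home position
        display("can" if obj.is_can else "bottle")        # 2. show the classification
        move_to((obj.x, obj.y, safe_z))                   # 3. hover at a safe height
        move_to((obj.x, obj.y, obj.perch_height))         # 4. per-class perch height
        move_to((obj.x, obj.y, obj.grasp_height))         # 5. descend around the object
        set_gripper(closed=True)                          # 6. grasp
        move_to((obj.x, obj.y, safe_z))                   # 7. retreat to the safe height
        move_to(home_pose)                                # 8. back home for predictability
        move_to(bins["can" if obj.is_can else "bottle"])  # 9. move over the matching bin
        display("recycle")
        set_gripper(closed=False)                         # 10. drop and show the bin image
        display("bin")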

The robot motion:

disp_img.py - Displays an image on the Baxter's head display. The node converts the image to an imgmsg (using OpenCV), and publishes the message to the /robot/xdisplay display through the DisplayImage.srv service.
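
A minimal sketch of that publish step, using cv_bridge for the conversion (the node name and image path are illustrative):

import cv2
import rospy
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

rospy.init_node("disp_img_sketch")
pub = rospy.Publisher("/robot/xdisplay", Image, queue_size=1, latch=True)

img = cv2.imread("images/can.png")                    # illustrative image path
msg = CvBridge().cv2_to_imgmsg(img, encoding="bgr8")  # OpenCV image -> sensor_msgs/Image
rospy.sleep(1.0)                                      # let the latched publisher connect
pub.publish(msg)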

calibration.py - Python library responsible for the calibration of the camera's output (converting a point from pixels to meters). The script takes the coordinates of 2 calibration points in pixels and converts them to meters using linearization. The library returns the linearization constants:

  • x(meters) = m * x(pixels) + n
  • y(meters) = a * y(pixels) + b

The object_detection node uses this library to convert the points found on the image from pixels to meters.
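
The constants follow directly from the two calibration points; a minimal sketch (the function name is illustrative, not the library's actual API):

def linearization_constants(p1_px, p1_m, p2_px, p2_m):
    """Return (m, n, a, b) such that x_m = m * x_px + n and y_m = a * y_px + b."""
    m = (p2_m[0] - p1_m[0]) / (p2_px[0] - p1_px[0])
    n = p1_m[0] - m * p1_px[0]
    a = (p2_m[1] - p1_m[1]) / (p2_px[1] - p1_px[1])
    b = p1_m[1] - a * p1_px[1]
    return m, n, a, b

# Example with the calibration points listed in the test section below:
m, n, a, b = linearization_constants([722.5, 937.5], [0.55, -0.50],
                                     [403.5, 417.5], [0.80, -0.10])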

Launch Files

baxter_move.launch - This launch file launches both the recycle node and the object_detection node. The recycle node runs along with the joint_trajectory_server, which is required to plan trajectories in MoveIt. This launch file also includes two files (baxter_grippers.launch and trajectory_execution.launch) from baxter_moveit_config, which is part of the MoveIt! Robots package.

camera.launch - This launch file launches the object_detection node (including loading the parameter server). This launch file is for testing and debugging purposes only, because it does not activate the entire system. To activate the entire system, run the baxter_move.launch launch file.

Test Files

test_calibration.py - A test file that tests the Python calibration library. The test checks the calibration accuracy using 2 points with a known pixel-to-meter conversion:

  1. point1 = [722.5, 937.5] (pixels) = [0.55, -0.50] (meters)
  2. point2 = [403.5, 417.5] (pixels) = [0.80, -0.10] (meters)

For those points, the pixel values were measured from the image and the meter values were measured physically in the lab using the Baxter.
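
As an illustration, such a test can simply check that both known points map back to their measured positions; this sketch assumes the hypothetical linearization_constants helper from the calibration sketch above:

import unittest

class TestCalibration(unittest.TestCase):
    def test_known_points(self):
        p1_px, p1_m = [722.5, 937.5], [0.55, -0.50]
        p2_px, p2_m = [403.5, 417.5], [0.80, -0.10]
        m, n, a, b = linearization_constants(p1_px, p1_m, p2_px, p2_m)  # hypothetical helper
        # Both calibration points should map back to their measured positions (in meters).
        self.assertAlmostEqual(m * p1_px[0] + n, p1_m[0], places=3)
        self.assertAlmostEqual(a * p1_px[1] + b, p1_m[1], places=3)
        self.assertAlmostEqual(m * p2_px[0] + n, p2_m[0], places=3)
        self.assertAlmostEqual(a * p2_px[1] + b, p2_m[1], places=3)

if __name__ == "__main__":
    unittest.main()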

To run the test file when running catkin_make, run catkin_make run_tests from the root of the workspace.

Algorithms and Libraries Used

pyrealsense2 - Library for accessing Intel RealSense™ cameras.

OpenCV - Computer vision library. Used to detect and classify the items in the image.

MoveIt - Motion planning library. Used to generate high degree-of-freedom trajectories to grab the objects and drop them into the bins.

MoveIt! Robots - Collection of MoveIt configurations for supported robots. It contains the baxter_moveit_config package, which is required for the operation of this project.

JTAS - Joint Trajectory Action Server. Enables executing complex trajectories using software built-in to Baxter.

Machine Learning Perception Pipeline

In order to make the package compatible with the machine learning perception pipeline suggested in my Objects Recognition and Classification project, I added an adjusted recycle node (recycle_ML.py) and an adjusted baxter_move launch file (baxter_move_ML.launch).
To launch the package with the new detection method, follow the instructions in the Objects Recognition and Classification package (download the dataset, create and train the model, etc.) and launch the baxter_move_ML.launch launch file.

Physical Equipment

  1. Baxter Rethink robot
  2. RealSense D435i depth camera
  3. Table
  4. 2 trash bins
  5. Cans and bottles
  6. 3D Printed Bottle/Can Gripper Attachments (see CAD image and drawing below):
  • This gripper was designed to work with most plastic bottles and aluminum cans.
  • The grippers are printed in PLA plastic, although most rigid 3D-printing materials would be appropriate.
  • The grippers are designed to have 1/4"-thick soft foam adhered to their inner radius, allowing the foam to conform to the bottle and provide extra grip.
  • Make sure to check the shrinkage of your 3D printer and scale the post cutouts appropriately so the attachments can attach to Baxter's stock Gripper Posts.
  • The CAD part and drawing files for the 3D Printed Gripper Attachment for Baxter can be found in the CAD Folder of this repository.
  • They can also be exported from OnShape by following this link: CAD and Drawing

Future Work

  1. Use machine learning algorithms for better object classification - Currently, we can only classify specific shapes of bottles and cans. By using machine learning methods, we could classify different types of bottles and cans under the same label and drop them into the same bin.
  2. Add the ability to detect more types of items - Currently, we can only detect cans and bottles. In the future, we want to be able to detect and recycle a variety of objects, such as paper or different types of plastic. To do so, we need to improve our computer vision node (to detect those items) and improve our gripper.
  3. Implement the 3D-Printed Grippers - We did not end up having time to use the 3D-Printed Grippers in our testing. The stock grippers did not provide a very secure grip, so we had to slow down the robot's motion to prevent the bottle from flying out. Using the 3D-Printed Grippers with foam padding would allow for a more secure grip, which would let us speed the robot back up. It would also allow grabbing a greater variety of cylindrical objects (by the body) due to the foam's conformability.
  4. Use the Baxter's hand camera to improve gripping accuracy - We are currently relying solely on the realsense camera to determine the the object location and grasping positions. However, a more robust solution would be to use the baxter camera and ensure that a) an object is being grasped and b) the object being being grasped is actually in the center of the grippers. With the hand camera video, we would be able to adjust and center the center the gripper to ensure there are no object-gripper collisions.