The goal of this project was to assemble markers and caps through sequences of pick, place, press, and sort operations, inspired by the use of robots in manufacturing and industry. The project used a RealSense camera to detect the colors of the markers and MoveIt manipulation commands to actuate the robot. Franka-specific actions were used to grip caps and markers during movement. The project was orchestrated by a state machine built with the ROS package SMACH. The state machine sorted colors by hue based on data from the RealSense perception subsystem, then drove the manipulation subsystem to pick, place, and press caps and markers in the assembly tray.

The launch files custom to this project are launch_robot.launch (the master launch file) and planning_sim.launch (the simulation launch file for motion planning). Ensure that the panda_moveit_config and franka_control packages are sourced and configured as detailed here: https://nu-msr.github.io/me495_site/franka.html
The robot is run by issuing the following sequence of commands. To start, the user must connect to the Franka robot and enable ROS by activating the FCI. First ensure a roscore is running and that your ROS_MASTER_URI is configured appropriately, then:
- SSH into the robot:
ssh -oSendEnv=ROS_MASTER_URI student@station
- Launch the franka_ros controller with the following command in the SSH terminal:
roslaunch franka_control franka_control.launch robot_ip:=robot.franka.de
- Raise the collision limits for the robot by running the following node and then calling its service:
rosrun group4 limit_set
rosservice call /coll_hi
- Launch the robot manipulation and vision nodes:
roslaunch group4 launch_robot.launch
- Run the state machine to initiate the pick-and-place sequence:
rosrun group4 TaskMaster
Download the group4.rosinstall file and install the external packages used for this project:
cd group4
vcs import src <path_to_rosinstall_file.rosinstall>
Open the html folder inside the doc directory in a web browser to view the Sphinx documentation.
In case you want a faster startup method, a bash script is provided that creates the terminals necessary for startup.
- Start by running:
bash sourceWS
- This sources the workspace directory in your .bashrc
- Then run:
bash start
- This starts every process needed to get the project running. If it doesn't, refer to the section above.
- When you are done with the start script, run:
bash wsSourceRemove
- This removes the workspace from your .bashrc file; run it only once.
The manipulation package relies on several nodes in order to function:
- manipulation_cap: provides low-level position and orientation sensing services, along with error recovery, movements, and gripper grasping
- manipulation_macro_a: provides position movement services for image captures using the RealSense
- manipulation_press: provides a pressing service to cap the markers
- manipulation_local: provides manipulation services for moving between trays
- manipulation_pnp: provides pick-and-place services between the feed and assembly trays
- debug_manipulation: logs the external forces experienced by the robot
- plan_scene: provides a planning scene for simulation-based motion planning in MoveIt
- limit_set: provides services to be used with the franka_control launch file, launched prior to MoveIt; it allows the user to reconfigure the collision limits on the robot
Simulation with RViz can be run with the following command:
roslaunch group4 planning_sim.launch
Manipulation also relies on a Python package called manipulation, which provides translational, array-position, and verification utilities. A scene.yaml file specifies parameters for the plan_scene node and for the main manipulation movement scenes elsewhere in the project.
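For orientation, a planning-scene parameter file of this kind typically lists named objects with poses and dimensions. The snippet below is purely illustrative; the keys and values are assumptions, not the contents of this project's actual scene.yaml.

```yaml
# Illustrative only: names, keys, and values are assumptions
feed_tray:
  position: [0.45, -0.30, 0.05]   # x, y, z in meters, robot base frame
  size: [0.20, 0.30, 0.02]        # box dimensions in meters
assembly_tray:
  position: [0.45, 0.30, 0.05]
  size: [0.20, 0.30, 0.02]
```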
- Run the following command in a terminal to install the OpenCV dependency:
pip3 install opencv-python
- All the computer vision algorithms are contained in the vision Python package. Functions in the package can be called in a node with import <package name>.<script name>, for example:
import vision.vision1
- sample_capture.py: a helper Python script to capture images using the RealSense D435i RGBD camera
- Connect the RealSense camera to your laptop
- Run the script in a terminal:
python3 sample_capture.py
- Press 'a' to capture and save an image and 'q' to quit the image window
- hsv_slider.py: a helper Python script to find an appropriate HSV range for color detection
- Add the path of the image to frame = cv.imread() to read the image
- Run the script in a terminal:
python3 hsv_slider.py
- A window with the original image and a window with the HSV image and slider bars will appear
- Adjust the HSV slider bars to find an appropriate range
- vision1.py: a Python script to detect contours and return a list of hue values
- For testing purposes, an image can be loaded by setting the path in image = cv.imread()
- Run the script in a terminal:
python3 vision1.py
- A processed image with contours and a list of hue values will be returned
The node that uses this library is called vision_bridge.
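As a rough illustration of the hue-based color detection idea, the sketch below converts an RGB pixel to a hue angle and buckets it into a coarse color name. This is not the project's actual vision code; the function names and hue ranges are assumptions. (Note that OpenCV reports hue on a 0-179 scale, half the conventional 0-360 degrees.)

```python
import colorsys

def rgb_to_hue_deg(r, g, b):
    """Convert 0-255 RGB values to a hue angle in degrees (0-360)."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0

def classify_hue(hue_deg):
    """Bucket a hue angle into a coarse color name (illustrative ranges)."""
    if hue_deg < 15 or hue_deg >= 345:
        return "red"
    if hue_deg < 45:
        return "orange"
    if hue_deg < 75:
        return "yellow"
    if hue_deg < 165:
        return "green"
    if hue_deg < 255:
        return "blue"
    return "purple"

print(classify_hue(rgb_to_hue_deg(255, 0, 0)))  # -> red
print(classify_hue(rgb_to_hue_deg(0, 200, 0)))  # -> green
print(classify_hue(rgb_to_hue_deg(0, 0, 255)))  # -> blue
```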
- vision_bridge node: publishes a stream of ROS Images and implements a capture service that returns a list of H values of the detected markers and/or caps from an image
- Run rosservice call /capture and specify the tray_location to run the service:
- tray_location 1: assembly location
- tray_location 2: markers location
- tray_location 3: caps location
- If you are interested in editing or changing the behavior, we encourage you to take a look at the SMACH tutorials at http://wiki.ros.org/smach. The state machine iterates through a series of states (Standby, Caps, Markers, genMatch, setTarget, Assemble). It relies on a manager Python package, provided in the source, for sorting and matching.
- Run the following command in a terminal to install the SMACH dependencies:
sudo apt-get install ros-noetic-smach ros-noetic-smach-ros ros-noetic-executive-smach
- To run TaskMaster, simply run:
rosrun group4 TaskMaster
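The state flow above can be sketched as a plain-Python loop. This is a conceptual illustration only, not the actual SMACH implementation; the transition order is an assumption based on the state names listed above, and real SMACH states carry outcomes and user data.

```python
# Conceptual sketch of the TaskMaster state flow (assumed transition order,
# not the project's real SMACH state machine).
TRANSITIONS = {
    "Standby":   "Caps",       # wait for a start trigger
    "Caps":      "Markers",    # image the caps tray
    "Markers":   "genMatch",   # image the markers tray
    "genMatch":  "setTarget",  # match caps to markers by hue
    "setTarget": "Assemble",   # choose the next pick/place target
    "Assemble":  "Standby",    # pick, place, press; then loop
}

def run_one_cycle(start="Standby"):
    """Step through one full loop of states, returning the visit order."""
    visited, state = [], start
    while True:
        visited.append(state)
        state = TRANSITIONS[state]
        if state == start:
            return visited

print(run_one_cycle())
# -> ['Standby', 'Caps', 'Markers', 'genMatch', 'setTarget', 'Assemble']
```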
The project employs a series of unit tests on the manager python package to verify its matching, sorting and destination-setting functionality.
Run the tests with:
catkin_make run_tests
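For a sense of what hue-based sorting and matching involves, the sketch below sorts detected items by hue and greedily pairs each cap with the closest-hue marker. The function names, data layout, and tolerance are assumptions for illustration, not the manager package's real API.

```python
# Illustrative hue-based sorting and cap-to-marker matching
# (names and tolerance are assumptions, not the manager package's API).
def sort_by_hue(items):
    """Sort (name, hue) pairs by hue value."""
    return sorted(items, key=lambda item: item[1])

def match_caps_to_markers(caps, markers, tol=15):
    """Greedily pair each cap with the closest-hue marker within tol degrees."""
    pairs, remaining = [], list(markers)
    for cap_name, cap_hue in sort_by_hue(caps):
        best = min(remaining, key=lambda m: abs(m[1] - cap_hue), default=None)
        if best is not None and abs(best[1] - cap_hue) <= tol:
            pairs.append((cap_name, best[0]))
            remaining.remove(best)
    return pairs

caps = [("cap_blue", 120), ("cap_red", 2)]
markers = [("marker_red", 5), ("marker_blue", 118)]
print(match_caps_to_markers(caps, markers))
# -> [('cap_red', 'marker_red'), ('cap_blue', 'marker_blue')]
```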
- The user interface of processed images and Franka arm visualizations on Rviz: https://youtu.be/oCTd5CoBUqM
- The side view of the video record of Franka arm assembling the markers (3.0X faster): https://youtu.be/m37ZtrH2SsE
Contributors: Keaton Griffith, Kojo Welbeck, Bhagyesh Agresar, Ian Kennedy, Jiasen Zheng