Controlling LBR IIWA incorporating kinect #154

Open
usman2k opened this issue Dec 19, 2017 · 10 comments

usman2k commented Dec 19, 2017

I need to do a small task with the KUKA LBR iiwa using a Kinect 3D camera: I need to command the manipulator via the Kinect. I know this involves hand-eye calibration, but how can I make an interface between the Kinect and the manipulator using C++ code?


songtao43467226 commented Dec 19, 2017 via email

usman2k (author) commented Dec 19, 2017

Thanks for your kind response, but I am talking about C++ code.

ahundt (owner) commented Dec 19, 2017

I'm assuming you know how to program in C++ here, but you will need to combine two libraries in a single program.

Have you ever used either V-REP or ROS?

ahundt changed the title from “I need to do small task with LBR IIWA Kuka and using kinect 3d camera. I need to command manipulator by kinect. I know it is a hand eye calibration but how can i make interference between kinect and manipulator by using C++ codes.” to “Controlling LBR IIWA incorporating kinect” on Dec 19, 2017
usman2k (author) commented Dec 20, 2017

Well, I want to do my task without ROS. I am familiar with V-REP. Which two libraries? I guess one is OpenCV?

ahundt (owner) commented Dec 20, 2017

Sure, GRL is designed to be used without ROS and that is easy to set up, but it will take a lot more work to integrate Kinect data accurately without ROS. The difficulty won't be making the KUKA move but getting good data from the Kinect that tells the KUKA the right place to go. To command robots accurately in absolute position/orientation you need to calibrate the depth camera. Additionally, hand-eye calibration of the robot cannot be done until the Kinect camera has been calibrated first.

What kind of vision are you doing? Are you developing an application or doing research?

How accurate do you need your motion to be, and how will you detect the objects? Note that the Kinect is fairly low resolution, and Kinect v2 point clouds can be inaccurate and fail when there are reflections.

I need more details to give you a sound recommendation, but if you just need to grab Kinect frames in C++ you can use https://github.com/stevenlovegrove/Pangolin, and if you just need to command the robot in joint space you can use the KukaDriver class. If you need to command the robot in Cartesian space, the best way to do that will again depend on your problem.

usman2k (author) commented Dec 20, 2017

First of all, I need to thank you for your kind response.
I am doing research on a material-handling task and I have two queries. First, how can I learn C++ coding for robotics, specifically for the KUKA iiwa? Second, I don't know how to make an interface between the manipulator and the camera (how can I merge the object-detection code with the manipulator code?). I have already done the Kinect (object detection) task.

ahundt (owner) commented Dec 20, 2017

So you have a KUKA and Kinect object-detection code?

What language is the detection code in?

What kind of material, and do you have a gripper?

usman2k (author) commented Dec 20, 2017

I have object-detection code in C++ but I don't have code for the robot.
My object is a powder-coated car part (black). Yes, I have a gripper.

ahundt (owner) commented Dec 21, 2017

Well, the easiest way to get started is to build and install GRL and try running the V-REP simulation. That will let you drive the robot to a series of locations. Once that is working and you are able to drive the robot to a series of Cartesian positions, you can go to the next step.

This can be done easily with V-REP's integrated Lua API (for a first test).

After that, you can decide whether you want to write a C++ V-REP plugin for the vision sensor or write something custom. Based on what you describe, I think a simple plugin would be the way to get things working quickly.

http://www.coppeliarobotics.com/helpFiles/index.html

Are you able to install GRL?

usman2k (author) commented Dec 23, 2017

Thanks again for your kind response.
Yes, I can install GRL by following the GRL manual. But is it possible to first build the simulation in V-REP using C++ code (with the GRL library) without connecting to the actual manipulator?
