
How can I get RealWorldPoints? #15

Open
iamlegolas opened this issue Dec 5, 2016 · 7 comments


@iamlegolas commented Dec 5, 2016

"So given real world PVector realWorldPoint, the projected coordinate is accessible via:

PVector projectedPoint = kpt.convertKinectToProjector(realWorldPoint);"

Referring to this, what exactly do you mean by "realWorldPoint"? Is it simply a 3D point from the Kinect depth stream, or something else? How do I get such points?
Also, if possible, could you tell me how to get such a 3D point in Python?

@genekogan

@2075 @kulbhushan-chand @dattasaurabh82

@genekogan (Owner) commented

realWorldPoint is a sampled point from the depth map given by the Kinect, for example a point sampled from the mesh of a person detected by the Kinect. The example programs demonstrate different ways of obtaining such points, depending on what you are trying to do.

This library is written for SimpleOpenNI in Java; doing it in Python is beyond the scope of this repository.
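
For concreteness, here is a minimal SimpleOpenNI sketch that samples one such point from the depth map (the center pixel is an arbitrary choice, adapt it to your own sketch):

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  context = new SimpleOpenNI(this);
  context.enableDepth();
}

void draw() {
  context.update();
  // depthMapRealWorld() returns one PVector per depth pixel (row-major),
  // with x/y/z in real-world coordinates
  PVector[] depthMap = context.depthMapRealWorld();
  int x = context.depthWidth() / 2;   // sample the center pixel, as an example
  int y = context.depthHeight() / 2;
  PVector realWorldPoint = depthMap[y * context.depthWidth() + x];
  println(realWorldPoint);
}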

@iamlegolas (Author) commented

@genekogan

PVector realWorldPoint = kpt.getDepthMapAt(startX, startY);
PVector projectedPointUno = kpt.convertKinectToProjector(realWorldPoint);

startX and startY are X and Y coordinates from the Kinect image.
projectedPointUno should be the same point from the Kinect shown on the projector, but it doesn't seem to be working like that. Can you help me figure out what's wrong? The calibration is pretty solid.

Please get back soon!

@genekogan (Owner) commented

See the calibration example. There is a method there called getDepthMapAt:

PVector getDepthMapAt(int x, int y) {
  // look up the real-world point for depth pixel (x, y);
  // the depth map is stored in row-major order
  PVector dm = depthMap[kinect.depthWidth() * y + x];
  return new PVector(dm.x, dm.y, dm.z);
}

depthMap is an array of PVectors with the entire depth map:

SimpleOpenNI kinect;                               // initialized in setup()
PVector[] depthMap = kinect.depthMapRealWorld();   // refresh each frame after kinect.update()

Then convertKinectToProjector should work.
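
Roughly, the pieces fit together in draw() like this (a sketch only, assuming kinect and kpt are initialized and calibrated as in the examples; the pixel coordinates are arbitrary):

void draw() {
  kinect.update();
  depthMap = kinect.depthMapRealWorld();   // refresh the cached depth map each frame
  kpt.setDepthMapRealWorld(depthMap);      // hand the same frame to the toolkit

  PVector realWorldPoint = getDepthMapAt(320, 240);   // any pixel inside the depth image
  PVector projectedPoint = kpt.convertKinectToProjector(realWorldPoint);
  println(projectedPoint);
}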

@iamlegolas (Author) commented Feb 12, 2017

kpt.setDepthMapRealWorld(kinect.depthMapRealWorld());

PVector realWorldPoint = kpt.getDepthMapAt(startX, startY);
PVector projectedPointUno = kpt.convertKinectToProjector(realWorldPoint);

startX and startY are X and Y coordinates from the Kinect image.
projectedPointUno should be the same point from the Kinect image shown on the projector, but it doesn't seem to be working like that. What's wrong? :S

I've done everything exactly as you've mentioned it and as it is in all the tutorials.

The x and y that we're passing to the getDepthMapAt() function are coordinates from the Kinect image, right?

@genekogan

@iamlegolas (Author) commented Feb 15, 2017

import controlP5.*;
import gab.opencv.*;
import SimpleOpenNI.*;
import KinectProjectorToolkit.*;

// For Kinect's RGB stream + the KPT:
OpenCV opencv;
SimpleOpenNI context;
KinectProjectorToolkit kpt;

PImage currKinectFrameRGB;
int startX, startY, endX, endY;

void setup() {
  size(100, 100, P2D);

  // Setting up the Kinect:
  context = new SimpleOpenNI(this);
  if (!context.isInit()) {
    println("Can't initialize SimpleOpenNI, camera not connected properly.");
    exit();
    return;
  }
  context.setMirror(false);
  context.enableDepth();
  context.enableRGB();
  context.alternativeViewPointDepthToImage();

  opencv = new OpenCV(this, context.depthWidth(), context.depthHeight()); // What's this for?

  // Setting up the KPT:
  kpt = new KinectProjectorToolkit(this, context.depthWidth(), context.depthHeight());
  kpt.loadCalibration("calibration.txt");
  kpt.setContourSmoothness(4);
}

void draw() {
  context.update();
  kpt.setDepthMapRealWorld(context.depthMapRealWorld());

  PVector realWorldPoint = kpt.getDepthMapAt(207, 222);
  PVector projectedPointUno = kpt.convertKinectToProjector(realWorldPoint);
  realWorldPoint = kpt.getDepthMapAt(293, 312);
  PVector projectedPointDos = kpt.convertKinectToProjector(realWorldPoint);

  print("ProjPoint1: ");
  println(projectedPointUno);
  print("ProjPoint2: ");
  println(projectedPointDos);
}

@genekogan This is the very simple program that I'm trying to get running. I know the calibration is working because I've tested it in the CALIBRATION.pde file. Please take some time to have a look.

Regards

@iamlegolas (Author) commented

@genekogan Please spare a few minutes of your time and read my previous comment. Thanks.

@genekogan (Owner) commented

startX and startY in your example should be coordinates in the Kinect depth image (usually between 0-640 in x and 0-480 in y, corresponding to the size of the depth image). Those two lines will translate this to a projector coordinate whose components are between 0 and 1 (agnostic to screen size). You still need to multiply these by your projector width and projector height if you haven't done that already. Look at projectedPointUno... is it between 0 and 1 on both axes? If so, try multiplying it by the projector width and height.
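
For example (a sketch only; the resolution values are placeholders for your actual projector):

// convertKinectToProjector returns coordinates normalized to the range 0-1;
// scale them by the projector resolution before drawing
int projectorWidth = 1280;     // placeholder: your projector's width in pixels
int projectorHeight = 800;     // placeholder: your projector's height in pixels

PVector projectedPoint = kpt.convertKinectToProjector(realWorldPoint);
float px = projectedPoint.x * projectorWidth;
float py = projectedPoint.y * projectorHeight;
ellipse(px, py, 10, 10);       // mark the projected point on the projector screen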
