
Question: Convert Azure Kinect Joint Point to 3D pixel point #146

Open
LeebHelmut opened this issue May 6, 2021 · 1 comment

@LeebHelmut

Hello,
until now we have been using the ToColorSpace method of IDepthDeviceCalibrationInfo to get the 2D pixel points of a body.

To further investigate what a person is doing, we would also like to get the depth information. Is there a way to convert the body joint points (e.g. body.Joints[joint].Pose.Origin) to 3D points?

Many thanks in advance

@sandrist
Contributor

sandrist commented May 6, 2021

Here are a couple of different approaches I can think of:

  1. Get the color space pixel point as before, but then pass that to the ProjectToCameraSpace(IDepthDeviceCalibrationInfo, Point2D, Shared<DepthImage>) method in CalibrationExtensions.cs.

  2. Compute a Line3D from the camera position through the 3D joint position, and intersect it with the depth mesh using the IntersectLineWithDepthMesh(ICameraIntrinsics depthIntrinsics, Line3D line, DepthImage depthImage) method in CalibrationExtensions.cs.
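For intuition, here is a minimal sketch (in Python, not the \psi API) of the math underlying approach 1: back-projecting a 2D pixel plus a metric depth value into a 3D camera-space point. It assumes an ideal pinhole model with hypothetical intrinsics fx, fy, cx, cy and no lens distortion; the real ProjectToCameraSpace method additionally handles calibration and distortion for you.

```python
import numpy as np

def unproject(pixel, depth_m, fx, fy, cx, cy):
    """Back-project a 2D pixel plus a metric depth into a 3D camera-space
    point, assuming an ideal pinhole model (no lens distortion)."""
    u, v = pixel
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# A pixel at the principal point maps straight down the optical axis:
# unproject((320, 240), 1.0, 600.0, 600.0, 320.0, 240.0) -> [0, 0, 1]
```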

Hopefully one of those can work for you!
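The line-vs-depth-mesh intersection in approach 2 can be sketched as a simple ray march in depth-camera space: step along the ray, project each sample into the depth image, and stop at the first sample that lies at or behind the measured surface. This is an illustrative simplification (the function name, step size, and pinhole intrinsics are assumptions), not the actual IntersectLineWithDepthMesh implementation.

```python
import numpy as np

def intersect_ray_with_depth(origin, direction, depth, fx, fy, cx, cy,
                             max_range=5.0, step=0.01):
    """March along a 3D ray in camera space; at each sample, project into
    the depth image (meters) and return the first point at or behind the
    measured surface, or None if the ray never hits it."""
    direction = direction / np.linalg.norm(direction)
    t = step
    while t < max_range:
        p = origin + t * direction
        if p[2] > 0:
            # Pinhole projection of the sample into depth-image pixels.
            u = int(round(fx * p[0] / p[2] + cx))
            v = int(round(fy * p[1] / p[2] + cy))
            if 0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]:
                d = depth[v, u]
                if d > 0 and p[2] >= d:
                    return p  # first sample at/behind the depth surface
        t += step
    return None
```

The step size trades accuracy for speed; a refinement pass (bisecting between the last two samples) would sharpen the hit point if needed.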
