
Render depth map from the mesh #490

Open
mleotta opened this issue May 28, 2021 · 3 comments
Comments

@mleotta
Member

mleotta commented May 28, 2021

No description provided.

@mleotta mleotta created this issue from a note in Manual Calibration tools (To do) May 28, 2021
@mleotta
Member Author

mleotta commented May 28, 2021

Add an algorithm to the compute menu to render the mesh to a depth image using the active camera models. Use existing code in kwiver to render mesh to depth image.
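For reference, the core of such a renderer is a z-buffered triangle rasterizer. Below is a minimal NumPy sketch of that idea for a single triangle already projected to pixel coordinates; this is not KWIVER's actual implementation, and the function and argument names are illustrative:

```python
import numpy as np

def render_triangle_depth(tri, width, height, depth=None):
    """Rasterize one projected triangle into a depth image.

    tri: (3, 3) array of vertices as (x_pixel, y_pixel, z_depth).
    A minimal sketch of the z-buffer idea; real mesh rendering code
    also handles clipping, perspective-correct interpolation, and
    iterates over all triangles in the mesh.
    """
    if depth is None:
        depth = np.full((height, width), np.inf)
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri

    # Twice the signed area of the triangle; degenerate triangles are skipped.
    area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    if area == 0:
        return depth

    # Pixel-center sample grid.
    px, py = np.meshgrid(np.arange(width) + 0.5, np.arange(height) + 0.5)

    # Barycentric coordinates of every pixel center.
    w0 = ((x1 - px) * (y2 - py) - (x2 - px) * (y1 - py)) / area
    w1 = ((x2 - px) * (y0 - py) - (x0 - px) * (y2 - py)) / area
    w2 = 1.0 - w0 - w1
    inside = (w0 >= 0) & (w1 >= 0) & (w2 >= 0)

    # Interpolate depth and keep the nearest surface per pixel.
    z = w0 * z0 + w1 * z1 + w2 * z2
    np.minimum(depth, np.where(inside, z, np.inf), out=depth)
    return depth
```

Pixels not covered by any triangle stay at infinity, which callers can remap to an invalid-depth sentinel.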

@borovik135
Collaborator

borovik135 commented May 28, 2021

Note that this could be closely related to the mesh rendering in camera view task, as the depth field can be derived directly from the z-buffer of the mesh rendering, distorted or not. In theory, we should be able to kill two birds with one stone: cam view mesh rendering & depth view.
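One practical wrinkle when deriving depth from a z-buffer: the stored window-space values are non-linear and must be linearized back to metric depth. A small sketch, assuming a standard OpenGL perspective projection with window depth in [0, 1] (names are illustrative):

```python
import numpy as np

def zbuffer_to_depth(z_window, near, far):
    """Convert window-space z-buffer values in [0, 1] to metric eye-space depth.

    Assumes a standard OpenGL perspective projection. Works on scalars
    or NumPy arrays (e.g. a full z-buffer read back from the GPU).
    """
    z_ndc = 2.0 * z_window - 1.0  # window [0, 1] -> NDC [-1, 1]
    return 2.0 * near * far / (far + near - z_ndc * (far - near))
```

For example, `zbuffer_to_depth(0.0, near, far)` recovers the near-plane distance and `zbuffer_to_depth(1.0, near, far)` the far-plane distance.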

@mleotta
Member Author

mleotta commented May 29, 2021

In theory this is true, but in practice there may be benefits to separating this processing.

  1. The z-buffer in OpenGL is quantized to integer values. This is good enough for depth testing, but may not be precise enough for some other applications.

  2. Some KWIVER users with constrained hardware may not have a GPU. KWIVER already has CPU code to render depth maps from meshes at double precision. TeleSculptor users would generally have a GPU, but making this a KWIVER algorithm opens it up to new use cases. We could have both CPU and GPU (via VTK) algorithms to trade off speed and accuracy.

  3. I'd like the rendering of the mesh in the camera view to be as dynamic as possible. That is, re-rendering the current field of view at the native screen resolution as you pan and zoom, including parts of the scene that fall outside of the image bounds. This means the z-buffer used for rendering may not always cover the full image at the same resolution as the original image.
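To illustrate point 1, here is a small sketch that simulates a quantized integer z-buffer and measures the resulting depth resolution. Because the z-buffer stores a reciprocal-like mapping of depth, the quantization step grows rapidly with distance from the camera; the near/far/bit-depth constants are illustrative:

```python
NEAR, FAR, BITS = 1.0, 1000.0, 24          # illustrative camera and buffer settings
LEVELS = 2 ** BITS - 1                     # number of quantization steps

def eye_depth(z_window):
    """Window-space z in [0, 1] -> metric eye-space depth (OpenGL convention)."""
    z_ndc = 2.0 * z_window - 1.0
    return 2.0 * NEAR * FAR / (FAR + NEAR - z_ndc * (FAR - NEAR))

def window_z(depth):
    """Inverse of eye_depth: metric depth -> window-space z in [0, 1]."""
    z_ndc = (FAR + NEAR - 2.0 * NEAR * FAR / depth) / (FAR - NEAR)
    return 0.5 * (z_ndc + 1.0)

def depth_resolution(depth):
    """Metric spacing between adjacent quantized z-buffer levels near `depth`."""
    level = round(window_z(depth) * LEVELS)  # simulate integer z-buffer storage
    return eye_depth((level + 1) / LEVELS) - eye_depth(level / LEVELS)

for d in (2.0, 100.0, 900.0):
    print(f"depth {d:6.1f} m -> resolution {depth_resolution(d):.2e} m")
```

Near the camera the steps are tiny, but toward the far plane they can reach centimeters, which is the precision loss point 1 is about; double-precision CPU rendering avoids it entirely.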
