
Obtain normal from NanoVDB's Level Set intersection (ZeroCrossing) #1772

Open
marcardenas opened this issue Mar 5, 2024 · 2 comments

@marcardenas
Hi all!

Is there any way to compute the normal after intersecting a ray with a level set grid in NanoVDB? OpenVDB's LevelSetRayIntersector provides a method called intersectsWS() which, among other return values, gives you the normal vector.

Thanks

@marcardenas marcardenas changed the title Obtain normal from NanoVDB's Level Set intersection Obtain normal from NanoVDB's Level Set intersection (ZeroCrossing) Mar 5, 2024
@Idclip Idclip added the nanovdb label Mar 5, 2024

w0utert commented May 21, 2024

I'm looking for the same thing, and I'm not having a lot of success trying to calculate this myself.

I was hoping I could use a BoxStencil or GradStencil to get the gradient: compute the world-space intersection (auto ws = ray.eye() + t0 * ray.dir()), transform it to index-space coordinates (auto is = grid->worldToIndex(ws)), then query the stencil (stencil.moveTo(is); gradient = stencil.gradient()). But GradStencil produces a compiler error (no instance of function template "nanovdb::math::RoundDown" matches the argument list), and with BoxStencil I do get a gradient, but it seems to be quantized to the index-space grid. Maybe I'm missing something about how these stencils work?

Edit:
It seems I'm probably not even understanding how nanovdb::math::ZeroCrossing works. When plotting just the z-coordinate obtained from ray.eye() + t0 * ray.dir(), I see the values are quantized to the voxel grid. That is, if I have a 100x100x100 VDB and ray trace it at 1024x1024 resolution using fractional x,y coordinates, the intersection appears to be computed with nearest-neighbor sampling rather than interpolated in any way. Is this intentional?

Edit2:
Indeed, it seems the t0 returned by nanovdb::math::ZeroCrossing is always an integer value (at least for my VDB, which has voxel size = 1.0f), which makes it pretty much useless on its own for tracing a level set.


w0utert commented May 22, 2024

After studying the NanoVDB documentation & code a little bit more I think I now understand how these are supposed to work.

@marcardenas
As I understand it, the NanoVDB intersector (nanovdb::math::ZeroCrossing) is just the HDDA, and it will get you an intersection t0 at voxel resolution (the documentation mentions this: t0 and v will receive the intersection time and grid value of the voxel behind the intersection). This means that to get the actual surface intersection time/coordinates, and subsequently the interpolated gradient, you will have to perform some iterative approximation along the ray, sampling distance values to determine the iteration step. I use a 2x2x2 sampler created by nanovdb::math::createSampler<1> for sampling distance values and gradients.

I had some success with this approach using a very simple iterative approximation that steps along the ray, in the direction towards the surface as determined by the distance value (negative distance = reverse direction, positive distance = forward direction), halving the step size at each iteration, up to at most 8 steps. The resulting coordinates I then use to sample the gradient. This seems to work well for interior points but not for edges of the level set, as nanovdb::math::ZeroCrossing also returns true for rays that hit a zero-crossing voxel but do not actually hit the level-set surface (at least that's my analysis right now).

So, all in all, it seems the NanoVDB ray-tracing interfaces are lower level than OpenVDB's (which makes sense, I guess), and you will have to do more work to get what you want. It would be nice if the documentation mentioned some of this, and even better if there were an example of a very simple ray tracer that just casts primary rays and shades intersections using ray.dot(normal).
