Hi! Is there going to be support for handling unbounded scenes via background models with re-parameterized inputs in the near future? If not, would it be possible for you to provide some tips on how to best implement that?
Hi @Salarios77, agreed that this would indeed be a useful thing to have.
We should implement that, but until we do, here is a quick workaround: extend the current tracer with one that queries a background model:
1. Add a new forward_bg() function to your neural field (a BaseNeuralField subclass). This function should accept, e.g., a tensor of rays and return a predicted background color.
2. Duplicate PackedRFTracer to create a customized tracer of your own. You can find it here.
3. Remove these lines, which delete the rays tensor, as you'll need it shortly as input.
4. These lines determine the blending with the background color. Instead of assuming a fixed color (here, 1.0 - alpha is implicitly multiplied by a "white" background), query the forward_bg function of your nef instance for a predicted background color.
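The steps above can be sketched roughly as follows in plain PyTorch. Note this is an illustrative shape only: the names forward_bg and blend_with_background, and the toy direction-to-color background, are assumptions for this sketch, not the actual wisp API, and a real background model would likely re-parameterize the ray inputs and run them through an MLP.

```python
import torch

class NeuralFieldWithBackground:
    """Stand-in for a BaseNeuralField subclass extended with a
    background head (step 1 above). Hypothetical, not the wisp API."""

    def forward_bg(self, ray_origins: torch.Tensor,
                   ray_dirs: torch.Tensor) -> torch.Tensor:
        # Toy background: map each (unit) ray direction to a color in [0, 1].
        # A real model would re-parameterize the input (e.g. inverted-sphere
        # coordinates for unbounded scenes) and feed it to a small network.
        return 0.5 * (ray_dirs + 1.0)  # shape (N, 3)

def blend_with_background(rgb: torch.Tensor, alpha: torch.Tensor,
                          nef: NeuralFieldWithBackground,
                          ray_origins: torch.Tensor,
                          ray_dirs: torch.Tensor) -> torch.Tensor:
    """Step 4 above: instead of blending with a fixed white color,
    i.e. rgb + (1.0 - alpha) * 1.0, query the background model.
    Requires the rays to still be alive at this point (step 3)."""
    bg_color = nef.forward_bg(ray_origins, ray_dirs)  # (N, 3)
    return rgb + (1.0 - alpha) * bg_color
```

Here alpha is the per-ray accumulated opacity, shaped (N, 1) so it broadcasts against the (N, 3) colors; rays the volume never occluded (alpha = 0) show the predicted background directly.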