Hi,

Thanks for your nice work!

Here is a small issue about mirrored video results (left and right are flipped). It appears when rendering the DyNeRF dataset.
I found that the poses returned by `average_poses` and `viewmatrix` in `4DGaussians/scene/neural_3D_dataset_NDC.py` were left-handed.
Should we change these to right-handed instead?
i.e.,
```python
# 4. Compute the x axis
x = normalize(np.cross(y_, z))  # (3)
# 5. Compute the y axis (as z and x are normalized, y is already of norm 1)
y = np.cross(z, x)  # (3)
```
and
```python
m[:3] = np.stack([vec0, vec1, vec2, pos], 1)
```
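For intuition, a quick way to verify the handedness of a camera basis is the determinant of its rotation part: a right-handed basis gives +1, while a left-handed one gives -1, which mirrors rendered images left-right. Below is a minimal sketch of the cross-product order suggested above; the `normalize` helper and the example axis vectors are illustrative, not the repository's exact code.

```python
import numpy as np

def normalize(v):
    # Scale a vector to unit length.
    return v / np.linalg.norm(v)

# Illustrative inputs: a forward axis z and an up hint y_.
z = normalize(np.array([0.0, 0.0, 1.0]))
y_ = np.array([0.0, 1.0, 0.0])

# Right-handed construction: x = y_ x z, then y = z x x.
x = normalize(np.cross(y_, z))
y = np.cross(z, x)

# Stack the axes as columns of the rotation matrix.
R = np.stack([x, y, z], 1)

# det(R) = +1 for a right-handed frame, -1 for a left-handed (mirrored) one.
print(np.linalg.det(R))  # ≈ +1.0
```

Running the same check on the rotation returned by the current `viewmatrix` would show the sign flip that causes the mirrored renders.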
Thank you for your suggestion!
Rendering the validation video does not affect the experimental results in my paper; it is only for visualization. I'll fix it in the next version.