Thanks for the work and the very clean code. I'm running `run_slam.py` on my own RGBD dataset, and while it got off to a good start, at one point tracking seems to have failed because `cam_quad_err` and `cam_trans_err` are NaN.
This kills the script because a singular matrix cannot be inverted:
```
Traceback (most recent call last):
  File "/home/abhishek/Code/Gaussian-SLAM/run_slam.py", line 111, in <module>
  File "/home/abhishek/Code/Gaussian-SLAM/src/entities/gaussian_slam.py", line 155, in run
    opt_dict = self.mapper.map(frame_id, estimate_c2w, gaussian_model, new_submap)
  File "/home/abhishek/Code/Gaussian-SLAM/src/entities/mapper.py", line 221, in map
    "render_settings": get_render_settings(
  File "/home/abhishek/Code/Gaussian-SLAM/src/utils/utils.py", line 93, in get_render_settings
    cam_center = torch.inverse(w2c)[:3, 3]
torch._C._LinAlgError: linalg.inv: The diagonal element 2 is zero, the inversion could not be completed because the input matrix is singular.
```
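As a workaround on my side (a hypothetical sketch, not the repo's actual code; `safe_cam_center` is a name I made up), the inversion in `get_render_settings` could guard against a NaN or singular `w2c` and fail with a clearer message instead of crashing inside `torch.inverse`:

```python
import torch

def safe_cam_center(w2c: torch.Tensor) -> torch.Tensor:
    """Invert a 4x4 world-to-camera matrix and return the camera center,
    raising a descriptive error if the pose has already diverged."""
    # A NaN/Inf pose means tracking has diverged upstream.
    if not torch.isfinite(w2c).all():
        raise ValueError("w2c contains NaN/Inf -- tracking has diverged")
    # A near-zero determinant means the matrix is (numerically) singular.
    if torch.linalg.det(w2c).abs() < 1e-8:
        raise ValueError("w2c is singular and cannot be inverted")
    return torch.inverse(w2c)[:3, 3]

# A valid pose: identity rotation, translation (1, 2, 3).
w2c = torch.eye(4)
w2c[:3, 3] = torch.tensor([1.0, 2.0, 3.0])
print(safe_cam_center(w2c))  # camera center is the negated translation
```

This doesn't fix the divergence, of course, but it turns the opaque `_LinAlgError` into an error that names the real problem.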
I also notice that `color_loss` and `depth_loss` are `0.00000`, not just for the RGBD frame where the NaN occurred but for several frames before it, although those frames didn't have NaN `cam_quad_err` and `cam_trans_err`, which I guess allowed `gslam.run()` to keep going. In the frames leading up to the NaN, the tracking errors are very high (for the frame right before it, `cam_quad_err: 0.53914, cam_trans_err: 3293494.00000`), and looking at the RGBD renders in the `mapping_vis` folder, the scene has completely fallen apart.
Scrolling back up to the logs where `cam_trans_err` still had saner values like 0.33, the deterioration in tracking seems to have started at a frame where the tracking iterations were doubled, presumably to cope with a "higher initial loss":

```
Higher initial loss, increasing num_iters to 400
```
From there on out, `cam_trans_err` kept growing: 0.58735, 0.98641, 1.39726, 2.21376, 3.07547, 3.95557, 4.83956, ...
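For reference, here is the kind of heuristic I'm now using to catch this earlier (`tracking_diverged` is a hypothetical helper of mine, not part of Gaussian-SLAM): flag divergence as soon as `cam_trans_err` grows steadily over a window of frames, rather than waiting for the pose matrix to go NaN.

```python
def tracking_diverged(trans_errs, window=5, growth=1.3):
    """Heuristic divergence check: return True when cam_trans_err has grown
    by more than `growth`x between every pair of consecutive frames in the
    last `window` frames."""
    if len(trans_errs) < window:
        return False
    recent = trans_errs[-window:]
    return all(b > a * growth for a, b in zip(recent, recent[1:]))

# The error sequence from my logs above triggers the check:
errs = [0.33, 0.58735, 0.98641, 1.39726, 2.21376, 3.07547]
print(tracking_diverged(errs))  # True
```

The window size and growth factor are guesses tuned to my logs; the point is just to abort or re-anchor tracking before the scene falls apart completely.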
May I ask what type of scene the data was recorded in (room-scale or larger), and how large the motions between frames generally are? Tracking can fail under large motions. If you haven't already, you could try the config we used for the ScanNet++ dataset and see if there's any difference.