
FusionSLAM-Unifying-Instant-NGP-for-Monocular-SLAM

Dive into cutting-edge FusionSLAM, where SuperPoint, SuperGlue, Neural Depth Estimation, and Instant-NGP converge to elevate monocular SLAM to new levels of precision and performance, redefining mapping, localization, and reconstruction with a single-camera setup.

🏁 Dependencies

  1. NVIDIA Driver (Official Download Link)
  2. CUDA Toolkit (Official Link)
  3. ZED SDK (Official Guide)
  4. OpenCV CUDA (GitHub Guide)
  5. ROS 2 Humble (Official Link)
  6. Miniconda (Official Link)
  7. ZED ROS 2 Wrapper (Official GitHub Link)
  8. RTAB-Map (Official GitHub Link)
  9. RTAB-Map ROS 2 (Official GitHub Link)
  10. PyTorch (Official Link)
  11. Instant-ngp (Official GitHub Link)
  12. SuperPoint (Official GitHub Link)
  13. SuperGlue (Official GitHub Link)
  14. Nlohmann-JSON (Official GitHub Link)

⚙️ Install

  1. Install all non-ROS 2 libraries
  2. Clone all ROS 2 packages into the workspace
  3. Clone this repository into the ROS 2 workspace
  4. colcon build --symlink-install --cmake-args -DRTABMAP_SYNC_MULTI_RGBD=ON -DRTABMAP_SYNC_USER_DATA=ON -DPYTHON_EXECUTABLE=/usr/bin/python3 -DCMAKE_BUILD_TYPE=Release --parallel-workers $(nproc) --executor sequential
  5. source ~/.bashrc or source the ROS 2 workspace
  6. Run python trace.py after changing the path to the SuperPoint weights; this generates a model compatible with your installed version of PyTorch (a sketch of this step follows the list)
  7. Add the libtorch path: export LD_LIBRARY_PATH=../miniconda3/envs/rtabmap/lib/python3.10/site-packages/torch/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}} and ensure the path is correct, otherwise RTAB-Map will not work
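For reference, the sketch below shows what the tracing step in item 6 amounts to, assuming the SuperPointNet class shipped in demo_superpoint.py of the SuperPointPretrainedNetwork repository; the weight paths are placeholders and the repository's own trace.py may differ in its details.

```python
# Hypothetical sketch of the trace.py step: re-trace the pretrained SuperPoint
# network with the locally installed PyTorch so the .pt file matches its version.
# SuperPointNet comes from demo_superpoint.py in magicleap's
# SuperPointPretrainedNetwork repo; make sure that file is on PYTHONPATH.
import torch
from demo_superpoint import SuperPointNet

weights = "../SuperPointPretrainedNetwork/superpoint_v1.pth"  # placeholder path

net = SuperPointNet()
net.load_state_dict(torch.load(weights, map_location="cpu"))
net.eval()

# SuperPoint consumes single-channel grayscale images shaped (N, 1, H, W).
example = torch.rand(1, 1, 480, 640)
traced = torch.jit.trace(net, example)
traced.save("superpoint_v1.pt")  # point superpoint_model_path at this file
```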

⌛️ SLAM

  1. Run SLAM to generate the dataset:
  2. ros2 launch ngp_ros2 slam.launch.py rgb_topic:=/zed2i/zed_node/rgb/image_rect_color depth_topic:=/zed2i/zed_node/depth/depth_registered camera_info_topic:=/zed2i/zed_node/rgb/camera_info odom_topic:=/zed2i/zed_node/odom imu_topic:=/zed2i/zed_node/imu/data scan_cloud_topic:=/zed2i/zed_node/point_cloud/cloud_registered superpoint_model_path:=../SuperPointPretrainedNetwork/superpoint_v1.pt pydetector_path:=../rtabmap_superpoint.py pymatcher_path:=../rtabmap_superglue.py detection_rate:=1 image_path:=../images/ transform_path:=../transforms.json

Your dataset should be created at the image_path, along with transforms.json at the transform_path; the sketch below shows a quick sanity check.
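Before handing the dataset to instant-ngp, it is worth confirming that every image referenced by transforms.json actually exists. This is a minimal sketch assuming the standard instant-ngp dataset layout (a frames array whose file_path entries point at the captured images); dataset_dir is a placeholder for wherever your image_path and transform_path landed.

```python
# Hypothetical sanity check for the dataset produced by the SLAM launch.
# Assumes the standard instant-ngp transforms.json layout (frames[].file_path).
import json
from pathlib import Path

dataset_dir = Path("../")  # placeholder: directory holding transforms.json and the images
transforms = json.loads((dataset_dir / "transforms.json").read_text())

frames = transforms["frames"]
missing = [f["file_path"] for f in frames
           if not (dataset_dir / f["file_path"]).exists()]

print(f"{len(frames)} frames, camera_angle_x = {transforms.get('camera_angle_x')}")
print("all images found" if not missing else f"missing {len(missing)} images: {missing[:5]}")
```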

🖼️ Instant-NGP

  1. cd /instant-ngp/build
  2. Run ./instant-ngp ../PATH, replacing ../PATH with the path to your dataset where image_path and transforms.json are located

⚠️ Note

  1. Ensure the ZED ROS 2 Wrapper is set to run in NEURAL depth mode and the image quality is set to HD1080 for the best renders
  2. This render uses depth supervision; feel free to change RTAB-Map and instant-ngp parameters to generate better renders (see the sketch after this list)
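For reference, the sketch below shows one way depth supervision can be wired into the dataset: instant-ngp's NeRF loader can read per-frame depth maps when transforms.json carries enable_depth_loading, integer_depth_scale, and per-frame depth_path fields. Verify these field names against the instant-ngp version you build; the paths, directory layout, and depth scale here are all assumptions.

```python
# Hypothetical: augment transforms.json with per-frame depth maps so
# instant-ngp can apply depth supervision. Field names follow instant-ngp's
# NeRF loader (enable_depth_loading / integer_depth_scale / depth_path);
# verify them against the instant-ngp version you build.
import json
from pathlib import Path

path = Path("../transforms.json")  # placeholder path
transforms = json.loads(path.read_text())

transforms["enable_depth_loading"] = True
transforms["integer_depth_scale"] = 1.0 / 1000.0  # assumes 16-bit PNGs in millimetres

for frame in transforms["frames"]:
    # Hypothetical layout: a depth map beside each RGB image, images/... -> depth/...
    frame["depth_path"] = frame["file_path"].replace("images/", "depth/")

path.write_text(json.dumps(transforms, indent=2))
```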

🔮 Future Updates

  1. Use the pose graph from RTAB-Map to include loop closures for better renders
  2. Add segmentation masks using a semantic segmentation network
  3. Generate renders using multi-camera SLAM
