ROS 2 wrapper for the ZED SDK
Updated May 24, 2024 - C++
pySLAM-D is a real-time SLAM algorithm for UAV aerial stitching. Includes additional features and refactored code inspired by BU's implementation: https://github.com/armandok/pySLAM-D
RGBD-3DGS-SLAM is a monocular SLAM system leveraging 3D Gaussian Splatting (3DGS) for accurate point cloud and visual odometry estimation. By integrating neural networks, it estimates depth and camera intrinsics from RGB images alone, with optional support for additional camera information and depth maps.
"Visual-Inertial Dataset" (RA-L'21 with ICRA'21): it contains harsh motions for VO/VIO, like pure rotation or fast rotation with various motion types.
RGB-D Encoder SLAM for a Differential-Drive Robot in Dynamic Environments
A bunch of state estimation algorithms
Visual odometry implementation with stereo cameras for localization in autonomous driving
Visual (inertial) odometry of a drone with a monocular camera
An Illumination-Robust Point-Line Visual Odometry (IROS 2023)
Comparing the performance of the DeepVO network under different loss functions
An Invitation to 3D Vision: A Tutorial for Everyone
Deep Learning for Visual-Inertial Odometry
OAKD-series camera development demo
Implementation of the paper "Transformer-based model for monocular visual odometry: a video understanding approach".
COMO: Compact Mapping and Odometry
Underwater Dataset for Visual-Inertial Methods, including data with transitions between multiple refractive media.
Visual Odometry Pipeline using KITTI dataset
Notes and assignments of the Self-Driving Cars Specialization from the University of Toronto on Coursera.
Building a full Visual SLAM pipeline to experiment with different techniques
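Most of the visual odometry pipelines listed above (e.g. the KITTI pipeline or the full Visual SLAM pipeline) share the same backbone: estimate a relative pose between consecutive frames, then chain those transforms into a trajectory. A minimal sketch of that accumulation step in NumPy, where the per-frame relative poses are assumed to come from an upstream estimator (feature matching plus essential-matrix decomposition, deep networks like DeepVO, etc.); the function and variable names here are illustrative, not from any of the listed repositories:

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def accumulate_trajectory(relative_poses):
    """Chain frame-to-frame transforms T_{k-1,k} into camera-to-world poses T_{0,k}."""
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for T_rel in relative_poses:
        pose = pose @ T_rel          # compose each relative motion onto the running pose
        trajectory.append(pose.copy())
    return trajectory

# Hypothetical example: four identical steps of "move 1 m forward, turn 90 degrees"
# trace out a unit square and return to the starting position.
step = se3(rot_z(np.pi / 2), np.array([1.0, 0.0, 0.0]))
traj = accumulate_trajectory([step] * 4)
positions = np.array([T[:3, 3] for T in traj])
```

This composition step is also where monocular drift accumulates: any per-frame rotation or scale error is multiplied into every subsequent pose, which is why the SLAM systems above add loop closure and bundle adjustment on top of it.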