NI-SLAM

Non-iterative SLAM for Warehouse Robots Using Ground Textures

(Figure: system pipeline)

NI-SLAM is a novel non-iterative, ground-texture-based visual SLAM system for warehouse robots, comprising non-iterative visual odometry, loop closure detection, and map reuse. The system provides robust localization in dynamic and large-scale environments using only a monocular camera. In particular, a kernel cross-correlator (KCC) is proposed to estimate the translation and rotation between two images. Unlike traditional motion estimation methods based on feature detection, matching, and nonlinear optimization, it is non-iterative and has a closed-form solution, so it is very efficient and runs in real time while consuming few computing resources. Moreover, because it performs image-level registration, it is more robust and accurate than feature-based methods on ground images with few textures or many repetitive patterns.
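As a rough illustration of why the closed-form formulation is cheap, the sketch below estimates a pure 2-D translation between two ground images with a plain FFT-based cross-correlation, which is the linear-kernel special case of a KCC. It is a minimal NumPy stand-in, not the repository's implementation: the actual system is written in C++ on FFTW and also recovers rotation.

```python
# Minimal sketch (linear-kernel case of a KCC): estimate the 2-D
# translation between two same-sized grayscale images in closed form.
# The real system (C++/FFTW) also handles rotation and kernelization.
import numpy as np

def estimate_translation(img_a: np.ndarray, img_b: np.ndarray):
    """Return the (dy, dx) shift of img_b relative to img_a, plus the correlation map."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    # Cross-correlation computed in the frequency domain: a single
    # closed-form step, no feature matching or iterative optimization.
    corr = np.real(np.fft.ifft2(np.conj(fa) * fb))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts past half the image size wrap around to negative values.
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return (dy, dx), corr

# Sanity check: a circularly shifted copy is recovered exactly.
a = np.random.rand(64, 64)
b = np.roll(a, (3, 5), axis=(0, 1))
shift, _ = estimate_translation(a, b)  # shift == (3, 5)
```

The location of the correlation peak gives the motion estimate in one pass, which is where the "non-iterative" efficiency claim comes from.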

Authors: Kuan Xu, Zheng Yang, Lihua Xie, and Chen Wang

Video: https://youtu.be/SbzFBEgfazQ

Related Papers

Non-iterative SLAM for Warehouse Robots Using Ground Textures, Kuan Xu, Zheng Yang, Lihua Xie, Chen Wang, arXiv preprint arXiv:1710.05502, 2023. PDF

Kernel Cross-Correlator, Chen Wang, Le Zhang, Lihua Xie, Junsong Yuan, AAAI Conference on Artificial Intelligence (AAAI), 2018. PDF

Correlation Flow: Robust Optical Flow Using Kernel Cross-Correlators, Chen Wang, Tete Ji, Thien-Minh Nguyen, Lihua Xie, International Conference on Robotics and Automation (ICRA), 2018. PDF

Non-iterative SLAM, Chen Wang, Junsong Yuan, and Lihua Xie, International Conference on Advanced Robotics (ICAR), 2017. PDF

If you use NI-SLAM or the GeoTracking dataset, please cite:

@article{xu2023non,
  title={Non-iterative SLAM for Warehouse Robots Using Ground Textures},
  author={Xu, Kuan and Yang, Zheng and Xie, Lihua and Wang, Chen},
  journal={arXiv preprint arXiv:1710.05502},
  url={https://arxiv.org/pdf/1710.05502},
  video={https://youtu.be/SbzFBEgfazQ},
  code={https://github.com/sair-lab/ni-slam},
  year={2023},
}

@inproceedings{wang2018kernel,
  title = {Kernel Cross-Correlator},
  author = {Wang, Chen and Zhang, Le and Xie, Lihua and Yuan, Junsong},
  booktitle = {Thirty-Second AAAI Conference on Artificial Intelligence (AAAI)},
  pages = {4179--4186},
  year = {2018},
  url = {https://arxiv.org/pdf/1709.05936},
  code = {https://github.com/sair-lab/KCC},
}

@inproceedings{wang2018correlation,
  title = {Correlation Flow: Robust Optical Flow Using Kernel Cross-Correlators},
  author = {Wang, Chen and Ji, Tete and Nguyen, Thien-Minh and Xie, Lihua},
  booktitle = {2018 International Conference on Robotics and Automation (ICRA)},
  pages = {836--841},
  year = {2018},
  url = {https://arxiv.org/pdf/1802.07078},
  code = {https://github.com/sair-lab/correlation_flow},
}

@inproceedings{wang2017non,
  title = {Non-iterative SLAM},
  author = {Wang, Chen and Yuan, Junsong and Xie, Lihua},
  booktitle = {International Conference on Advanced Robotics (ICAR)},
  pages = {83--90},
  year = {2017},
  organization = {IEEE},
  url = {https://arxiv.org/pdf/1701.05294},
  video = {https://youtu.be/Ed_6wYIKRfs},
  addendum = {Best Paper Award in Robotic Planning},
}

Test Environment

Dependencies

  • OpenCV 4.2
  • Eigen 3
  • Ceres 2.0.0
  • FFTW3
  • ROS Noetic
  • Boost
  • yaml-cpp
  • VTK

Build

    cd ~/catkin_ws/src
    git clone https://github.com/sair-lab/NI-SLAM.git
    cd ../
    catkin_make
    source ~/catkin_ws/devel/setup.bash

Run

Modify the configuration file in configs, then run:

    rosrun ni_slam ni_slam src/NI-SLAM/configs/your_config.yaml

Data

GeoTracking Dataset

(Figure: GeoTracking dataset platform and ground textures)

Our data collection platform is a modified Weston SCOUT robot. The robot is equipped with an IDS uEye monocular camera mounted on its underside, facing downward at a height of 0.1 m above the ground. To ensure constant illumination, a set of LED lights is arranged around the camera. For ground truth, a prism is installed on top of the robot, and its position is tracked by a Leica Nova MS60 MultiStation laser tracker.

We collected data for 10 common ground textures, including 6 $\color{lightblue}{outdoor}$ textures and 4 $\color{red}{indoor}$ textures. The table below provides detailed information and download links for each sequence. The camera parameters can be found here.

| Sequence Name | Total Size | Length | Download Link |
| --- | --- | --- | --- |
| Brick_seq1 | 1.0 GB | 14m | Link |
| Brick_seq2 | 0.9 GB | 25m | Link |
| Carpet1_seq1 | 1.7 GB | 43m | Link |
| Carpet1_seq2 | 1.7 GB | 38m | Link |
| Carpet2_seq1 | 3.0 GB | 41m | Link |
| Carpet3_seq1 | 0.7 GB | 17m | Link |
| Carpet3_seq2 | 0.7 GB | 19m | Link |
| Carpet3_seq3 | 1.0 GB | 45m | Link |
| Coarse_asphalt_seq1 | 1.2 GB | 16m | Link |
| Concrete_seq1 | 1.0 GB | 23m | Link |
| Concrete_seq2 | 0.9 GB | 24m | Link |
| Fine_asphalt_seq1 | 1.1 GB | 22m | Link |
| Fine_asphalt_seq2 | 1.3 GB | 28m | Link |
| Granite_tiles_seq1 | 1.2 GB | 27m | Link |
| Granite_tiles_seq2 | 1.6 GB | 41m | Link |
| Gravel_road1_seq1 | 0.8 GB | 18m | Link |
| Gravel_road2_seq1 | 2.1 GB | 46m | Link |

Run with Your Data

The data should be organized in the following format:

dataroot
├── image_names.txt
├── rgb
│   ├── 00001.png
│   ├── 00002.png
│   ├── 00003.png
│   └── ......
└── times.txt

where image_names.txt lists the image file names in dataroot/rgb and times.txt contains the corresponding timestamps as double-precision values.
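If your recorder does not already produce these two files, a small helper along the following lines can generate them; the constant frame rate here is a placeholder assumption, so substitute your camera's real capture timestamps if you have them.

```python
# Hypothetical helper for the layout above: index dataroot/rgb into
# image_names.txt and times.txt. The fixed frame rate is an assumption;
# use your camera's real timestamps when available.
from pathlib import Path

def index_dataroot(dataroot: str, fps: float = 30.0) -> None:
    root = Path(dataroot)
    names = sorted(p.name for p in (root / "rgb").glob("*.png"))
    (root / "image_names.txt").write_text("\n".join(names) + "\n")
    times = (f"{i / fps:.6f}" for i in range(len(names)))
    (root / "times.txt").write_text("\n".join(times) + "\n")

index_dataroot("/path/to/dataroot")
```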

Experiments

Data Association

(Figure: data association comparison)

We compare the data association of our system with ORB and SIFT on the HD Ground dataset. The numbers of features and matching inliers are given. For KCC, the correlation results are projected onto the three coordinate axes, representing the estimate of the 3-DoF motion: the vertical axis is the confidence of the estimated motion given on the horizontal axis. The higher the peak relative to other positions, the higher the confidence of the motion estimate. The results show that KCC's data association is more stable across various ground-texture images.
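One standard way to score "how much the peak stands out" is a peak-to-sidelobe ratio over the correlation map; the sketch below is an illustrative reading of that idea, not necessarily the confidence measure NI-SLAM itself uses.

```python
# Illustrative peak-to-sidelobe ratio: how far the correlation peak
# rises above the rest of the map, in sidelobe standard deviations.
# Not necessarily the exact confidence measure used by NI-SLAM.
import numpy as np

def peak_confidence(corr: np.ndarray, exclude: int = 5) -> float:
    y, x = np.unravel_index(np.argmax(corr), corr.shape)
    # Mask out a small window around the peak, keep the sidelobe.
    mask = np.ones(corr.shape, dtype=bool)
    mask[max(0, y - exclude):y + exclude + 1,
         max(0, x - exclude):x + exclude + 1] = False
    sidelobe = corr[mask]
    return float((corr[y, x] - sidelobe.mean()) / (sidelobe.std() + 1e-12))
```

A sharp, unambiguous peak yields a large ratio, while flat or multi-peaked maps (e.g. on highly repetitive textures) score low.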

Visual Odometry

This experiment is conducted on the GeoTracking dataset. The left figure shows the trajectories produced by our system and GT-SLAM on 4 sequences. The right figure compares the error distributions of the systems on the Gravel_road2_seq1 sequence: the vertical axis is the proportion of pose errors below the error threshold on the horizontal axis.
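The right-hand curve is simply the cumulative distribution of the pose errors; something along these lines reproduces the metric (names here are illustrative).

```python
# The metric behind the right figure: for each error threshold, the
# fraction of pose errors at or below it (a cumulative distribution).
import numpy as np

def error_cdf(errors: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    errors = np.sort(np.asarray(errors))
    return np.searchsorted(errors, thresholds, side="right") / errors.size
```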

Loop Closure

These two figures show the performance of NI-SLAM with and without loop correction on the Fine_asphalt_seq2 sequence. The pose errors decrease significantly after loop correction.