LiDAR-SFM (paper under review)

In recent years, LiDAR Fiducial Markers (LFMs), planar objects designed to provide artificial features for LiDAR sensors, have emerged. However, the potential of LFMs in mapping and localization remains unexplored. Specifically, conventional LiDAR odometry struggles in cases involving dramatic viewpoint changes and a scarcity of natural features, whereas fiducial markers are beneficial in precisely such situations.

In this study, we propose a novel framework that solves mapping and localization using LFMs. We first develop an adaptive threshold method to boost in-the-wild LFM detection. Then, we design a mapping and localization pipeline built on two levels of graphs. Frames, LFMs, and the point-to-point errors of their pose estimates form the first-level graph, which is a weighted graph. Using this graph, we derive initial values for the relative poses between a predefined anchor frame and each non-anchor frame by propagating along the shortest paths found by Dijkstra's algorithm. Subsequently, we construct the second-level graph as a factor graph. The variables are 1) LFM poses, 2) frame poses, and 3) LFM corner positions. The factors represent 1) measurements of LFM poses, 2) measurements of LFM corner positions, 3) LFM corner positions in the LFM coordinate system, 4) the anchor frame pose, and 5) relative poses between frames. Leveraging the initial values from the first-level graph and the LFM detections, we obtain the final result by optimizing all variables in the second-level graph with GTSAM.

We use a Livox MID-40 LiDAR in our experiments, which demonstrate the significant superiority of our approach in 3D reconstruction quality and localization accuracy compared to previous methods, including Livox Mapping and LOAM Livox. Moreover, only our method is applicable to scenes with few natural features.
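To make the two-level design concrete, here is a minimal sketch using networkx and GTSAM (both listed under Requirements below). It is our illustration of the idea, not the repository's code: the toy graph, edge weights, relative poses, noise sigmas, and variable names are all made-up examples, and only frame poses are modeled (the full pipeline also has LFM pose and corner variables).

```python
import numpy as np
import networkx as nx
import gtsam

# ---- Level 1: weighted graph + Dijkstra initialization ----
# Nodes stand in for frames; 'weight' mimics the pose-estimation
# point-to-point error and 'rel' the measured relative pose.
# All numbers are made up for illustration.
G = nx.Graph()
G.add_edge(0, 1, weight=0.02, rel=gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 0.0, 0.0])))
G.add_edge(1, 2, weight=0.05, rel=gtsam.Pose3(gtsam.Rot3(), np.array([0.0, 1.0, 0.0])))
G.add_edge(0, 2, weight=0.50, rel=gtsam.Pose3(gtsam.Rot3(), np.array([1.0, 1.0, 0.0])))

anchor = 0
init = {anchor: gtsam.Pose3()}  # anchor frame fixed at the origin
for target in (1, 2):
    # Dijkstra picks the lowest-error chain of pairwise estimates.
    path = nx.shortest_path(G, anchor, target, weight='weight')
    pose = gtsam.Pose3()
    for u, v in zip(path, path[1:]):
        # Propagate along the path (here every edge is traversed in its
        # stored direction; in general a reverse traversal needs rel.inverse()).
        pose = pose.compose(G[u][v]['rel'])
    init[target] = pose

# ---- Level 2: factor graph refined with GTSAM ----
graph = gtsam.NonlinearFactorGraph()
X = lambda i: gtsam.symbol('x', i)
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-6))
between_noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 1e-2))
graph.add(gtsam.PriorFactorPose3(X(anchor), gtsam.Pose3(), prior_noise))  # anchor factor
for u, v, d in G.edges(data=True):
    graph.add(gtsam.BetweenFactorPose3(X(u), X(v), d['rel'], between_noise))

values = gtsam.Values()
for i, pose in init.items():
    values.insert(X(i), pose)  # Level-1 output seeds the optimization
result = gtsam.LevenbergMarquardtOptimizer(graph, values).optimize()
print(result.atPose3(X(2)).translation())
```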

Instance Reconstruction Evaluation

We use the same rosbag to evaluate the proposed method against Livox Mapping and LOAM Livox. The rosbag was recorded in a garage. The LiDAR follows an elliptical trajectory to scan a vehicle (Mercedes-Benz GLB). Please refer to the following videos for the details of the rosbag.

  • This video (×24) shows the mapping procedure of Livox Mapping.

  • This video (×24) shows the mapping procedure of LOAM Livox.

  • This video shows the pipeline of our framework.

  • Ground Truth (Mercedes-Benz GLB).

In the following, all point clouds are normalized into the unit sphere (i.e., coordinates in [-1, 1]). The first metric is the Chamfer Distance (CD). Given two point sets $S_1$ and $S_2$, the CD accumulates, over both directions, the squared distance from each point to its nearest neighbor in the other set:

$$\mathrm{CD}(S_1,S_2)=\frac{1}{|S_1|}\sum_{x\in S_1}\min_{y\in S_2}\lVert x-y\rVert_2^2+\frac{1}{|S_2|}\sum_{y\in S_2}\min_{x\in S_1}\lVert x-y\rVert_2^2$$

A smaller CD indicates higher fidelity. The second metric is the recall of the ground-truth points from the reconstructed shape, i.e., the fraction of ground-truth points whose nearest neighbor in the reconstruction lies within a distance threshold $\tau$:

$$\mathrm{Recall}=\frac{\left|\{x\in S_{\mathrm{gt}} : \min_{y\in S_{\mathrm{rec}}}\lVert x-y\rVert_2<\tau\}\right|}{|S_{\mathrm{gt}}|}$$

A higher recall indicates higher fidelity.
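The sketch below shows how these two metrics can be computed with scipy (already listed in the requirements). It is our illustration, not the repo's evaluation script, and the threshold tau = 0.01 is an assumption rather than the paper's value.

```python
import numpy as np
from scipy.spatial import cKDTree

def normalize_to_unit_sphere(pts):
    """Center the cloud and scale it into the unit sphere (coords in [-1, 1])."""
    pts = pts - pts.mean(axis=0)
    return pts / np.linalg.norm(pts, axis=1).max()

def chamfer_distance(a, b):
    """Squared nearest-neighbor distances, averaged per set and summed."""
    d_ab, _ = cKDTree(b).query(a)  # for each point of a, nearest point of b
    d_ba, _ = cKDTree(a).query(b)
    return (d_ab ** 2).mean() + (d_ba ** 2).mean()

def recall(gt, rec, tau=0.01):
    """Fraction of ground-truth points within tau of the reconstruction.
    tau = 0.01 is an illustrative guess, not the paper's threshold."""
    d, _ = cKDTree(rec).query(gt)
    return (d < tau).mean()

# Stand-ins for real clouds loaded from .pcd files (e.g., via open3d).
gt = normalize_to_unit_sphere(np.random.rand(1000, 3))
rec = normalize_to_unit_sphere(np.random.rand(1000, 3))
print(chamfer_distance(gt, rec), recall(gt, rec))
```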

  • Ours.
    CD: 0.0030. Recall: 96.22%.

  • Livox Mapping.
    CD: 0.0106. Recall: 78.83%.

  • LOAM Livox.
    CD: 0.0335. Recall: 75.27%.

Mapping and Localization Evaluation

  • While the quality of instance reconstruction indirectly reflects the localization accuracy, we also directly compare the localization accuracy of different methods. The following figure shows the setup. The ground truth trajectory is given by an OptiTrack Motion Capture (MoCap) system.

  • The following video shows the sampled rosbag. Again, the same rosbag is provided to all three methods.
  • This video (×24) shows the mapping procedure of Livox Mapping.
  • This video (×24) shows the mapping procedure of LOAM Livox.
  • This video (×24) shows the mapping result of our method.
    This figure shows the comparison of the mapping results. (a): ours. (b): Livox Mapping. (c): LOAM Livox.
    This figure shows the comparison in terms of localization accuracy.

Requirements

  • GTSAM
First, please ensure you can run the Python demos provided by GTSAM.
    In our tests, GTSAM did not install properly in a conda environment, so we recommend against using one; with a broken install, errors are reported as soon as BearingRangeFactor3D or BetweenFactorPose3 is used.
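As a quick sanity check of the install (our suggestion, not from the repo), the snippet below constructs one of the factor types named above and checks that the other is exposed; a broken install typically fails at import or at factor construction.

```python
import numpy as np
import gtsam

# Build a trivial BetweenFactorPose3; a broken (e.g., conda) install
# usually raises here or at import time.
noise6 = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.1))
factor = gtsam.BetweenFactorPose3(gtsam.symbol('x', 0), gtsam.symbol('x', 1),
                                  gtsam.Pose3(), noise6)
print(type(factor))
print(hasattr(gtsam, 'BearingRangeFactor3D'))  # should print True
```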

  • IILFM
A light, insertable version of IILFM is included in the files. To build and run it, you need basic tools such as cmake installed.
    If you'd like to give it a quick try, just follow the commands below. Note, however, that the default settings correspond to the test that reconstructs a lab scene from 11 frames with added AprilTags (i.e., the default detector is an AprilTag detector and the marker size is 16.4 cm × 16.4 cm). If you want to try the ArUco demo (marker size: 69.2 cm × 69.2 cm), which reconstructs a vehicle, you will need to adjust the detector and marker-size settings as guided by the readme in iilfm. Also, do not forget to change the marker size in main.py.

  • Python Packages
    The following packages are required.
    AprilTag
    opencv-python
    networkx
    open3d
    scipy
    matplotlib
opencv-python (cv2) includes a built-in ArUco detector. Please ensure you can run the Python demos of ArUco detection. Again, a conda environment is not recommended.
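For instance, the following minimal detection check uses the OpenCV >= 4.7 ArucoDetector API (older 4.x versions expose cv2.aruco.detectMarkers instead); the image path and dictionary are placeholders, not values from this repo.

```python
import cv2

# Placeholder dictionary and image path; use whatever matches your markers.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
gray = cv2.imread('marker.png', cv2.IMREAD_GRAYSCALE)
corners, ids, _ = detector.detectMarkers(gray)
print(ids)  # None means no marker was detected
```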

Resources

  • Raw rosbag of the instance reconstruction evaluation.
  • Raw rosbag of the mapping and localization evaluation.
  • The extracted point clouds of vehicles and the script to run the quantitative evaluation are available here.
  • The LiDAR frames used by our method. Download the LiDAR frames, rename the folder of interest to 'pc', and put it in the same directory as 'main.py'.

Commands

git clone https://github.com/yorklyb/LiDAR-SFM.git
cd LiDAR-SFM
cd lidar-sfm
cd iilfm
mkdir build
cd build
cmake ..
make
mv tag_detection ../../
Ensure that the point clouds are named '1.pcd', '2.pcd', etc., and placed in a folder named 'pc'. The 'pc' folder should sit in the same directory as 'main.py'.
python3 main.py
After processing all the point clouds, you will see a graph plot. Close it by pressing 'q' or using the close button. Finally, an output file named 'out.pcd' will be generated.
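Before running main.py, you can optionally verify the folder layout with a quick check like this (ours, not part of the repo):

```python
import os

# Confirm 'pc' contains consecutively numbered frames 1.pcd, 2.pcd, ...
files = sorted(os.listdir('pc'), key=lambda f: int(os.path.splitext(f)[0]))
expected = [f'{i}.pcd' for i in range(1, len(files) + 1)]
assert files == expected, f'unexpected layout: {files[:5]}...'
print(f'{len(files)} frames ready')
```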

How to collect your own LiDAR frames

First, record the rostopic of the point cloud as rosbags. If you are using a Livox MID-40, run rosbag record /livox/lidar in a terminal while the Livox-ros-driver is running. Then, assuming you placed the LiDAR at N viewpoints and obtained N rosbags, put all of them in a folder named 'data'. Check this out as an example.
Suppose that you have done git clone https://github.com/yorklyb/LiDAR-SFM.git
cd LiDAR-SFM
cd merge
mkdir build
cd build
cmake ..
make
Put process.py and the 'data' folder into 'build'.
Open a terminal and run roscore. Open a new terminal and run python3 process.py. The rosbags will be converted into pcd files in a folder named 'processed'. Rename that folder to 'pc'.
If you are using another LiDAR model, change the rostopic name when recording (rosbag record your_topic_name) and update the topic in process.py accordingly.
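For reference, here is a rough sketch of the kind of conversion process.py performs. This is our reconstruction under the assumptions stated above (topic /livox/lidar, one merged cloud per bag), not the actual script.

```python
import os
import numpy as np
import open3d as o3d
import rosbag
from sensor_msgs import point_cloud2

# Sketch only: aggregate every /livox/lidar message in each bag under 'data/'
# into one cloud and write it out as processed/1.pcd, processed/2.pcd, ...
os.makedirs('processed', exist_ok=True)
for i, name in enumerate(sorted(os.listdir('data')), start=1):
    points = []
    with rosbag.Bag(os.path.join('data', name)) as bag:
        for _, msg, _ in bag.read_messages(topics=['/livox/lidar']):
            points.extend(point_cloud2.read_points(
                msg, field_names=('x', 'y', 'z'), skip_nans=True))
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(np.asarray(points, dtype=np.float64))
    o3d.io.write_point_cloud(os.path.join('processed', f'{i}.pcd'), cloud)
```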