Lidar Calibration Exploratory Tools

A set of estimation, visualization, and evaluation modules for performing lidar-lidar calibration for autonomous vehicles and other robotic systems. This framework was built and validated for analyzing point cloud and odometry data from the DARPA SubT Challenge.

This repository contains tools for lidar calibration via a hand-eye framework, as well as tools for building and visualizing calibrated point clouds. Specifically, these tools include sub-modules for:

  1. Computing and estimating relative transformations between different lidar sensors (written in Python).
  2. Evaluating the quality of these calibrations (written in Python and C++).
  3. Plotting and visualizing point clouds (written in C++ using PCL and OctoMap).
  4. Visualizing relative poses, both statically and dynamically (written in MATLAB).

Installation

Point Cloud Visualization and Evaluation (C++)

To view the cumulative point clouds, you need to install OctoMap:

sudo apt-get install ros-melodic-octomap-server                                                
sudo apt-get install ros-melodic-octomap   

We also need the PCL library:

sudo apt-get install libpcl-dev
sudo apt-get install pcl-tools

(These commands work on Ubuntu 18.04. On older releases, PCL may not be available from the distribution repositories, so you will need to install it another way.)

The directory lidar_pointcloud_analysis is a ROS package that needs to be installed in your catkin workspace.

Relative Pose Estimation and Analysis (Python)

To install all Python dependencies, you can do so with either conda or pip:

  • Conda: conda env create -f python_env/environment.yml
  • Pip: pip install -r python_env/requirements.txt

Pose Visualization (MATLAB)

To visualize the resulting poses, you will need a working installation of MATLAB.

Recommended Workflow

Though we aim for these modules to be largely standalone and modular, we recommend the following workflow for pre-processing, estimating relative transformations, evaluating point cloud calibration quality, and visualizing point clouds.

Note:

  • Steps 1-4 are for estimating the relative transformation between lidar sensors, and performing internal evaluation of these transformations.
  • Steps 5-6 are for analyzing and evaluating the quality of these transformations through perturbation analysis and lidar point cloud visualization.
  • Step 7 is for visualizing the estimated relative transformations between lidar sensors using MATLAB.

Steps:

  1. Convert the rosbag datasets into CSVs using the rostopic echo command below.
  2. Run calibration_optimization/clean_csvs.py to pre-process the dataset and clean the column data. See this file below for instructions on how to run it. If you would like to examine the cross-correlation of the odometry measurements over time, you can do so by running calibration_optimization/timing_analysis.py.
  3. Run calibration_optimization/extract_pose_yaml.py to compute initial estimates for relative transformations between the lidar sensors on the Husky robots. See this file below for instructions on how to run it.
  4. Run calibration_optimization/analysis.py. If you need to adjust the optimization or evaluation sub-routines, e.g. the cost function or weighting scheme, this can be done by modifying the functions under calibration_optimization/analysis_utils.py.
  5. To visualize the lidar scans (currently configured to run 3 rosbags at a time), run our ROS launch file lidar_pointcloud_analysis/runbags.launch.
  6. To run further analysis for evaluating point cloud consistency, you can run the following scripts (make sure the launch file from the previous step is still running), all under lidar_pointcloud_analysis/src/. See the specific notes for each of these files below under "Point Cloud Validation".
    • analyze_perturbed.cpp
    • analyze_single.cpp
    • build_for_analysis.cpp
    • build_perturbed.cpp
    • ros_to_pcds.cpp
    • save_octomap.cpp
    • utils.cpp
  7. To visualize the relative poses between the lidar sensors, use MATLAB_vis/drawtriad.m. Please check the InitandFinalVizUNWEIGHTED.m and InitandFinalVizWEIGHTED.m files to make sure the estimates match the transformations being plotted.

File Overview

The transform optimization process assumes that the rosbags have already been converted into CSV files. This can be done with the following command:

rostopic echo -b raw_bag1.bag -p /husky4/lo_frontend/odometry > csv_file.csv

This command must be run once for each bag. Because the resulting column names are unwieldy, the second step is to run clean_csvs.py, which renames them to something easier to work with. Note that you will probably have to edit the desired filenames in the Python script.

clean_csvs.py

Generates CSVs for the lidar odometry with sensible header names: time, x, y, z, qx, qy, qz, qw. The pose covariance is represented as cov_pose_0 through cov_pose_35 in row-major order (although we are still a little unclear on the significance of this covariance).

To run (please make sure you run the rostopic command above first):

cd calibration_optimization
python3 clean_csvs.py
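
The renaming amounts to mapping the long rostopic echo -p column names onto the short names listed above. Below is a minimal Python sketch of the idea; the input/output filenames and the exact source column names are assumptions, so check them against your exported CSVs rather than treating this as the script's actual contents.

import pandas as pd

# Hypothetical filenames -- edit to match your exported CSVs.
df = pd.read_csv("husky4_odometry_raw.csv")

# rostopic echo -p emits long, fully qualified column names; map them onto the
# short names used by the rest of the pipeline.
rename_map = {
    "%time": "time",
    "field.pose.pose.position.x": "x",
    "field.pose.pose.position.y": "y",
    "field.pose.pose.position.z": "z",
    "field.pose.pose.orientation.x": "qx",
    "field.pose.pose.orientation.y": "qy",
    "field.pose.pose.orientation.z": "qz",
    "field.pose.pose.orientation.w": "qw",
}
# The 36 pose-covariance entries become cov_pose_0 ... cov_pose_35 (row-major).
rename_map.update({f"field.pose.covariance{i}": f"cov_pose_{i}" for i in range(36)})

df = df.rename(columns=rename_map)[list(rename_map.values())]
df.to_csv("husky4_odometry_clean.csv", index=False)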

analysis.py

Formulates and solves an optimization problem to find the maximum likelihood estimate for the lidar-lidar transform. See the section below "Running the Analysis" for detailed information on command-line parameters and theory behind our estimation approach.

To Run: See section below "Running the Analysis".

extract_pose_yaml.py

Parses the yaml file that stores robot sensor calibration values into homogeneous transformation matrices.

To Run:

cd calibration_optimization
python3 extract_pose_yaml.py
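
The core operation here is converting a translation plus quaternion from the YAML file into a 4 x 4 homogeneous transformation matrix. A minimal sketch of that conversion is below; the YAML keys are hypothetical and will not necessarily match the schema of husky4_sensors.yaml.

import numpy as np
import yaml
from scipy.spatial.transform import Rotation

def pose_to_matrix(translation, quat_xyzw):
    # Build a 4x4 homogeneous transform from a translation and an (x, y, z, w) quaternion.
    T = np.eye(4)
    T[:3, :3] = Rotation.from_quat(quat_xyzw).as_matrix()
    T[:3, 3] = translation
    return T

with open("husky4_sensors.yaml") as f:
    config = yaml.safe_load(f)

# Hypothetical keys -- adapt to the actual structure of the calibration file.
entry = config["velodyne_front"]
T_base_front = pose_to_matrix(
    [entry["x"], entry["y"], entry["z"]],
    [entry["qx"], entry["qy"], entry["qz"], entry["qw"]],
)
print(T_base_front)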

timing_analysis.py

Runs a cross-correlation of the angular velocity in each lidar frame to find the time offset between sensors. Based on our preliminary work, there does not seem to be much of an offset. To Run:

cd calibration_optimization
python3 timing_analysis.py
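
The offset estimate boils down to cross-correlating the two angular-speed signals and reading off the lag at the correlation peak. A minimal sketch, assuming both signals have already been resampled onto a common, uniformly spaced time base with step dt (the function name is ours, not the script's):

import numpy as np

def estimate_time_offset(omega_a, omega_b, dt):
    # Cross-correlate the two zero-mean angular-speed signals.
    a = omega_a - omega_a.mean()
    b = omega_b - omega_b.mean()
    corr = np.correlate(a, b, mode="full")
    # Convert the peak index to a lag in seconds; a positive value means
    # omega_b leads omega_a by that amount.
    lag_samples = np.argmax(corr) - (len(b) - 1)
    return lag_samples * dt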

relative_pose_processing.py

A variety of utility functions to process the data. Includes retiming and alignment, quaternion multiplication, calculating angular velocity from pairs of quaternions, and turning the dataframes into lists of relative poses.
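
As one example of what these helpers do, converting a sequence of absolute odometry poses into relative poses between consecutive frames is just T_rel = inv(T_i) * T_{i+1}. A minimal sketch (the function name is ours, not the module's actual API):

import numpy as np

def relative_poses(T_list):
    # T_list: list of 4x4 absolute poses; returns relative poses between consecutive frames.
    return [np.linalg.inv(T_prev) @ T_next for T_prev, T_next in zip(T_list[:-1], T_list[1:])]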

Relative Pose Estimation and Analysis

To estimate optimal relative poses between the lidar frames, we use Python and pymanopt to run iterative gradient-based optimization over SE(3), estimating relative pose transformations that minimize the distance to the observed pose transformations.
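
To make the shape of this optimization concrete, here is a minimal sketch that minimizes the hand-eye residual A_i X - X B_i over a rotation-vector-plus-translation parameterization using SciPy. The repository itself uses pymanopt to optimize directly on the SE(3) manifold, so treat this only as an illustration of the objective, not the actual solver.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def params_to_transform(params):
    # params: 3 rotation-vector components followed by 3 translation components.
    X = np.eye(4)
    X[:3, :3] = Rotation.from_rotvec(params[:3]).as_matrix()
    X[:3, 3] = params[3:]
    return X

def hand_eye_residuals(params, A_list, B_list):
    # Stacked residuals of A_i X - X B_i for each pair of relative motions.
    X = params_to_transform(params)
    return np.concatenate([(A @ X - X @ B).ravel() for A, B in zip(A_list, B_list)])

def estimate_hand_eye(A_list, B_list):
    # A_list, B_list: lists of 4x4 relative motions of the two lidars over the same intervals.
    sol = least_squares(hand_eye_residuals, np.zeros(6), args=(A_list, B_list))
    return params_to_transform(sol.x)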

Hand-Eye Calibration Problem

This manifold optimization framework solves the problem of estimating the relative transform between two coordinate systems, in our case two lidar sensors. We use the framework below to compute an optimal estimate of the relative transform:

[Figure: hand-eye calibration setup]

Mathematically, this can be computed via the following optimization:

[Figure: hand-eye calibration objective]
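
For reference, a standard way to write this objective in LaTeX (our notation, which may differ slightly from the figure) is:

\min_{X \in SE(3)} \; \sum_{i=1}^{N} \lVert A_i X - X B_i \rVert_F^2

where A_i and B_i are the relative motions of the two lidar frames over the same time interval and X is the fixed lidar-to-lidar transform being estimated.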

Running the Analysis

To run this analysis, run:

conda activate darpa_subt   # If using the conda environment; otherwise skip this command
cd calibration_optimization
python3 analysis.py --weighted --use_xval --reject_threshold 100 --kfolds 10

Where the command-line parameters are given by:

  1. weighted: Whether to use weighted optimization to account for noise in rotation and translation.

  2. reject_threshold: Maximum value of the ICP covariance diagonal above which a sample is rejected (see the sketch after this list).

  3. kfolds: If cross-validation is enabled, this is the number of cross-validation folds to use for it.

  4. use_xval: Whether to use cross-validation when fitting the model and calculating relative pose estimates.
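
For intuition, the rejection rule controlled by reject_threshold amounts to dropping any sample whose ICP covariance diagonal has a maximum entry above the threshold. A minimal sketch of that filter (names are hypothetical):

import numpy as np

def reject_by_icp_diag(samples, icp_cov_diags, reject_threshold=100.0):
    # Keep only samples whose maximum ICP covariance diagonal entry is below the threshold.
    keep = [np.max(diag) < reject_threshold for diag in icp_cov_diags]
    return [s for s, k in zip(samples, keep) if k]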

This will:

  1. Estimate the relative pose transformation between: (i) velodyne and velodyne_front, (ii) velodyne and velodyne_rear, and (iii) velodyne_front and velodyne_rear. These estimates are initialized from the configuration file husky4_sensors.yaml, from which we extract initial relative pose transformations between frames.

  2. Save the final estimates of these relative pose transformations, represented as 4 x 4 matrices, to the directory calibration_optimization/results/final_estimates_unweighted_{True, False}, where the suffix of this directory depends on whether you use weighted estimates to optimize the system (a flag in analysis.py).

  3. Compute the Root Mean Squared Error (RMSE) of the relative (i) pose transformation, (ii) rotation, and (iii) translation. This is carried out for both the initial and final estimates. These results for each pairwise combination of sensors can be found under calibration_optimization/results/analysis_results_unweighted_{True, False}, again depending on whether or not the samples are weighted during training.

  4. Generate plots of the odometry for all three sensors. These plots are saved to calibration_optimization/plots/.

  5. Generate plots of the ICP covariance diagonal elements over time for each sensor. These are saved in calibration_optimization/plots/icp_plots/ (or calibration_optimization/plots/icp_plots_clipped if the maximum value is clipped).

  6. If cross-validation is enabled, the results will be stored in a new (or existing) directory given by calibration_optimization/results/cross_validation_folds=K, where K is the number of folds.

Utility functions for running the script in analysis.py can be found in calibration_optimization/analysis_utils.py.

Point Cloud Validation

Currently, the command-line arguments for these scripts may not be readily apparent, and you may need to look at the source code to understand how to use them. We hope to add better documentation for each executable in the future.

Running

Currently, the package is set up to run exactly three lidar bags at a time. The runbags.launch file will launch three bags, assumed to be from the main, front, and rear lidars. It also sets up static transform publishers based on the hand-tuned calibration from JPL. After running this, it should be possible to visualize the lidar scans in rviz. It also starts an octomap server for each lidar, which can also be visualized in rviz.

The following scripts are built by the ROS package:

save_octomap_cloud

While visualizing live data in ROS can be nice, we have found it useful to save the data outside of ROS to allow for repeatable and batch processing. This utility will save the current octomaps to .pcd files (which can easily be loaded into PCL).

ros_to_pcds

This utility saves each individual point scan from a lidar to a .pcd file. This is useful if you want to load the lidar scans directly into PCL without having to keep dealing with ROS. It is currently configured to save scans from each lidar at 2 Hz. It also saves the pose of the main lidar at each time step, so that the lidar scans can all be put back into the same "global" reference frame.

build_for_analysis

Builds a point cloud consisting of many individual scans and performs voxel downsampling. Currently, an issue with the PCL library limits the voxel resolution; it is about 5 cm on this dataset, but may be larger or smaller on other datasets (we believe it is related to the bounding-box volume of the data).

build_perturbed

Utility to test the point cloud consistency of slightly perturbed lidar calibrations. Generates small rotation perturbations of the calibration matrix; the resulting point cloud error is evaluated by the next utility.
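
Although build_perturbed is a C++ utility, the perturbation idea is easy to sketch in Python: compose the nominal calibration with a small random rotation of a fixed magnitude (the magnitude below is an arbitrary illustration, not the value used by the utility).

import numpy as np
from scipy.spatial.transform import Rotation

def perturb_calibration(T_calib, angle_rad=0.01, rng=None):
    # Left-multiply the calibration by a small rotation about a random axis.
    rng = np.random.default_rng() if rng is None else rng
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    dT = np.eye(4)
    dT[:3, :3] = Rotation.from_rotvec(angle_rad * axis).as_matrix()
    return dT @ T_calib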

analyze_perturbed

Evaluates the point cloud error of the perturbed calibrations generated by build_perturbed (see above).

analyze_single

Evaluates the consistency between point clouds. Ideally, this function should be improved to use feature-based point cloud matching instead of naive point-to-point distances.
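
The naive point-to-point measure referred to here is essentially the mean nearest-neighbor distance between two clouds. The actual utility is written in C++ with PCL; a Python sketch of the same idea using a KD-tree:

import numpy as np
from scipy.spatial import cKDTree

def mean_nearest_neighbor_distance(cloud_a, cloud_b):
    # cloud_a, cloud_b: Nx3 and Mx3 arrays; mean distance from each point in
    # cloud_a to its nearest neighbor in cloud_b.
    distances, _ = cKDTree(cloud_b).query(cloud_a)
    return float(np.mean(distances))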

Octomap / PCL Tools

Running rosrun final save_octomap_cloud generates three point clouds, one for each lidar. These are called pc_main.pcd, pc_front.pcd and pc_rear.pcd. We can visualize these point clouds with pcl_viewer with the command:

pcl_viewer -multiview 1 pc_main.pcd pc_front.pcd pc_rear.pcd

or

pcl_viewer pc_main.pcd pc_front.pcd pc_rear.pcd

The former command displays the clouds separately, and the latter displays them on top of each other.

Technical Details

If you're interested in any of the technical details behind this framework, please take a look at our slide deck in slide_deck.pdf and our paper in paper.pdf.

Acknowledgements

We would like to thank Professor Luca Carlone and the VNAV staff for support on this project. Additionally, we thank Benjamin Morrell and Fadhil Ginting for providing data and introducing us to the DARPA SubT Challenge. Finally, we would like to thank Mike Schoder and Phil Cotter for collaborating and coordinating with us on this project.

Citation

If you find this project useful, please consider citing our paper:

@techreport{wenberg-ray-sander-2020,
author = {Ryan Sander and Dakota Wenberg and Aaron Ray},
year = {2020},
month = {12},
pages = {},
title = {Lidar-Lidar Hand-Eye Calibration via Optimization on SE(3)},
url = {https://github.com/rmsander/lidar-lidar-cal},
doi = {10.13140/RG.2.2.25464.67840}
}
