I am running VIO on a custom dataset, but it is diverging to infinity. My scenario is that I have a drone with a downward-facing camera. I am running this drone using PX4, so I am attempting to convert the PX4 messages VehicleAcceleration, VehicleAttitude, and VehicleAngularVelocity into sensor_msgs/Imu for OpenVINS. I set the covariance to 0 for each of these measurements.
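For context, a minimal sketch of the field mapping I am doing (function and constant names are my own; this assumes PX4's conventions of an FRD body frame with `[w, x, y, z]` quaternions, versus the FLU body frame and `[x, y, z, w]` quaternions that ROS / `sensor_msgs/Imu` expect):

```python
# Hypothetical helper mapping PX4-style IMU fields into the layout
# sensor_msgs/Imu expects. Assumes PX4 conventions: body frame is FRD
# (x forward, y right, z down), quaternions are [w, x, y, z]; ROS uses
# FLU (x forward, y left, z up) and [x, y, z, w].

def frd_to_flu(v):
    """Rotate a body-frame vector from FRD to FLU (180 deg about x)."""
    x, y, z = v
    return (x, -y, -z)

def px4_to_ros_imu(q_wxyz, gyro_frd, accel_frd):
    """Return (orientation_xyzw, angular_velocity_flu, linear_accel_flu)."""
    w, x, y, z = q_wxyz
    # Reorder the quaternion to ROS [x, y, z, w]. Note a full NED->ENU
    # world-frame rotation would also be needed for the orientation;
    # it is omitted in this sketch.
    q_xyzw = (x, y, z, w)
    return q_xyzw, frd_to_flu(gyro_frd), frd_to_flu(accel_frd)

q, gyro, accel = px4_to_ros_imu((1.0, 0.0, 0.0, 0.0),
                                (0.1, 0.2, 0.3),
                                (0.0, 0.0, -9.81))
```

If the FRD-to-FLU step is skipped, the gyro and accelerometer axes are sign-flipped relative to what the filter assumes, which alone can make the estimate diverge.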
Here is my estimator_config.yaml:
```yaml
%YAML:1.0 # need to specify the file type at the top!
verbosity: "ALL"
use_fej: true # if first-estimate Jacobians should be used (enable for good consistency)
integration: "rk4" # discrete, rk4, analytical (if rk4 or analytical used then analytical covariance propagation is used)
use_stereo: false # if we have more than 1 camera, if we should try to track stereo constraints between pairs
max_cameras: 1 # how many cameras we have 1 = mono, 2 = stereo, >2 = binocular (all mono tracking)
calib_cam_extrinsics: true # if the transform between camera and IMU should be optimized R_ItoC, p_CinI
calib_cam_intrinsics: true # if camera intrinsics should be optimized (focal, center, distortion)
calib_cam_timeoffset: true # if timeoffset between camera and IMU should be optimized
calib_imu_intrinsics: false # if imu intrinsics should be calibrated (rotation and skew-scale matrix)
calib_imu_g_sensitivity: false # if gyroscope gravity sensitivity (Tg) should be calibrated
max_clones: 11 # how many clones in the sliding window
max_slam: 50 # number of features in our state vector
max_slam_in_update: 25 # update can be split into sequential updates of batches, how many in a batch
max_msckf_in_update: 40 # how many MSCKF features to use in the update
dt_slam_delay: 1 # delay before initializing (helps with stability from bad initialization...)
gravity_mag: 9.81 # magnitude of gravity in this location
feat_rep_msckf: "GLOBAL_3D"
feat_rep_slam: "ANCHORED_MSCKF_INVERSE_DEPTH"
feat_rep_aruco: "ANCHORED_MSCKF_INVERSE_DEPTH"
try_zupt: false
zupt_chi2_multipler: 0 # set to 0 for only disp-based
zupt_max_velocity: 0.1
zupt_noise_multiplier: 10
zupt_max_disparity: 0.5 # set to 0 for only imu-based
zupt_only_at_beginning: false
init_window_time: 2.0 # how many seconds to collect initialization information
init_imu_thresh: 2.0 # threshold for variance of the accelerometer to detect a "jerk" in motion
init_max_disparity: 10.0 # max disparity to consider the platform stationary (dependent on resolution)
init_max_features: 50 # how many features to track during initialization (saves on computation)
init_dyn_use: false # if dynamic initialization should be used
init_dyn_mle_opt_calib: false # if we should optimize calibration during initialization (not recommended)
init_dyn_mle_max_iter: 50 # how many iterations the MLE refinement should use (zero to skip the MLE)
init_dyn_mle_max_time: 0.05 # how many seconds the MLE should be completed in
init_dyn_mle_max_threads: 6 # how many threads the MLE should use
init_dyn_num_pose: 6 # number of poses to use within our window time (evenly spaced)
init_dyn_min_deg: 10.0 # orientation change needed to try to init
init_dyn_inflation_ori: 10 # what to inflate the recovered q_GtoI covariance by
init_dyn_inflation_vel: 100 # what to inflate the recovered v_IinG covariance by
init_dyn_inflation_bg: 10 # what to inflate the recovered bias_g covariance by
init_dyn_inflation_ba: 100 # what to inflate the recovered bias_a covariance by
init_dyn_min_rec_cond: 1e-12 # reciprocal condition number thresh for info inversion
init_dyn_bias_g: [ 0.0, 0.0, 0.0 ] # initial gyroscope bias guess
init_dyn_bias_a: [ 0.0, 0.0, 0.0 ] # initial accelerometer bias guess
record_timing_information: false # if we want to record timing information of the method
record_timing_filepath: "/tmp/traj_timing.txt" # https://docs.openvins.com/eval-timing.html#eval-ov-timing-flame
use_klt: true # if true we will use KLT, otherwise use an ORB descriptor + robust matching
num_pts: 800 # number of points (per camera) we will extract and try to track
fast_threshold: 20 # threshold for fast extraction (warning: lower thresholds can be expensive)
grid_x: 5 # extraction sub-grid count for horizontal direction (uniform tracking)
grid_y: 5 # extraction sub-grid count for vertical direction (uniform tracking)
min_px_dist: 10 # distance between features (features near each other provide less information)
knn_ratio: 0.80 # descriptor knn threshold for the top two descriptor matches
track_frequency: 21.0 # frequency we will perform feature tracking at (in frames per second / hertz)
downsample_cameras: false # will downsample image in half if true
num_opencv_threads: 4 # -1: auto, 0-1: serial, >1: number of threads
histogram_method: "HISTOGRAM" # NONE, HISTOGRAM, CLAHE
use_aruco: false
num_aruco: 1024
downsize_aruco: true
up_msckf_sigma_px: 50
up_msckf_chi2_multipler: 10
up_slam_sigma_px: 50
up_slam_chi2_multipler: 10
up_aruco_sigma_px: 50
up_aruco_chi2_multipler: 10
use_mask: false
relative_config_imu: "kalibr_imu_chain.yaml"
relative_config_imucam: "kalibr_imucam_chain.yaml"
```
Right off the bat, all your matrices are identity... which they should not be after performing the Kalibr steps correctly. Maybe double-check again?
Not all of them; I assumed there is no rotation and that it can be further calibrated online. I adjusted my new kalibr_imucam_chain.yaml file:
```yaml
%YAML:1.0
cam0:
  T_imu_cam: # rotation from camera to IMU R_CtoI, position of camera in IMU p_CinI
    - [0.0, -1.0, 0.0, 0.0]
    - [-1.0, 0.0, 0.0, 0.0]
    - [0.0, 0.0, -1.0, 0.0]
    - [0.0, 0.0, 0.0, 1.0]
  cam_overlaps: []
  camera_model: pinhole
  distortion_coeffs: [0, 0, 0, 0]
  distortion_model: radtan
  intrinsics: [893.55554717, 891.64586627, 988.19643079, 559.1368329] # fu, fv, cu, cv
  resolution: [1920, 1080]
  rostopic: /new_image
```
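One quick sanity check on a hand-written extrinsic is that its rotation block is actually a valid rotation (orthonormal, determinant +1); a reflection or a non-orthonormal matrix will silently corrupt the estimate. A small sketch using `numpy`:

```python
import numpy as np

# The hand-specified T_imu_cam from above; only the 3x3 rotation
# block is checked here.
T_imu_cam = np.array([
    [ 0.0, -1.0,  0.0, 0.0],
    [-1.0,  0.0,  0.0, 0.0],
    [ 0.0,  0.0, -1.0, 0.0],
    [ 0.0,  0.0,  0.0, 1.0],
])
R = T_imu_cam[:3, :3]

# R must satisfy R @ R.T = I and det(R) = +1 to be in SO(3).
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
print("rotation block is a valid SO(3) matrix")
```

This particular matrix does pass the check, so the question is whether it matches the physical mounting, not whether it is well-formed.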
**My Scenario:** I have a drone that must travel at high speeds. The drone has a forward-pointing IMU and a downward-pointing camera looking at the ground.
It isn't clear to me how you are getting the IMU information (angular velocity and linear acceleration). A reasonable calibration is needed for both the extrinsics and the intrinsics of both sensors (inertial and camera). There is a whole guide on how to do this here: https://docs.openvins.com/gs-calibration.html
Additionally, you need good timestamps for both sensors; from what I have seen of PX4, it isn't clear how you get the camera and IMU into the same clock frame.
If you are able to calibrate the camera + IMU pair, then you can expect to do state estimation; if you are unable to, then you shouldn't expect to be able to run VIO / SLAM on the sensors you have. I would try using Kalibr to do the calibration first and go from there.
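On the clock-frame point above: PX4 message timestamps are microseconds since autopilot boot, so they have to be shifted into the camera's clock before the filter sees them. A minimal sketch, where the offset itself is a hypothetical pre-computed constant (estimating it is the hard part, and only a small, roughly constant residual can be absorbed by online time-offset calibration):

```python
# Convert a PX4 timestamp (microseconds since autopilot boot) into
# seconds on the camera clock. BOOT_TO_CAMERA_OFFSET_S is a
# hypothetical constant; in practice it must be estimated, e.g. via
# message round-trips or hardware triggering.

BOOT_TO_CAMERA_OFFSET_S = 1_700_000_000.0  # hypothetical, pre-computed

def px4_stamp_to_camera_time(timestamp_us: int) -> float:
    """Map a boot-relative PX4 stamp onto the camera clock."""
    return timestamp_us * 1e-6 + BOOT_TO_CAMERA_OFFSET_S

t = px4_stamp_to_camera_time(2_500_000)  # a stamp 2.5 s after boot
```

If the two streams are stamped on different clocks with a drifting or jumping offset, no amount of tuning will keep the filter consistent.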
@goldbattle or anyone who can help: any suggestions given my scenario? Is the ground truth messing the trajectory up?