
Pretrained model for YOLOStereo3D

Released by @Owen-Liuyuxuan on 18 Mar, 01:55

Pretrained model for YOLOStereo3D, stereo 3D object detection for autonomous driving.

The model file should be placed under workdirs/Stereo3D/checkpoint/ (you provide the path to the model file on the command line).
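As a quick sanity check that the download is intact, the checkpoint can be loaded with PyTorch. This is a minimal sketch; the file name below is a placeholder, not the actual released name:

```python
# Minimal sketch: verify the downloaded checkpoint loads.
# "YOLOStereo3D_latest.pth" is a placeholder; substitute the actual
# file name from this release.
import torch

checkpoint = torch.load(
    "workdirs/Stereo3D/checkpoint/YOLOStereo3D_latest.pth",
    map_location="cpu",  # inspect without a GPU
)
# Release checkpoints are typically a state_dict, or a dict wrapping one.
print(type(checkpoint))
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys())[:5])
```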

anchor_mean/std_Car/Pedestrian.npy (i.e., anchor_mean_Car.npy, anchor_std_Car.npy, anchor_mean_Pedestrian.npy, anchor_std_Pedestrian.npy) should be placed under workdirs/Yolo3D/output/training. You can reproduce the .npy files by running the scripts on the 'test' split; see the sketch below.
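A small sketch to confirm the anchor statistics files are in place; the file names follow the pattern above and are an assumption about this release's exact naming:

```python
# Minimal sketch: confirm the anchor statistics files exist and
# inspect their shapes. File names are assumed from the naming
# pattern above; adjust if your copy differs.
import numpy as np

for cls in ("Car", "Pedestrian"):
    for stat in ("mean", "std"):
        path = f"workdirs/Yolo3D/output/training/anchor_{stat}_{cls}.npy"
        arr = np.load(path)
        print(path, arr.shape)
```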

Backward Incompatibility:

We updated the function that converts between the observation angle (alpha) and the 3D rotation angle (theta), following the more accurate version from RTM3D. This breaks the results of the models from the previous release.
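For context, here is a minimal sketch of the standard KITTI relation between the two angles, which uses the full viewing angle arctan2(x, z) of the object center in camera coordinates. This is our own illustration under that assumption, not the repository's exact code:

```python
# Sketch of the standard KITTI alpha <-> theta relation, where
# (x, z) is the object center in camera coordinates:
#   theta (rotation_y) = alpha + arctan2(x, z)
# Illustration only; not the repository's implementation.
import numpy as np

def alpha_to_theta(alpha: float, x: float, z: float) -> float:
    """Observation angle -> global yaw, wrapped to [-pi, pi)."""
    theta = alpha + np.arctan2(x, z)
    return (theta + np.pi) % (2 * np.pi) - np.pi

def theta_to_alpha(theta: float, x: float, z: float) -> float:
    """Global yaw -> observation angle, wrapped to [-pi, pi)."""
    alpha = theta - np.arctan2(x, z)
    return (alpha + np.pi) % (2 * np.pi) - np.pi
```

Because the two conversions disagree on the viewing-angle term, orientation targets trained under the old conversion no longer match predictions decoded with the new one.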

We therefore had to retrain a new YOLOStereo3D model to adapt to this change, so the released model performs slightly differently from the KITTI one.

Notice:
To get similar performance on the test split, you need to train for more epochs (80 epochs, for example), while only about 50 epochs are needed for saturated performance on the validation split (empirically, with the current learning-rate settings).

| Benchmark | Easy | Moderate | Hard |
| --- | --- | --- | --- |
| Car Detection | 94.75 % | 84.50 % | 62.13 % |
| Car Orientation | 93.65 % | 82.88 % | 60.92 % |
| Car 3D Detection | 65.77 % | 40.71 % | 29.99 % |
| Car Bird's Eye View | 74.00 % | 49.54 % | 36.30 % |
| Pedestrian Detection | 58.34 % | 49.54 % | 36.30 % |
| Pedestrian Orientation | 50.41 % | 36.81 % | 31.51 % |
| Pedestrian 3D Detection | 31.03 % | 20.67 % | 18.34 % |
| Pedestrian Bird's Eye View | 32.52 % | 22.74 % | 19.16 % |