
Releases: Owen-Liuyuxuan/visualDet3D

Pretrained unofficial implementation of Digging Mono3D

11 Dec 02:42

We provide an unofficial re-implementation of Digging Into Output Representation For Monocular 3D Object Detection (Digging_M3D), which introduces a simple but important numerical trick that significantly improves KITTI mAP scores and reshuffles the KITTI leaderboard. Details can be found in the paper. At the time of this release the paper had not been officially published; we will keep up with updates to the paper.

Pretrained model for YOLOStereo3D

18 Mar 01:55

Pretrained model for Ground-aware Monocular 3D Object Detection for Autonomous Driving.

The model file can be placed under workdirs/Stereo3D/checkpoint/ (you should provide the path to the model file on the command line).

anchor_mean/std_Car/Pedestrian.npy should be placed under workdirs/Yolo3D/output/training. You can reproduce the .npy files by running the preprocessing scripts on the 'test split'.
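
As a quick sanity check, the generated statistics can be inspected with NumPy before training or evaluation. This is only a sketch: the file names below (anchor_mean_Car.npy, anchor_std_Car.npy, etc.) are assumptions based on the description above; adjust them to whatever the preprocessing script actually writes.

```python
import numpy as np

# Hypothetical file names, inferred from "anchor_mean/std_Car/Pedestrian.npy" above.
stats_dir = "workdirs/Yolo3D/output/training"

for cls in ["Car", "Pedestrian"]:
    mean = np.load(f"{stats_dir}/anchor_mean_{cls}.npy")
    std = np.load(f"{stats_dir}/anchor_std_{cls}.npy")
    # Print the shapes so mismatched statistics are caught before running the model.
    print(cls, "mean shape:", mean.shape, "std shape:", std.shape)
```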

Backward Incompatibility:

We updated the function that converts between the observation angle (alpha) and the 3D rotation angle (theta), following the more accurate version from RTM3D. This breaks the results of models from the previous release.

We therefore retrained a YOLOStereo3D model to adapt to this change, so the released model performs slightly differently from the one on the KITTI leaderboard.
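
For reference, a minimal sketch of the standard KITTI relation between the two angles (theta = alpha + arctan(x / z)); this is the textbook convention, not necessarily the exact implementation used in this repository or in RTM3D.

```python
import numpy as np

def alpha_to_theta(alpha, x, z):
    """Observation angle (alpha) -> global yaw (theta), given the object's
    camera-frame position (x, z). Standard KITTI convention."""
    theta = alpha + np.arctan2(x, z)
    return (theta + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)

def theta_to_alpha(theta, x, z):
    """Global yaw (theta) -> observation angle (alpha); inverse of the above."""
    alpha = theta - np.arctan2(x, z)
    return (alpha + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)
```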

Notice:
To get similar performance on the test split, you need to train for more epochs (we tested with 80 epochs, for example), whereas about 50 epochs are enough to reach saturated performance on the validation split (empirically, with the current learning-rate settings).

| Benchmark | Easy | Moderate | Hard |
| --- | --- | --- | --- |
| Car Detection | 94.75 % | 84.50 % | 62.13 % |
| Car Orientation | 93.65 % | 82.88 % | 60.92 % |
| Car 3D Detection | 65.77 % | 40.71 % | 29.99 % |
| Car Bird's Eye View | 74.00 % | 49.54 % | 36.30 % |
| Pedestrian Detection | 58.34 % | 49.54 % | 36.30 % |
| Pedestrian Orientation | 50.41 % | 36.81 % | 31.51 % |
| Pedestrian 3D Detection | 31.03 % | 20.67 % | 18.34 % |
| Pedestrian Bird's Eye View | 32.52 % | 22.74 % | 19.16 % |

Pretrained model for GAC

01 Feb 08:16

Pretrained model for Ground-aware Monocular 3D Object Detection for Autonomous Driving.

The model file can be placed under workdirs/Yolo3D/checkpoint/ (you should provide the path to the model file on the command line).

anchor_mean/std_Car.npy should be placed under workdirs/Yolo3D/output/training. You can reproduce the .npy files by running the preprocessing scripts on the 'test split'.

| Benchmark | Easy | Moderate | Hard |
| --- | --- | --- | --- |
| Car Detection | 92.35 % | 79.57 % | 59.61 % |
| Car Orientation | 90.87 % | 77.47 % | 57.99 % |
| Car 3D Detection | 21.60 % | 13.17 % | 9.94 % |
| Car Bird's Eye View | 29.38 % | 18.00 % | 13.14 % |