This is the repository for recording the Omni-MOT dataset. Its functions include:
- Camera operations: move forward/backward, pitch up/down, roll left/right, yaw left/right
- Save camera parameters
- Calculate the ground truth for multiple object tracking (i.e. 3D bounding boxes, 2D bounding boxes, and visibility)
- Save videos and ground truth
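The 2D ground truth comes from projecting 3D bounding boxes into the image through a pinhole camera model. The sketch below is only an illustration of that geometry, not the recorder's actual code: it assumes a simplified camera frame (x right, y down, z forward) and uses the `fov`, `width`, and `height` fields found in the recorder's camera configuration.

```python
import math

def focal_length_px(fov_deg: float, width: int) -> float:
    """Horizontal focal length in pixels, derived from the horizontal FOV."""
    return width / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

def project(point_cam, fov_deg: float, width: int, height: int):
    """Project a 3D point given in camera coordinates (x right, y down,
    z forward) onto the image plane of an ideal pinhole camera."""
    x, y, z = point_cam
    f = focal_length_px(fov_deg, width)
    u = f * x / z + width / 2.0   # horizontal pixel coordinate
    v = f * y / z + height / 2.0  # vertical pixel coordinate
    return u, v
```

For example, with `fov=90` and `width=1920` the focal length is 960 px, and a point straight ahead of the camera projects to the image center. Note that CARLA's own coordinate convention (x forward, y right, z up) differs, so a real implementation first transforms world points into this camera frame.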
## Requirements
- CARLA 0.9.4 (refer to the [CARLA Docs])
- Ubuntu 16.04 or later
## Installation
- Clone this repository to your local disk:
  ```shell
  git clone https://github.com/shijieS/OMOTDRecorder.git
  ```
- Install the required packages:
  ```shell
  cd <this repository>
  pip install -r requirement.txt
  ```
## Usage
- Make sure all the required packages are installed and your Python environment is activated.
- Launch the CARLA simulator:
  ```shell
  cd <CARLA Project>
  ./CarlaUE4.sh
  ```
- Set up the camera by running
  ```shell
  python start_recording.py --config_name=test.json --auto_save=False --flag_show_windows=True --recording_num_scale=0
  ```
  or
  ```shell
  ./select_viewpoint.sh
  ```
  then use the shortcut keys to move the camera and fix its viewpoint.
- Modify the configuration file as needed. In our example, we use `test.json`, which is generated by the camera setup step above:
  ```json
  {
      "cameras": {
          "Easy_Camera_Cross1": {
              "fov": 90.0,
              "height": 1080,
              "max_record_frame": 10,
              "pitch": -42.999786376953125,
              "roll": 1.0506457329029217e-05,
              "width": 1920,
              "x": -79.3550796508789,
              "y": -0.8948922753334045,
              "yaw": 42.000396728515625,
              "z": 32.788326263427734
          },
          "Hard_Town3_Test_00": {
              "fov": 90.0,
              "height": 1080,
              "max_record_frame": 10000,
              "pitch": 0.0,
              "roll": 0.0,
              "width": 1920,
              "x": -78.0,
              "y": 8.300000190734863,
              "yaw": 0.0,
              "z": 39.0
          }
      },
      "host": "127.0.0.1",
      "mode": "parallel",
      "port": 2000,
      "save_root": "/home/ssj/Data/github/AwesomeMOTDataset/Dataset/Temp",
      "vehicle_num": [30, 45, 50],
      "weathers": ["Clear", "Cloudy", "Rain"]
  }
  ```
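Since the configuration is plain JSON, it can be inspected with a few lines of Python before a long recording run. The helpers below (`load_cameras` and `summarize` are hypothetical names, not part of this repository) simply read the file and list each camera's resolution and position:

```python
import json

def load_cameras(config_path: str) -> dict:
    """Return the per-camera settings from a recorder config file."""
    with open(config_path) as f:
        cfg = json.load(f)
    return cfg["cameras"]

def summarize(cameras: dict) -> list:
    """One line per camera: name, resolution, and world position."""
    return [
        f"{name}: {c['width']}x{c['height']} at ({c['x']:.1f}, {c['y']:.1f}, {c['z']:.1f})"
        for name, c in sorted(cameras.items())
    ]
```

For the example config above, this would report two cameras, e.g. `Hard_Town3_Test_00: 1920x1080 at (-78.0, 8.3, 39.0)`.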
- Start recording by running
  ```shell
  python start_recording.py --config_name=test.json --auto_save=True --flag_show_windows=True --recording_num_scale=0.5
  ```
  or
  ```shell
  ./start_recording.sh
  ```
## Shortcut Keys
| Key | Action |
|---|---|
| Esc | Quit |
| Insert | Insert a new camera |
| Enter | Save all camera parameters |
| Tab | Switch the cameras |
| 1 | Start image processing |
| 2 | Show 3D bounding boxes in the image |
| 3 | Show rectangles in the image |
| 4 | Show vehicle labels |
| w | Move forward |
| s | Move backward |
| a | Move left |
| d | Move right |
| q | Move up |
| e | Move down |
| i | Pitch up |
| k | Pitch down |
| j | Roll left |
| l | Roll right |
| u | Yaw left |
| p | Yaw right |
| Home | Update the camera level |
## Citation
```
@inproceedings{ShiJie20,
    author = {Shijie Sun and Naveed Akhtar and XiangYu Song and Huansheng Song and Ajmal Mian and Mubarak Shah},
    title = {Simultaneous Detection and Tracking with Motion Modelling for Multiple Object Tracking},
    booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
    year = {2020}
}

@inproceedings{Dosovitskiy17,
    title = {{CARLA}: {An} Open Urban Driving Simulator},
    author = {Alexey Dosovitskiy and German Ros and Felipe Codevilla and Antonio Lopez and Vladlen Koltun},
    booktitle = {Proceedings of the 1st Annual Conference on Robot Learning},
    pages = {1--16},
    year = {2017}
}
```
## License
This work is based on CARLA. It is also inspired by SSD and DAN.

CARLA-specific code is distributed under the MIT License. CARLA-specific assets are distributed under the CC-BY License. Note that UE4 itself follows its own license terms.