
SE-CFF

[S]tereo depth from [E]vents Cameras: [C]oncentrate and [F]ocus on the [F]uture

This is the official code repository for "Stereo Depth from Events Cameras: Concentrate and Focus on the Future" (CVPR 2022) by Yeong-oo Nam*, Mohammad Mostafavi*, Kuk-Jin Yoon, and Jonghyun Choi (corresponding author).

If you use any of this code, please cite both of the following publications:

@inproceedings{nam2022stereo,
  title     =  {Stereo Depth from Events Cameras: Concentrate and Focus on the Future},
  author    =  {Nam, Yeongwoo and Mostafavi, Mohammad and Yoon, Kuk-Jin and Choi, Jonghyun},
  booktitle =  {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year      =  {2022}
}
@inproceedings{mostafavi2021event,
  title     =  {Event-Intensity Stereo: Estimating Depth by the Best of Both Worlds},
  author    =  {Mostafavi, Mohammad and Yoon, Kuk-Jin and Choi, Jonghyun},
  booktitle =  {Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages     =  {4258--4267},
  year      =  {2021}
}


Prerequisites

The following sections list the requirements for training and evaluating the model.

Hardware

Tested on:

  • CPU - 2 x Intel(R) Xeon(R) Silver 4210R CPU @ 2.40GHz
  • RAM - 256 GB
  • GPU - 8 x NVIDIA A100 (40 GB)
  • SSD - Samsung MZ7LH3T8 (3.5 TB)

Software

Tested on:

Dataset

Download the DSEC dataset.

📂 Data structure

Our folder structure is as follows:

DSEC
├── train
│   ├── interlaken_00_c
│   │   ├── calibration
│   │   │   ├── cam_to_cam.yaml
│   │   │   └── cam_to_lidar.yaml
│   │   ├── disparity
│   │   │   ├── event
│   │   │   │   ├── 000000.png
│   │   │   │   ├── ...
│   │   │   │   └── 000536.png
│   │   │   └── timestamps.txt
│   │   └── events
│   │       ├── left
│   │       │   ├── events.h5
│   │       │   └── rectify_map.h5
│   │       └── right
│   │           ├── events.h5
│   │           └── rectify_map.h5
│   ├── ...
│   └── zurich_city_11_c                # same structure as train/interlaken_00_c
└── test
    ├── interlaken_00_a
    │   ├── calibration
    │   │   ├── cam_to_cam.yaml
    │   │   └── cam_to_lidar.yaml
    │   ├── events
    │   │   ├── left
    │   │   │   ├── events.h5
    │   │   │   └── rectify_map.h5
    │   │   └── right
    │   │       ├── events.h5
    │   │       └── rectify_map.h5
    │   └── interlaken_00_a.csv
    ├── ...
    └── zurich_city_15_a                # same structure as test/interlaken_00_a
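
For a quick sanity check of the download, here is a minimal Python sketch that opens one sequence with h5py and OpenCV. The dataset names (events/x, events/y, events/p, events/t, rectify_map) and the disparity encoding (uint16 PNG, disparity × 256, 0 = invalid) follow the published DSEC format; they are assumptions about this layout, so verify them against your copy before relying on this.

# Minimal inspection sketch for one DSEC sequence (assumed layout).
import h5py
import numpy as np
import cv2

seq = "DSEC/train/interlaken_00_c"

# Raw events: x, y, polarity, and timestamps live under the /events group.
with h5py.File(f"{seq}/events/left/events.h5", "r") as f:
    x = f["events/x"][:1000]   # pixel column
    y = f["events/y"][:1000]   # pixel row
    p = f["events/p"][:1000]   # polarity (0 or 1)
    t = f["events/t"][:1000]   # timestamp in microseconds
    print("first event:", x[0], y[0], p[0], t[0])

# Rectification map: per-pixel rectified coordinates, shape (H, W, 2).
with h5py.File(f"{seq}/events/left/rectify_map.h5", "r") as f:
    rectify_map = f["rectify_map"][:]
    print("rectify_map shape:", rectify_map.shape)

# Ground-truth disparity: 16-bit PNG storing disparity * 256; 0 is invalid.
disp_png = cv2.imread(f"{seq}/disparity/event/000000.png", cv2.IMREAD_ANYDEPTH)
disparity = disp_png.astype(np.float32) / 256.0
valid = disp_png > 0
print("mean ground-truth disparity:", disparity[valid].mean())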

Getting started

Build docker image

git clone [repo_path]
cd event-stereo
docker build -t event-stereo ./

Run docker container

docker run \
    -v <PATH/TO/REPOSITORY>:/root/code \
    -v <PATH/TO/DATA>:/root/data \
    -it --gpus=all --ipc=host \
    event-stereo

Build deformable convolution

cd /root/code/src/components/models/deform_conv && bash build.sh

Training

cd /root/code/scripts
bash distributed_main.sh

Inference

cd /root/code
python3 inference.py \
    --data_root /root/data \
    --checkpoint_path <PATH/TO/CHECKPOINT.PTH> \
    --save_root <PATH/TO/SAVE/RESULTS>
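
If you post-process the saved predictions, the sketch below shows the disparity PNG convention the DSEC benchmark expects (uint16, disparity scaled by 256). Whether inference.py writes exactly this encoding is an assumption; check one saved file before relying on it.

# Hypothetical helpers for DSEC-style disparity PNGs (uint16, disparity * 256).
import numpy as np
import cv2

def save_disparity(path, disparity):
    # Encode a float disparity map as a 16-bit PNG (DSEC convention).
    disp_u16 = np.clip(disparity * 256.0, 0, 65535).astype(np.uint16)
    cv2.imwrite(path, disp_u16)

def load_disparity(path):
    # Decode a 16-bit disparity PNG back to float pixel disparities.
    return cv2.imread(path, cv2.IMREAD_ANYDEPTH).astype(np.float32) / 256.0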

Pre-trained model

⚙️ You can download the pre-trained model from here.
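
To peek at the checkpoint before running inference, a standard PyTorch load is enough. A minimal sketch, assuming the file is an ordinary .pth archive; its key layout is not documented here, so the code only lists what it finds.

# Inspect the downloaded checkpoint (assumed to be a standard PyTorch .pth).
import torch

ckpt = torch.load("checkpoint.pth", map_location="cpu")
if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
else:
    print("checkpoint object type:", type(ckpt))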

What is not ready yet

Some modules introduced in the paper are not ready yet. We will release them soon.

  • Intensity image pre-processing code.
  • E+I Model code.
  • E+I train & test code.
  • Future event distillation code.

Benchmark website

The DSEC website holds the benchmarks and competitions.

🚀 Our CVPR 2022 results (this repo) are available on the DSEC website. We ranked higher than the state-of-the-art method from ICCV 2021.

🚀 Our ICCV 2021 paper, Event-Intensity Stereo: Estimating Depth by the Best of Both Worlds, ranked first in the CVPR 2021 competition hosted by the CVPR 2021 Workshop on Event-Based Vision. A YouTube video from the competition is also available.

License

MIT license.