SfM-TTR: Using Structure from Motion for Test-Time Refinement of Single-View Depth Networks

Code for refining depth estimation networks using COLMAP reconstructions.

Setup

Install the required dependencies for SfM-TTR (for model-specific dependencies, check the corresponding model repositories):

conda install pytorch==1.12 torchvision -c pytorch
conda install -c conda-forge statsmodels matplotlib yacs
conda install tqdm
pip install pytorch-lightning
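
To quickly verify the installation (optional; this only checks that the packages above import correctly), you can run:

python3 -c "import torch; print(torch.__version__)"
python3 -c "import pytorch_lightning, statsmodels, yacs, matplotlib, tqdm; print('dependencies OK')"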

This code is distributed with nested repositories of AdaBins, ManyDepth, CADepth, and DIFFNet.

We provide the DIFFNet weights so you can quickly test our method. For the remaining networks, all code is included, but you need to download their weights manually. Once downloaded, place them in SfM-TTR/sfmttr/models/{model_name}/weights/.
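
For example, a downloaded weight file could be placed as follows (the adabins folder name and the weight filename are illustrative; use the names expected by the corresponding model code):

# hypothetical example: placing downloaded AdaBins weights
mkdir -p SfM-TTR/sfmttr/models/adabins/weights/
mv ~/Downloads/AdaBins_kitti.pt SfM-TTR/sfmttr/models/adabins/weights/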

Data

To quickly test our method, we include the input images, ground truth, and sparse reconstruction of one scene within this code (SfM-TTR/example_sequence/).

To run and evaluate SfM-TTR with the complete KITTI dataset, please download the KITTI raw data and the KITTI ground truth. You also need to run COLMAP on each sequence to obtain a sparse reconstruction.
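
Below is a minimal sketch of one possible COLMAP pipeline for a single sequence (the image folder, matcher choice, and output layout are assumptions; the exact COLMAP settings used for the paper may differ):

# hypothetical COLMAP sparse reconstruction for one KITTI sequence
SEQ=2011_09_26_drive_0002_sync
DB=./colmap_reconstructions/$SEQ/database.db
IMAGES=./kitti_raw/2011_09_26/$SEQ/image_02/data
mkdir -p ./colmap_reconstructions/$SEQ
colmap feature_extractor --database_path $DB --image_path $IMAGES --ImageReader.single_camera 1
colmap sequential_matcher --database_path $DB
colmap mapper --database_path $DB --image_path $IMAGES --output_path ./colmap_reconstructions/$SEQ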

Running

You can run the provided example of SfM-TTR with:

python3 main.py \
  --kitti-raw-path ./example_sequence/kitti_raw/ \
  --kitti-gt-path ./example_sequence/kitti_gt  \
  --reconstruction-path ./example_sequence/colmap_reconstructions/ \
  --sequence 2011_09_26_drive_0002_sync
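
To run on the full KITTI dataset, point the same flags at your complete raw data, ground truth, and COLMAP reconstruction folders; for example (the paths and sequence name are illustrative):

python3 main.py \
  --kitti-raw-path /path/to/kitti_raw/ \
  --kitti-gt-path /path/to/kitti_gt \
  --reconstruction-path /path/to/colmap_reconstructions/ \
  --sequence 2011_09_26_drive_0009_sync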
