PanoVPR: Towards Unified Perspective-to-Equirectangular Visual Place Recognition via Sliding Windows across the Panoramic View

Ze Shi* · Hao Shi* · Kailun Yang · Zhe Yin · Yining Lin · Kaiwei Wang

Update

  • [2023-11] ⚙️ Code Release
  • [2023-08] 🎉 PanoVPR is accepted to the 26th IEEE International Conference on Intelligent Transportation Systems (ITSC 2023).
  • [2023-03] 🚧 Initialize the repo and release the arXiv version

Introduction

We propose PanoVPR, a visual place recognition framework that retrieves panoramic (equirectangular) database images using perspective query images. To narrow the model's observation range over the large field-of-view panoramas, we slide a window across each panoramic database image and match the query against the resulting windows. PanoVPR achieves promising results on the derived Pitts250K-P2E dataset and on YQ360, a dataset collected in a real-world scenario.

For more details, please check our arXiv paper.

Sliding window strategy

[Figure: Sliding window strategy]
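
The idea can be sketched in a few lines of Python. The snippet below is a minimal illustration under our own assumptions (square windows as wide as the panorama's height, equal strides, wrap-around at the 360° seam), not the repository's actual implementation:

# Minimal sketch of the sliding-window split (illustrative only).
# Assumes an equirectangular panorama `pano` of shape (H, W, 3); each of the
# `split_nums` windows is H pixels wide and steps evenly around the 360° view.
import numpy as np

def sliding_windows(pano, split_nums):
    h, w, _ = pano.shape
    win_w = h                 # assumed square windows
    stride = w // split_nums  # equal horizontal steps around the panorama
    # Pad on the right with the left edge so windows can wrap across the seam.
    wrapped = np.concatenate([pano, pano[:, :win_w]], axis=1)
    return [wrapped[:, i * stride : i * stride + win_w] for i in range(split_nums)]

Each window can then be encoded separately, so a perspective query only needs to be compared against window-sized crops rather than the full panorama.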

Qualitative results

[Figure: Qualitative results]

Usage

Dataset Preparation

Before starting, you need to download the Pitts250K-P2E dataset and the YQ360 dataset [OneDrive Link][BaiduYun Link].

If the link is out of date, please email office_makeit@163.com for the latest available link!

Afterwards, set the --datasets_folder parameter in parser.py to point at the folder containing the downloaded datasets.
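
For reference, a datasets-folder argument declared with argparse typically looks like the hypothetical sketch below; consult the actual parser.py in the repository for the authoritative definition:

# Hypothetical sketch; parser.py in the repository is the authoritative source.
import argparse

parser = argparse.ArgumentParser(description="PanoVPR")
parser.add_argument("--datasets_folder", type=str,
                    default="/path/to/datasets",  # folder holding Pitts250K-P2E and YQ360
                    help="root folder of the downloaded datasets")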

Setup

First, create a Conda environment from environment.yml, then activate it.

conda env create -f environment.yml --prefix /path/to/env
conda activate /path/to/env  # with --prefix, activate by path rather than by name

Train

To train the network, configure the training setup and the dataset by specifying parameters such as --backbone, --split_nums, and --dataset_name on the command line.

Adjust the remaining parameters to suit your setup. By default, outputs are stored in the ./logs/{save_dir} folder.

Please note that the --title parameter must be specified in the command line.

# Train on Pitts250K-P2E
python train.py --title swinTx24 \
                --save_dir clean_branch_test \
                --backbone swin_tiny \
                --split_nums 24 \
                --dataset_name pitts250k \
                --cache_refresh_rate 125 \
                --neg_sample 100 \
                --queries_per_epoch 2000

Inference

For inference, specify the absolute path of the directory containing best_model.pth via the --resume parameter.

# Val and Test On Pitts250K-P2E
python test.py --title test_swinTx24 \
               --save_dir clean_branch_test \
               --backbone swin_tiny \
               --split_nums 24 \
               --dataset_name pitts250k \
               --cache_refresh_rate 125 \
               --neg_sample 100 \
               --queries_per_epoch 2000 \
               --resume <absolute path containing best_model.pth>

Acknowledgments

We thank the authors of the open-source repositories whose code this project builds upon.

Cite Our Work

If you find our work useful, please cite it as:

@INPROCEEDINGS{shi2023panovpr,
  author={Shi, Ze and Shi, Hao and Yang, Kailun and Yin, Zhe and Lin, Yining and Wang, Kaiwei},
  booktitle={2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)}, 
  title={PanoVpr: Towards Unified Perspective-to-Equirectangular Visual Place Recognition via Sliding Windows Across the Panoramic View}, 
  year={2023},
  pages={1333-1340},
  doi={10.1109/ITSC57777.2023.10421857}
}
