SuperGlobal


Official repository of the ICCV 2023 paper "Global Features are All You Need for Image Retrieval and Reranking" 🚀🚀🚀

Image retrieval systems conventionally use a two-stage paradigm, leveraging global features for initial retrieval and local features for reranking. However, the scalability of this method is often limited due to the significant storage and computation cost incurred by local feature matching in the reranking stage. In this paper, we present SuperGlobal, a novel approach that exclusively employs global features for both stages, improving efficiency without sacrificing accuracy. SuperGlobal introduces key enhancements to the retrieval system, specifically focusing on the global feature extraction and reranking processes. For extraction, we identify sub-optimal performance when the widely-used ArcFace loss and Generalized Mean (GeM) pooling methods are combined and propose several new modules to improve GeM pooling. In the reranking stage, we introduce a novel method to update the global features of the query and top-ranked images by only considering feature refinement with a small set of images, thus being very compute and memory efficient. Our experiments demonstrate substantial improvements compared to the state of the art in standard benchmarks. Notably, on the Revisited Oxford+1M Hard dataset, our single-stage results improve by 7.1%, while our two-stage gain reaches 3.7% with a strong 64,865x speedup. Our two-stage system surpasses the current single-stage state-of-the-art by 16.3%, offering a scalable, accurate alternative for high-performing image retrieval systems with minimal time overhead.
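For reference, standard GeM pooling (the baseline our pooling modules build on) reduces a CNN feature map to a single global descriptor via a per-channel generalized mean. A minimal PyTorch sketch of the vanilla operation, not the improved variant proposed in the paper:

```python
import torch
import torch.nn.functional as F

def gem_pool(feature_map: torch.Tensor, p: float = 3.0, eps: float = 1e-6) -> torch.Tensor:
    """Standard Generalized Mean (GeM) pooling over the spatial dimensions.

    feature_map: (B, C, H, W) CNN activations; returns (B, C) global descriptors.
    p -> 1 recovers average pooling, large p approaches max pooling.
    """
    return feature_map.clamp(min=eps).pow(p).mean(dim=(-2, -1)).pow(1.0 / p)

# Example: pool a ResNet-style activation map and L2-normalize the descriptor.
x = torch.randn(2, 2048, 7, 7).abs()
descriptor = F.normalize(gem_pool(x), dim=-1)
```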

Leveraging global features only, our series of methods contributes to state-of-the-art performance on ROxford (+1M), RParis (+1M), and the GLD test set, with orders-of-magnitude speedup.
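To give a flavor of reranking with global features only, the sketch below refines a query descriptor by aggregating the descriptors of its top-ranked neighbors and then re-sorts the database. The similarity-weighted update here is an illustrative assumption, not the exact refinement rule implemented in this repository:

```python
import torch
import torch.nn.functional as F

def rerank_with_global_features(query: torch.Tensor,
                                database: torch.Tensor,
                                k: int = 10) -> torch.Tensor:
    """Refine the query descriptor with its top-k neighbors and re-rank.

    query:    (D,) L2-normalized global descriptor.
    database: (N, D) L2-normalized global descriptors.
    Returns database indices sorted by similarity to the refined query.
    """
    sims = database @ query                     # (N,) cosine similarities
    topk = sims.topk(k).indices                 # initial top-k candidates
    # Similarity-weighted aggregation of neighbor descriptors (assumed scheme).
    weights = sims[topk].clamp(min=0).unsqueeze(1)
    refined = F.normalize(query + (weights * database[topk]).sum(0), dim=0)
    return (database @ refined).argsort(descending=True)
```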

News

9/14/2023: Evaluation code on the 1M distractor set is released!

Demo


Reproducing the Results

  1. Download Revisited Oxford & Paris from https://github.com/filipradenovic/revisitop and save them to ./revisitop.

  2. Download the CVNet pretrained weights from https://github.com/sungonce/CVNet and save them to ./weights.

  3. Run

python test.py MODEL.DEPTH [50, 101] TEST.WEIGHTS ./weights TEST.DATA_DIR ./revisitop SupG.gemp SupG.rgem SupG.sgem SupG.relup SupG.rerank SupG.onemeval False

This should reproduce the exact results reported in the paper; they are written to log.txt.

Evaluation on 1M distractors

  1. Run python ./extract_rop1m.py --weight [path-to-weight] --depth [depth], which produces a .pth file in the current directory.

  2. Run python test.py MODEL.DEPTH [50, 101] TEST.WEIGHTS ./weights TEST.DATA_DIR ./revisitop SupG.gemp SupG.rgem SupG.sgem SupG.relup SupG.rerank SupG.onemeval

  3. See results in log.txt.
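If you want to inspect the extracted distractor descriptors before step 2, the sketch below assumes the .pth file stores the descriptors as a single tensor (or a dict of tensors); the file name and layout are placeholders, so adapt them to what extract_rop1m.py actually writes:

```python
import torch
import torch.nn.functional as F

# Placeholder file name and layout: adjust to the .pth produced by extract_rop1m.py.
features = torch.load("rop1m_features.pth", map_location="cpu")
if isinstance(features, dict):              # some scripts save a dict of tensors
    features = next(iter(features.values()))

distractors = F.normalize(features.float(), dim=-1)
print(distractors.shape)                    # e.g. (number of distractors, descriptor dim)

# Appending these descriptors to the gallery turns the ROxford/RParis
# evaluation into the +1M setting: similarities are computed against both
# the original gallery and the distractor descriptors.
```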

Application

If you would like to try our methods on other benchmarks or tasks, we recommend going over ./modules in this repository and plugging in the modules you need. They are easy to use and can be attached directly to your trained model.
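As a rough example of what plugging a module into a trained model can look like, the sketch below attaches a GeM-style pooling head to an off-the-shelf ResNet; the class here is a placeholder, not one of the actual components in ./modules:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class GeMPool(nn.Module):
    """Placeholder pooling module standing in for a ./modules component."""
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p, self.eps = p, eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = x.clamp(min=self.eps).pow(self.p).mean(dim=(-2, -1)).pow(1.0 / self.p)
        return F.normalize(pooled, dim=-1)

backbone = resnet50(weights=None)   # or your own trained model
backbone.avgpool = GeMPool()        # replace global average pooling
backbone.fc = nn.Identity()         # expose the pooled descriptor directly
with torch.no_grad():
    descriptor = backbone(torch.randn(1, 3, 224, 224))  # (1, 2048)
```

In practice you would swap in the corresponding component from ./modules instead of the placeholder class, keeping the rest of your pipeline unchanged.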

Acknowledgement

Many thanks to CVNet and DELG-pytorch, whose resources helped us build this repository and inspired us to publish this work!

Contact us

Feel free to reach out to our co-corresponding authors at shaoshihao@pku.edu.cn and bingyi@google.com.
