TransMatcher

TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification

This is the official PyTorch code for TransMatcher, proposed in our paper [1].

For further details, please read our paper, as well as the poster here.

Usage

This code is based on QAConv 2.0, and the requirements and usage are quite similar. For a quick run, please try demo.sh. Ignore the accuracy reported by this demo; it only serves to validate that everything runs correctly.
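
To give a feel for what the model does before diving into the full code, below is a minimal, illustrative sketch of the matching idea behind TransMatcher: scoring a query-gallery pair of feature maps by cross feature matching, global max pooling, and an MLP head. This is not the official implementation, and all module and parameter names (SimpleCrossMatcher, proj_q, score_head, ...) are hypothetical.

```python
# Illustrative sketch only (not the official TransMatcher API): a simplified
# cross matcher that scores a query/gallery feature-map pair by pairwise
# feature matching, global max pooling, and an MLP head.
import torch
import torch.nn as nn

class SimpleCrossMatcher(nn.Module):
    def __init__(self, dim=512, hidden=512):
        super().__init__()
        self.proj_q = nn.Linear(dim, dim)    # project query features
        self.proj_g = nn.Linear(dim, dim)    # project gallery features
        self.score_head = nn.Sequential(     # MLP mapping pooled matching scores to a similarity value
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, feat_q, feat_g):
        # feat_q, feat_g: (B, N, dim) flattened h*w feature maps of query/gallery images
        q = self.proj_q(feat_q)                          # (B, N, dim)
        g = self.proj_g(feat_g)                          # (B, N, dim)
        sim = torch.bmm(q, g.transpose(1, 2))            # (B, N, N) location-to-location matching scores
        pooled = sim.max(dim=2).values                   # (B, N) global max pooling over gallery locations
        scores = self.score_head(pooled.unsqueeze(-1))   # (B, N, 1) per-location scores
        return scores.mean(dim=(1, 2))                   # (B,) final similarity per image pair

# Usage: in practice the features would come from a CNN backbone, as in QAConv 2.0.
matcher = SimpleCrossMatcher(dim=512)
fq = torch.randn(4, 24 * 8, 512)   # 4 query feature maps with 24x8 spatial locations
fg = torch.randn(4, 24 * 8, 512)   # 4 gallery feature maps
print(matcher(fq, fg).shape)       # torch.Size([4])
```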

Performance

Performance (%) of TransMatcher under direct cross-dataset evaluation without transfer learning or domain adaptation:

| Training Data | Method | CUHK03-NP Rank-1 | CUHK03-NP mAP | Market-1501 Rank-1 | Market-1501 mAP | MSMT17 Rank-1 | MSMT17 mAP |
|---|---|---|---|---|---|---|---|
| Market | QAConv 2.0 | 16.4 | 15.7 | - | - | 41.2 | 15.0 |
| Market | TransMatcher | 22.2 | 21.4 | - | - | 47.3 | 18.4 |
| MSMT | QAConv 2.0 | 20.0 | 19.2 | 75.1 | 46.7 | - | - |
| MSMT | TransMatcher | 23.7 | 22.5 | 80.1 | 52.0 | - | - |
| MSMT (all) | QAConv 2.0 | 27.2 | 27.1 | 80.6 | 55.6 | - | - |
| MSMT (all) | TransMatcher | 31.9 | 30.7 | 82.6 | 58.4 | - | - |
| RandPerson | QAConv 2.0 | 14.8 | 13.4 | 74.0 | 43.8 | 42.4 | 14.4 |
| RandPerson | TransMatcher | 17.1 | 16.0 | 77.3 | 49.1 | 48.3 | 17.7 |
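
For reference, the Rank-1 and mAP numbers above follow the standard single-query re-ID protocol. Below is a minimal sketch of how these two metrics can be computed from a query-gallery distance matrix; it omits the same-camera filtering applied by the actual benchmarks, and the function name rank1_and_map is hypothetical.

```python
# Minimal sketch of Rank-1 / mAP computation from a query-gallery distance
# matrix. Real benchmarks (CUHK03-NP, Market-1501, MSMT17) additionally filter
# out same-camera, same-identity gallery samples; that detail is omitted here.
import numpy as np

def rank1_and_map(dist, q_ids, g_ids):
    """dist: (num_query, num_gallery) distances; q_ids/g_ids: identity labels."""
    order = np.argsort(dist, axis=1)             # gallery indices sorted by ascending distance
    matches = (g_ids[order] == q_ids[:, None])   # boolean match matrix in ranked order
    rank1 = matches[:, 0].mean()                 # fraction of queries whose top match is correct
    aps = []
    for row in matches:
        hits = np.where(row)[0]
        if hits.size == 0:
            continue                             # skip queries with no correct gallery match
        precision = (np.arange(len(hits)) + 1) / (hits + 1)  # precision at each correct hit
        aps.append(precision.mean())             # average precision for this query
    return rank1, float(np.mean(aps))

# Example with random data (illustrative only):
dist = np.random.rand(10, 100)
q_ids = np.random.randint(0, 20, size=10)
g_ids = np.random.randint(0, 20, size=100)
print(rank1_and_map(dist, q_ids, g_ids))
```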

Contacts

Shengcai Liao
Inception Institute of Artificial Intelligence (IIAI)
shengcai.liao@inceptioniai.org

Citation

[1] Shengcai Liao and Ling Shao, "TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification." In Neural Information Processing Systems (NeurIPS), 2021.

@inproceedings{Liao-NeurIPS2021-TransMatcher,
  author    = {Shengcai Liao and Ling Shao},
  title     = {{TransMatcher: Deep Image Matching Through Transformers for Generalizable Person Re-identification}},
  booktitle = {Neural Information Processing Systems (NeurIPS)},
  year      = {2021}
}