FCOS: Fully Convolutional One-Stage Object Detection

FCOS: Fully Convolutional One-Stage Object Detection;
Zhi Tian, Chunhua Shen, Hao Chen, and Tong He;
In: Proc. Int. Conf. Computer Vision (ICCV), 2019.

arXiv preprint arXiv:1904.01355

FCOS: A Simple and Strong Anchor-free Object Detector;
Zhi Tian, Chunhua Shen, Hao Chen, and Tong He;
IEEE T. Pattern Analysis and Machine Intelligence (TPAMI), 2021.

arXiv preprint arXiv:2006.09214

BibTeX

Installation & Quick Start

No special setup is needed; the default installation instructions for this repository work as-is.
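As a quick-start sketch (the config names and paths below are assumptions based on the standard AdelaiDet/detectron2 layout; adjust them to your checkout), training and evaluation follow the usual `tools/train_net.py` pattern:

```bash
# Train FCOS_R_50_1x on 8 GPUs (config path is an assumption; adjust to your checkout)
OMP_NUM_THREADS=1 python tools/train_net.py \
    --config-file configs/FCOS-Detection/R_50_1x.yaml \
    --num-gpus 8 \
    OUTPUT_DIR training_dir/fcos_R_50_1x

# Evaluate a downloaded checkpoint on COCO
python tools/train_net.py \
    --config-file configs/FCOS-Detection/R_50_1x.yaml \
    --eval-only \
    MODEL.WEIGHTS /path/to/FCOS_R_50_1x.pth
```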

Models

COCO Object Detection Baselines with FCOS

| Name | inf. time | box AP | box AP (test-dev) | download |
|:---|:---:|:---:|:---:|:---:|
| FCOS_R_50_1x | 16 FPS | 38.7 | 38.8 | model |
| FCOS_MS_R_50_2x | 16 FPS | 41.0 | 41.4 | model |
| FCOS_MS_R_101_2x | 12 FPS | 43.1 | 43.2 | model |
| FCOS_MS_X_101_32x8d_2x | 6.6 FPS | 43.9 | 44.1 | model |
| FCOS_MS_X_101_64x4d_2x | 6.1 FPS | 44.7 | 44.8 | model |
| FCOS_MS_X_101_32x8d_dcnv2_2x | 4.6 FPS | 46.6 | 46.6 | model |

The following models use IoU (instead of "center-ness") to predict the box quality, i.e. they set MODEL.FCOS.BOX_QUALITY = "iou"; a command sketch for this override follows the table.

| Name | inf. time | box AP | download |
|:---|:---:|:---:|:---:|
| FCOS_R_50_1x_iou | 16 FPS | 39.4 | model |
| FCOS_MS_R_50_2x_iou | 16 FPS | 41.5 | model |
| FCOS_MS_R_101_2x_iou | 12 FPS | 43.5 | model |
| FCOS_MS_X_101_32x8d_2x_iou | 6.6 FPS | 44.5 | model |
| FCOS_MS_X_101_32x8d_2x_dcnv2_iou | 4.6 FPS | 47.4 | model |
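The box-quality switch is an ordinary config override. A minimal sketch of training one of the IoU-quality variants from the command line (MODEL.FCOS.BOX_QUALITY is the key quoted above; the config path is an assumption):

```bash
# Predict box quality with IoU instead of center-ness
# (config path is an assumption; MODEL.FCOS.BOX_QUALITY is the key noted above)
python tools/train_net.py \
    --config-file configs/FCOS-Detection/R_50_1x.yaml \
    --num-gpus 8 \
    MODEL.FCOS.BOX_QUALITY "iou" \
    OUTPUT_DIR training_dir/fcos_R_50_1x_iou
```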

"MS": the models are trained with multi-scale data augmentation.

FCOS Real-time Models

| Name | inf. time | box AP | box AP (test-dev) | download |
|:---|:---:|:---:|:---:|:---:|
| FCOS_RT_MS_DLA_34_4x_shtw | 52 FPS | 39.1 | 39.2 | model |
| FCOS_RT_MS_DLA_34_4x | 46 FPS | 40.3 | 40.3 | model |
| FCOS_RT_MS_R_50_4x | 38 FPS | 40.2 | 40.2 | model |

If you prefer BN in the FCOS heads, please try the following models (a config-override sketch follows the table).

| Name | inf. time | box AP | box AP (test-dev) | download |
|:---|:---:|:---:|:---:|:---:|
| FCOS_RT_MS_DLA_34_4x_shtw_bn | 52 FPS | 38.9 | 39.1 | model |
| FCOS_RT_MS_DLA_34_4x_bn | 48 FPS | 39.4 | 39.9 | model |
| FCOS_RT_MS_R_50_4x_bn | 40 FPS | 39.3 | 39.7 | model |
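If you want to change the head normalization yourself rather than use the released _bn checkpoints, the head norm is a config value. A sketch assuming the key is MODEL.FCOS.NORM (both the key and the config path are assumptions; check the config defaults in your checkout for the exact name and accepted values):

```bash
# Use BN instead of the default GN in the FCOS heads
# (MODEL.FCOS.NORM and the config path are assumptions; verify against the config defaults)
python tools/train_net.py \
    --config-file configs/FCOS-Detection/R_50_1x.yaml \
    --num-gpus 8 \
    MODEL.FCOS.NORM "BN"
```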

Inference time is measured on an NVIDIA 1080Ti with batch size 1. The real-time models use a shorter side of 512 pixels for inference.
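To evaluate at the same 512-pixel shorter side used for the timings above, the test resolution can be overridden with the standard detectron2 key INPUT.MIN_SIZE_TEST (the config and weight paths below are placeholders):

```bash
# Evaluate a real-time model at a 512-pixel shorter side, matching the timing setup
# (config and weight paths are placeholders; INPUT.MIN_SIZE_TEST is the standard detectron2 key)
python tools/train_net.py \
    --config-file configs/FCOS-Detection/FCOS_RT/MS_R_50_4x.yaml \
    --eval-only \
    MODEL.WEIGHTS /path/to/FCOS_RT_MS_R_50_4x.pth \
    INPUT.MIN_SIZE_TEST 512
```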

Disclaimer:

If the number of foreground samples is small or unstable, please set MODEL.FCOS.LOSS_NORMALIZER_CLS to "moving_fg", which is more stable than normalizing the loss with the number of foreground samples in this case.
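A sketch of applying this setting from the command line (MODEL.FCOS.LOSS_NORMALIZER_CLS is the key quoted in the note above; the config path is a placeholder):

```bash
# Normalize the classification loss with a moving-average foreground count
# (MODEL.FCOS.LOSS_NORMALIZER_CLS is the key from the note above; config path is a placeholder)
python tools/train_net.py \
    --config-file configs/FCOS-Detection/R_50_1x.yaml \
    --num-gpus 8 \
    MODEL.FCOS.LOSS_NORMALIZER_CLS "moving_fg"
```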

Citing FCOS

If you use FCOS in your research or wish to refer to the baseline results, please use the following BibTeX entries.

@inproceedings{tian2019fcos,
  title     =  {{FCOS}: Fully Convolutional One-Stage Object Detection},
  author    =  {Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong},
  booktitle =  {Proc. Int. Conf. Computer Vision (ICCV)},
  year      =  {2019}
}
@article{tian2021fcos,
  title   =  {{FCOS}: A Simple and Strong Anchor-free Object Detector},
  author  =  {Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong},
  journal =  {IEEE T. Pattern Analysis and Machine Intelligence (TPAMI)},
  year    =  {2021}
}