May 2020

tl;dr: Improvement over SENet.

## Overall impression

The channel attention module is very much like SENet's but more concise. The spatial attention module concatenates mean pooling and max pooling across the channel dimension and blends them with a convolution.

The channel and spatial attention maps are then applied sequentially to each feature map.
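
A minimal PyTorch sketch of the two modules and their sequential application, written from the description above; the reduction ratio, kernel size, and class names are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SENet-like channel attention using average- and max-pooled descriptors."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling -> shared MLP
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling -> shared MLP
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale                     # reweight each channel


class SpatialAttention(nn.Module):
    """Spatial attention: concat mean/max across channels, blend with a conv."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # mean pooling across channels
        mx, _ = x.max(dim=1, keepdim=True)   # max pooling across channels
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale                     # reweight each spatial location


class CBAMBlock(nn.Module):
    """Channel attention first, then spatial attention, applied sequentially."""

    def __init__(self, channels: int):
        super().__init__()
        self.channel_att = ChannelAttention(channels)
        self.spatial_att = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.spatial_att(self.channel_att(x))


# Example usage on a dummy feature map.
out = CBAMBlock(256)(torch.randn(1, 256, 32, 32))
```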

The spatial attention module (SAM) is modified in YOLOv4 into a point-wise attention operation.
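
A sketch of that point-wise variant, assuming the pooled descriptors are dropped and a convolution is applied directly to the full feature map before the sigmoid gate; the 1x1 kernel size here is an assumption for illustration.

```python
import torch
import torch.nn as nn


class PointwiseSAM(nn.Module):
    """Point-wise spatial attention: no pooling, conv output gates every position."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The conv output itself serves as the attention map (per position, per channel).
        return x * torch.sigmoid(self.conv(x))
```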

## Key ideas

- Summaries of the key ideas

## Technical details

- Summary of technical details

## Notes

- Questions and notes on how to improve/revise the current work