
SHIKE

Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation

Authors: Yan Jin, Mengke Li, Yang Lu*, Yiu-ming Cheung, Hanzi Wang

[Figure: overall framework of SHIKE]

This is the repository of the CVPR 2023 paper "Long-Tailed Visual Recognition via Self-Heterogeneous Integration with Knowledge Excavation." We find that features at different depths of a deep neural network have different preferences toward the long-tailed distribution. SHIKE is designed as a Mixture of Experts (MoE) method that fuses features from different depths and enables knowledge transfer among experts, effectively boosting performance on long-tailed visual recognition.
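To make the fusion idea concrete, below is a minimal sketch (not the authors' implementation) of a depth-fused expert head: each expert combines a pooled intermediate-stage feature with the deepest feature before classifying, and the expert logits are averaged. All names here (`DepthFusedExperts`, `feat_dims`, and the fusion-by-addition choice) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class DepthFusedExperts(nn.Module):
    """Hypothetical MoE head: one expert per intermediate depth."""

    def __init__(self, feat_dims, num_classes):
        super().__init__()
        deep_dim = feat_dims[-1]
        # Project each shallow feature to the deepest feature's dimension.
        self.projs = nn.ModuleList(
            nn.Linear(d, deep_dim) for d in feat_dims[:-1]
        )
        self.experts = nn.ModuleList(
            nn.Linear(deep_dim, num_classes) for _ in feat_dims[:-1]
        )

    def forward(self, feats):
        # feats: list of pooled features, ordered shallow to deep.
        deep = feats[-1]
        logits = [
            expert(deep + proj(shallow))  # fuse depth-specific feature
            for shallow, proj, expert in zip(feats[:-1], self.projs, self.experts)
        ]
        return torch.stack(logits).mean(dim=0)  # aggregate expert predictions

# Usage with dummy pooled features from three backbone stages:
feats = [torch.randn(8, 256), torch.randn(8, 512), torch.randn(8, 2048)]
model = DepthFusedExperts([256, 512, 2048], num_classes=100)
print(model(feats).shape)  # torch.Size([8, 100])
```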

Requirements

```
python  3.7.7 or above
torch   1.11.0 or above
```

Reproducibility

Use the requirements file in this repo to create a virtual environment. Reset the seed to 0 (line 49 in cifarTrain.py) and you should obtain the expected results.
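For reference, here is a minimal sketch of the usual PyTorch seeding recipe. The repository's cifarTrain.py sets its own seed around line 49; this snippet only illustrates what fully deterministic seeding typically involves and is not copied from that file.

```python
import random
import numpy as np
import torch

def set_seed(seed: int = 0):
    # Seed every RNG the training loop may touch.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN kernels.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(0)
```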

The implementation for all datasets is still being reorganized...

Stay tuned~

Acknowledgement

Data augmentation in SHIKE mainly follows BalancedMetaSoftmax and PaCo.
