# MAE

> [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/abs/2111.06377)

## Abstract

This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3× or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pretraining and shows promising scaling behavior.
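The core mechanic described above (per-sample random masking at a 75% ratio, so the encoder only ever sees the visible quarter of the tokens) is easy to sketch. The following minimal PyTorch snippet illustrates the idea and is not mmpretrain's implementation; `random_masking` and its signature are hypothetical names for this sketch:

```python
import torch


def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """Sketch of MAE-style per-sample random patch masking.

    patches: (B, N, D) sequence of embedded image patches.
    Returns the visible subset, a binary mask over the original
    order (1 = masked), and the indices that restore that order.
    """
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))

    # Draw one random permutation of patch indices per sample.
    noise = torch.rand(B, N)
    ids_shuffle = noise.argsort(dim=1)
    ids_restore = ids_shuffle.argsort(dim=1)  # inverse permutation

    # Keep the first n_keep shuffled patches; the encoder sees only these.
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep[:, :, None].expand(-1, -1, D))

    # Binary mask in the original patch order: 0 = kept, 1 = masked.
    mask = torch.ones(B, N)
    mask[:, :n_keep] = 0
    mask = torch.gather(mask, 1, ids_restore)
    return visible, mask, ids_restore


# At the paper's 75% ratio, a 224x224 image with 16x16 patches
# (14x14 = 196 tokens) leaves only 49 tokens for the encoder.
vis, mask, _ = random_masking(torch.rand(2, 196, 768))
print(vis.shape)        # torch.Size([2, 49, 768])
print(mask.sum(dim=1))  # tensor([147., 147.]) -- masked patches per sample
```

The reconstruction loss is then computed on the masked patches only, which is what makes the high masking ratio a nontrivial self-supervisory task.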

## How to use it?

**Predict image**

```python
from mmpretrain import inference_model

predict = inference_model('vit-base-p16_mae-300e-pre_8xb128-coslr-100e_in1k', 'demo/bird.JPEG')
print(predict['pred_class'])
print(predict['pred_score'])
```
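Here `inference_model` resolves the model name against the mmpretrain model zoo, so the fine-tuned checkpoint is fetched automatically on first use; any image path can stand in for `demo/bird.JPEG`.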

**Use the model**

```python
import torch
from mmpretrain import get_model

model = get_model('mae_vit-base-p16_8xb512-amp-coslr-300e_in1k', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
print(type(out))

# To extract features.
feats = model.extract_feat(inputs)
print(type(feats))
```

**Train/Test Command**

Prepare your dataset according to the docs.

Train:

```shell
python tools/train.py configs/mae/mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py
```

Test:

```shell
python tools/test.py configs/mae/benchmarks/vit-base-p16_8xb128-coslr-100e_in1k.py None
```
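The commands above launch single-process runs, while the `8xb512`/`8xb128` settings in the config names assume 8 GPUs. A multi-GPU launch, assuming the standard `tools/dist_train.sh` wrapper that OpenMMLab repositories ship (taking a config path and a GPU count), would look like:

```shell
bash tools/dist_train.sh configs/mae/mae_vit-base-p16_8xb512-amp-coslr-300e_in1k.py 8
```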

## Models and results
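In the config names below, `8xb512` denotes 8 GPUs with a per-GPU batch size of 512, `amp` automatic mixed precision, `coslr` a cosine learning-rate schedule, and `300e`/`400e`/`800e`/`1600e` the number of training epochs.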

### Pretrained models

| Model | Params (M) | Flops (G) | Config | Download |
| :--- | :---: | :---: | :---: | :---: |
| mae_vit-base-p16_8xb512-amp-coslr-300e_in1k | 111.91 | 17.58 | config | model \| log |
| mae_vit-base-p16_8xb512-amp-coslr-400e_in1k | 111.91 | 17.58 | config | model \| log |
| mae_vit-base-p16_8xb512-amp-coslr-800e_in1k | 111.91 | 17.58 | config | model \| log |
| mae_vit-base-p16_8xb512-amp-coslr-1600e_in1k | 111.91 | 17.58 | config | model \| log |
| mae_vit-large-p16_8xb512-amp-coslr-400e_in1k | 329.54 | 61.60 | config | model \| log |
| mae_vit-large-p16_8xb512-amp-coslr-800e_in1k | 329.54 | 61.60 | config | model \| log |
| mae_vit-large-p16_8xb512-amp-coslr-1600e_in1k | 329.54 | 61.60 | config | model \| log |
| mae_vit-huge-p16_8xb512-amp-coslr-1600e_in1k | 657.07 | 167.40 | config | model \| log |

### Image Classification on ImageNet-1k

| Model | Pretrain | Params (M) | Flops (G) | Top-1 (%) | Config | Download |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: |
| vit-base-p16_mae-300e-pre_8xb128-coslr-100e_in1k | MAE 300-Epochs | 86.57 | 17.58 | 83.10 | config | N/A |
| vit-base-p16_mae-400e-pre_8xb128-coslr-100e_in1k | MAE 400-Epochs | 86.57 | 17.58 | 83.30 | config | N/A |
| vit-base-p16_mae-800e-pre_8xb128-coslr-100e_in1k | MAE 800-Epochs | 86.57 | 17.58 | 83.30 | config | N/A |
| vit-base-p16_mae-1600e-pre_8xb128-coslr-100e_in1k | MAE 1600-Epochs | 86.57 | 17.58 | 83.50 | config | model \| log |
| vit-base-p16_mae-300e-pre_8xb2048-linear-coslr-90e_in1k | MAE 300-Epochs | 86.57 | 17.58 | 60.80 | config | N/A |
| vit-base-p16_mae-400e-pre_8xb2048-linear-coslr-90e_in1k | MAE 400-Epochs | 86.57 | 17.58 | 62.50 | config | N/A |
| vit-base-p16_mae-800e-pre_8xb2048-linear-coslr-90e_in1k | MAE 800-Epochs | 86.57 | 17.58 | 65.10 | config | N/A |
| vit-base-p16_mae-1600e-pre_8xb2048-linear-coslr-90e_in1k | MAE 1600-Epochs | 86.57 | 17.58 | 67.10 | config | N/A |
| vit-large-p16_mae-400e-pre_8xb128-coslr-50e_in1k | MAE 400-Epochs | 304.32 | 61.60 | 85.20 | config | N/A |
| vit-large-p16_mae-800e-pre_8xb128-coslr-50e_in1k | MAE 800-Epochs | 304.32 | 61.60 | 85.40 | config | N/A |
| vit-large-p16_mae-1600e-pre_8xb128-coslr-50e_in1k | MAE 1600-Epochs | 304.32 | 61.60 | 85.70 | config | N/A |
| vit-large-p16_mae-400e-pre_8xb2048-linear-coslr-90e_in1k | MAE 400-Epochs | 304.33 | 61.60 | 70.70 | config | N/A |
| vit-large-p16_mae-800e-pre_8xb2048-linear-coslr-90e_in1k | MAE 800-Epochs | 304.33 | 61.60 | 73.70 | config | N/A |
| vit-large-p16_mae-1600e-pre_8xb2048-linear-coslr-90e_in1k | MAE 1600-Epochs | 304.33 | 61.60 | 75.50 | config | N/A |
| vit-huge-p14_mae-1600e-pre_8xb128-coslr-50e_in1k | MAE 1600-Epochs | 632.04 | 167.40 | 86.90 | config | model \| log |
| vit-huge-p14_mae-1600e-pre_32xb8-coslr-50e_in1k-448px | MAE 1600-Epochs | 633.03 | 732.13 | 87.30 | config | model \| log |

## Citation

```bibtex
@article{He2021MaskedAA,
  title={Masked Autoencoders Are Scalable Vision Learners},
  author={Kaiming He and Xinlei Chen and Saining Xie and Yanghao Li and
  Piotr Doll{\'a}r and Ross B. Girshick},
  journal={arXiv},
  year={2021}
}
```