> **Note:** This repository was archived by the owner on Oct 31, 2023. It is now read-only.


# Scaling Language-Image Pre-training via Masking

This repository contains the official JAX implementation of FLIP, as described in the paper *Scaling Language-Image Pre-training via Masking*:

```bibtex
@inproceedings{li2022scaling,
  title={Scaling Language-Image Pre-training via Masking},
  author={Li, Yanghao and Fan, Haoqi and Hu, Ronghang and Feichtenhofer, Christoph and He, Kaiming},
  booktitle={CVPR},
  year={2023}
}
```
  • The implementation is based on JAX and the models are trained on TPUs.
  • FLIP models are trained on LAION datasets including LAION-400M and LAION-2B.
  • Other links
    • For a PyTorch/GPU implementation, OpenCLIP has incorporated FLIP into their repo and trained a ViT-G/14 FLIP model with 80.1% ImageNet zero-shot accuracy (79.4% before model soups). See their blog for more information.
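FLIP's core idea is to randomly drop a large fraction of image patches before the vision encoder, so each training step processes far fewer tokens. The sketch below illustrates that masking step in numpy; the shapes and the 0.5 ratio are illustrative, not the repo's actual model code (`mask_ratio=0.0` corresponds to the "unmasked tuning" setting described later):

```python
import numpy as np

def random_mask_patches(patches, mask_ratio, rng):
    """Keep a random subset of patches, as in masked pre-training.

    patches: (num_patches, dim) patch embeddings for one image.
    Returns the kept patches and their indices; mask_ratio=0.0 keeps
    every patch.
    """
    n = patches.shape[0]
    n_keep = int(n * (1.0 - mask_ratio))
    keep = rng.permutation(n)[:n_keep]  # random subset, no replacement
    return patches[keep], keep

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 768))     # e.g. a 14x14 ViT-B/16 patch grid
kept, idx = random_mask_patches(patches, 0.5, rng)
print(kept.shape)                          # → (98, 768)
```

Since the encoder's cost scales with the number of tokens, halving the kept patches roughly halves per-step compute, which is what lets FLIP trade masking for larger batches or more samples seen.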

## Results and Pre-trained FLIP models

The following table provides zero-shot results on ImageNet-1K and links to pre-trained weights for the LAION datasets:

| model | data | sampled | zero-shot IN-1K | weights |
|---|---|---|---|---|
| ViT-B/16 | LAION-400M | 12.8B | 68.0 | - |
| ViT-L/16 | LAION-400M | 12.8B | 74.3 | - |
| ViT-H/14 | LAION-400M | 12.8B | 75.5 | - |
| ViT-L/16 | LAION-2B | 25.6B | 76.6 | download† |
| ViT-H/14 | LAION-2B | 25.6B | 78.8 | download† |

† The released ViT-L/16 and ViT-H/14 models were trained on LAION datasets in which faces were blurred as a legal requirement, resulting in a slight performance drop of 0.2-0.3%; their accuracies are 76.4% and 78.5%, respectively.

## Installation and data preparation

Please check INSTALL.md for installation instructions and data preparation.

## Training

Our FLIP models are trained on Google Cloud TPUs. To set up Google Cloud TPUs, please refer to their docs for single VM setup and Pod slice setup.

By default, we train ViT-B/L models using v3-256 TPUs and ViT-H models with v3-512 TPUs.

### 1. Pretraining FLIP models via masking

**Running locally**

```shell
export TFDS_DATA_DIR=gs://$GCS_TFDS_BUCKET/datasets
python3 main.py \
    --workdir=${workdir} \
    --config=$1 \
    --config.batch_size=256 \
    --config.laion_path=LAION_PATH
```
**Running on cloud**

```shell
gcloud alpha compute tpus tpu-vm ssh $VM_NAME --zone $ZONE \
    --worker=all --command "
export TFDS_DATA_DIR=gs://$GCS_TFDS_BUCKET/datasets &&
python3 main.py --workdir=$WORKDIR --config=configs/cfg_flip_large.py --config.laion_path=LAION_PATH"
```

### 2. Unmasked tuning

For unmasked tuning, we use the same configs except for the following parameter overrides:

```shell
python3 main.py --workdir=$WORKDIR --config=configs/cfg_flip_large.py \
    --config.laion_path=LAION_PATH \
    --config.model.model_img.mask_ratio=0.0 --config.learning_rate=4e-8 \
    --config.num_epochs=100 --config.warmup_epochs=20 \
    --config.pretrain_dir=${PRETRAIN}
```

To avoid out-of-memory issues, you may need to turn on activation checkpointing with `config.model.model_img.transformer.remat_policy=actcp` and reduce the batch size via `config.batch_size`.
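Activation checkpointing (rematerialization) trades compute for memory: intermediate activations are discarded during the forward pass and recomputed during the backward pass. A minimal JAX sketch of the idea, with a hypothetical two-layer block standing in for a transformer sub-block (not the repo's actual model code):

```python
import jax
import jax.numpy as jnp

def mlp_block(params, x):
    # Hypothetical sub-block: two matmuls with a nonlinearity in between.
    w1, w2 = params
    return jnp.tanh(x @ w1) @ w2

def loss(params, x):
    # jax.checkpoint (a.k.a. jax.remat) tells JAX not to store this block's
    # intermediate activations; they are recomputed in the backward pass.
    y = jax.checkpoint(mlp_block)(params, x)
    return jnp.sum(y ** 2)

key = jax.random.PRNGKey(0)
params = (jax.random.normal(key, (16, 32)),
          jax.random.normal(key, (32, 8)))
x = jnp.ones((4, 16))

# Gradients are identical with or without checkpointing; only peak memory
# during backprop differs.
g_ckpt = jax.grad(loss)(params, x)
```

Batch size is the other memory lever: activation memory scales roughly linearly with `config.batch_size`, so halving it roughly halves activation memory at the cost of more steps per epoch.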

## Evaluation

To evaluate the pre-trained models on zero-shot ImageNet-1K:

```shell
export TFDS_DATA_DIR=gs://$GCS_TFDS_BUCKET/datasets
python3 main.py \
    --workdir=${workdir} \
    --config=configs/cfg_flip_large.py \
    --config.pretrain_dir=$PRETRAIN_MODEL_PATH \
    --config.eval_only=True
```
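Zero-shot classification with a CLIP/FLIP-style model scores each image embedding against text embeddings of class-name prompts and picks the best match. A minimal numpy sketch of that scoring step (the embeddings here are random placeholders, not real model outputs):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Return the index of the class whose text embedding best matches the image.

    image_emb: (d,) image feature; text_embs: (num_classes, d) class features.
    Both are L2-normalized so the dot product equals cosine similarity.
    """
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = text_embs @ image_emb        # cosine similarity per class
    return int(np.argmax(logits))

# Toy example: class 1's text embedding is nearly aligned with the image.
rng = np.random.default_rng(0)
image = rng.normal(size=64)
texts = rng.normal(size=(3, 64))
texts[1] = image + 0.01 * rng.normal(size=64)  # near-duplicate of the image
print(zero_shot_classify(image, texts))         # → 1
```

In the actual evaluation, the text embeddings come from encoding prompt templates (e.g. "a photo of a {class}") for each of the 1,000 ImageNet class names, and accuracy is the fraction of validation images whose argmax matches the ground-truth label.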

## Acknowledgement

  • This repo is built on top of flax and t5x.

## License

This project is under the CC-BY-NC 4.0 license. See LICENSE for details.
