
[ECCV 2020] Official Pytorch implementation for "Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification". SOTA results for ZSL and GZSL


Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification (ECCV 2020)

(* denotes equal contribution)

Paper: https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123670477.pdf

Video Presentation: Short summary, Overview

Finetuned features: https://drive.google.com/drive/folders/13-eyljOmGwVRUzfMZIf_19HmCj1yShf1?usp=sharing

Webpage: https://akshitac8.github.io/tfvaegan/

Zero-shot learning strives to classify unseen categories for which no data is available during training. In the generalized variant, the test samples can further belong to seen or unseen categories. The state-of-the-art relies on Generative Adversarial Networks that synthesize unseen class features by leveraging class-specific semantic embeddings. During training, they generate semantically consistent features, but discard this constraint during feature synthesis and classification. We propose to enforce semantic consistency at all stages of (generalized) zero-shot learning: training, feature synthesis and classification. We first introduce a feedback loop, from a semantic embedding decoder, that iteratively refines the generated features during both the training and feature synthesis stages. The synthesized features together with their corresponding latent embeddings from the decoder are then transformed into discriminative features and utilized during classification to reduce ambiguities among categories. Experiments on (generalized) zero-shot object and action classification reveal the benefit of semantic consistency and iterative feedback, outperforming existing methods on six zero-shot learning benchmarks.

Overall Architecture:



Overall framework of TF-VAEGAN

A feedback module, which utilizes the auxiliary decoder during both the training and feature synthesis stages to improve the semantic quality of the synthesized features.

A discriminative feature transformation that utilizes the auxiliary decoder during the classification stage for enhancing zero-shot classification.
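The two components above can be sketched as follows. This is a minimal, illustrative sketch of the data flow only: layer sizes, module names and the additive feedback rule are placeholders chosen for clarity, not the actual tfvaegan code, and it uses a current PyTorch API rather than the 0.3.1 version pinned below.

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """Semantic embedding decoder: maps features back to attribute space."""
    def __init__(self, feat_dim=2048, hidden_dim=4096, att_dim=312):
        super().__init__()
        self.fc1 = nn.Linear(feat_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, att_dim)

    def forward(self, x):
        h = torch.relu(self.fc1(x))   # latent embedding, reused for feedback
        return self.fc2(h), h

class Generator(nn.Module):
    """Conditional generator: noise + attributes -> feature, with optional feedback."""
    def __init__(self, att_dim=312, noise_dim=312, hidden_dim=4096, feat_dim=2048):
        super().__init__()
        self.fc1 = nn.Linear(att_dim + noise_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, feat_dim)

    def forward(self, noise, att, feedback=None):
        h = torch.relu(self.fc1(torch.cat([noise, att], dim=1)))
        if feedback is not None:
            h = h + feedback          # decoder's latent embedding modulates the hidden layer
        return torch.sigmoid(self.fc2(h))

# One feedback iteration during feature synthesis:
G, Dec = Generator(), Decoder()
att = torch.rand(8, 312)              # class attributes (e.g. 312-d for CUB)
noise = torch.randn(8, 312)
x0 = G(noise, att)                    # initial synthesized features
_, latent = Dec(x0)                   # latent embedding from the decoder
x1 = G(noise, att, feedback=latent)   # refined features via the feedback loop

# Classification then uses the synthesized features concatenated with
# their latent embeddings as the discriminative representation:
disc_feat = torch.cat([x1, Dec(x1)[1]], dim=1)
```

The key idea this illustrates is that the decoder is not discarded after training: its latent embedding refines the generator's output during synthesis, and augments the features seen by the final classifier.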

Prerequisites

  • Python 3.6
  • PyTorch 0.3.1
  • torchvision 0.2.0
  • h5py 2.10
  • scikit-learn 0.22.1
  • scipy 1.4.1
  • numpy 1.18.1
  • numpy-base 1.18.1
  • pillow 5.1.0

Installation

The model is built in PyTorch 0.3.1 and tested in an Ubuntu 16.04 environment (Python 3.6, CUDA 9.0, cuDNN 7.5).

To install, follow these instructions:

conda create -n tfvaegan python=3.6
conda activate tfvaegan
pip install https://download.pytorch.org/whl/cu90/torch-0.3.1-cp36-cp36m-linux_x86_64.whl
pip install torchvision==0.2.0 scikit-learn==0.22.1 scipy==1.4.1 h5py==2.10 numpy==1.18.1

Data preparation

Standard ZSL and GZSL datasets

Download CUB, AWA, FLO and SUN features from the drive link shared below.

link: https://drive.google.com/drive/folders/16Xk1eFSWjQTtuQivTogMmvL3P6F_084u?usp=sharing

Download UCF101 and HMDB51 features from the drive link shared below.

link: https://drive.google.com/drive/folders/1pNlnL3LFSkXkJNkTHNYrQ3-Ie4vvewBy?usp=sharing

Extract them into the datasets folder.

Custom datasets

  1. Download the custom dataset images into the datasets folder.
  2. Use a pretrained ResNet-101 as the feature extractor. For example, you can have a look here.
  3. Extract features with the pretrained ResNet-101 and save them as a dictionary with the keys 'features', 'image_files' and 'labels'.
  4. Save the dictionary in .mat format using:
    import scipy.io as io
    io.savemat('temp.mat', feat)
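
Putting the steps above together, the saved dictionary might look like this. This is an illustrative sketch only: the array shapes and the 'temp.mat' filename are assumptions (features are shown as random 2048-d vectors standing in for real ResNet-101 activations), so adapt the shapes and names to your dataset.

```python
import numpy as np
import scipy.io as io

n_images = 4  # placeholder; use your dataset size

# Stand-in for real ResNet-101 pooled activations (2048-d per image):
feat = {
    'features': np.random.rand(n_images, 2048).astype(np.float32),
    'image_files': np.array(['img_%03d.jpg' % i for i in range(n_images)],
                            dtype=object),        # saved as a MATLAB cell array
    'labels': np.arange(n_images).reshape(-1, 1),  # one integer label per image
}
io.savemat('temp.mat', feat)

# Sanity check: reload and inspect the stored arrays
loaded = io.loadmat('temp.mat')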
    

Training

Zero-Shot Image Classification

  1. To train and evaluate ZSL and GZSL models on CUB, AWA, FLO and SUN, please run:
CUB : python scripts/run_cub_tfvaegan.py
AWA : python scripts/run_awa_tfvaegan.py
FLO : python scripts/run_flo_tfvaegan.py
SUN : python scripts/run_sun_tfvaegan.py

Zero-Shot Action Classification

  1. To train and evaluate ZSL and GZSL models on UCF101 and HMDB51, please run:
HMDB51 : python scripts/run_hmdb51_tfvaegan.py
UCF101 : python scripts/run_ucf101_tfvaegan.py

Results

Citation:

If you find this useful, please cite our work as follows:

@inproceedings{narayan2020latent,
	title={Latent Embedding Feedback and Discriminative Features for Zero-Shot Classification},
	author={Narayan, Sanath and Gupta, Akshita and Khan, Fahad Shahbaz and Snoek, Cees GM and Shao, Ling},
	booktitle={ECCV},
	year={2020}
}
