Generative Latent Implicit Conditional Optimization when Learning from Small Sample

[Paper] [Poster] [Talk]

GLICO: Generative Latent Implicit Conditional Optimization when Learning from Small Sample, accepted to ICPR 2020
Idan Azuri, Daphna Weinshall

Left: Examples of synthesized images. Each row shows five new images
(the intermediate columns), generated by smooth interpolation in the
latent space between two reconstructed images (the leftmost and rightmost columns).

Right: Comparison of top-1 accuracy (with standard error, STE) on CIFAR-10 using
WideResNet-28, with a varying number of training examples per class (labeled
data only).

If you find this repository useful in your research, please cite the following paper:

@INPROCEEDINGS{9413259,
  author    = {I. Azuri and D. Weinshall},
  booktitle = {2020 25th International Conference on Pattern Recognition (ICPR)},
  title     = {Generative Latent Implicit Conditional Optimization when Learning from Small Sample},
  year      = {2021},
  issn      = {1051-4651},
  pages     = {8584-8591},
  keywords  = {training;interpolation;generators;pattern recognition;optimization;image classification},
  doi       = {10.1109/ICPR48806.2021.9413259},
  url       = {https://doi.ieeecomputersociety.org/10.1109/ICPR48806.2021.9413259},
  publisher = {IEEE Computer Society},
  address   = {Los Alamitos, CA, USA},
  month     = {jan}
}

1. Requirements

  • Python 3.6
  • torch>=1.3.0
  • torchvision>=0.4.2
  • easyargs
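
The dependencies can be installed with pip; a minimal sketch (exact versions and CUDA builds depend on your setup):

pip install "torch>=1.3.0" "torchvision>=0.4.2" easyargs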

All the scripts below are run from the glico_model directory:

dir=path-to-repo/learning-from-small-sample/glico_model
cd $dir

2. Datasets

The following datasets are used in the paper: CIFAR-10, CIFAR-100, and CUB-200-2011.

To experiment with differently sized variants of the CUB dataset, download the modified image-list files and unzip the obtained archive into the root directory of your CUB dataset.
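
For example, assuming the archive is named cub_image_lists.zip (a placeholder; use the actual file name) and CUB is unpacked at /path/to/CUB_200_2011:

unzip cub_image_lists.zip -d /path/to/CUB_200_2011/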

3. Multiple shots on CUB

UNLABELED=10
SEED=0

for SHOTS in 5 10 20 30; do
  echo "glico CUB samples per class: $SHOTS"

  # train
  s="train_glico.py --rn my_test --d conv --pixel --z_init rndm --resume --tr --data cub --dim 512 --epoch 202 --fewshot --shot ${SHOTS} --seed ${SEED}"
  echo $s
  python3 $s

  sleep 15

  # eval
  s="evaluation.py -d resnet50 --pretrained --keyword cub_my_test_10unsuprvised_pixel_classifier_conv_tr_fs_${SHOTS} --is_inter --augment --epoch 200 --data cub --fewshot --shot ${SHOTS} --dim 512 --seed ${SEED}"
  echo $s
  python3 $s
done
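
Note that the --keyword string passed to evaluation.py must match the run name that train_glico.py builds from its arguments (dataset, --rn, architecture, and few-shot flags); the pattern above is inferred from these commands, so if evaluation cannot find a checkpoint, compare the keyword against the directory name the training run actually created. A minimal sketch for a single configuration:

SHOTS=5
SEED=0
KEYWORD="cub_my_test_10unsuprvised_pixel_classifier_conv_tr_fs_${SHOTS}"
python3 evaluation.py -d resnet50 --pretrained --keyword ${KEYWORD} --is_inter --augment --epoch 200 --data cub --fewshot --shot ${SHOTS} --dim 512 --seed ${SEED}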

4. Multiple shots on CIFAR-100

UNLABELED=10
SEED=0

for SHOTS in 10 25 50 100; do
  echo "glico CIFAR-100 samples per class: $SHOTS"

  # train
  s="train_glico.py --rn my_test_${UNLABELED}unsuprvised --fewshot --shot $SHOTS --d conv --pixel --z_init rndm --resume --unlabeled_shot ${UNLABELED} --epoch 202 --noise_proj --tr --seed ${SEED} --dim 512"
  echo $s
  python3 $s

  sleep 15

  # eval
  s="evaluation.py -d wideresnet --keyword cifar-100_my_test_${UNLABELED}unsuprvised_pixel_classifier_conv_tr_fs_${SHOTS}_ce_noise_proj --is_inter --augment --epoch 200 --data cifar --pretrained --fewshot --shot $SHOTS --unlabeled_shot ${UNLABELED} --loss_method ce --seed ${SEED} --dim 512"
  echo $s
  python3 $s
done
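
When sweeping several shot counts, it can help to keep a separate log per run; a plain-shell sketch (the logs/ directory and file naming are this example's convention, not the repository's):

mkdir -p logs
python3 $s 2>&1 | tee "logs/cifar100_shot${SHOTS}_seed${SEED}.log"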

5. Baseline for CIFAR-100

  • Try the different augmentation flags:
    --random_erase
    --cutout
    --autoaugment
    or none of the above for the 'clean' baseline (a Cutout/ResNet-50 variant is sketched after the example below)
  • Choose the classifier architecture from the following:
    --d wideresnet
    --d resnet50
    --d resnet (ResNet-110)
    --d vgg (VGG-19)
SHOTS=50
UNLABEL=10
SEED=0
echo "Baseline CIFAR clean shot: $SHOTS"
s="baseline_classification.py --epoch 200 -d wideresnet --augment --data cifar --fewshot --shot $SHOTS --unlabeled_shot ${UNLABEL} --seed ${SEED}"
echo $s
python3 $s
