
Transfer Learning Lab with VGG, Inception and ResNet

I used Keras to explore feature extraction with the VGG, Inception, and ResNet architectures. The models I used were trained for days or weeks on the ImageNet dataset, so their weights encapsulate higher-level features learned from training on ImageNet's 1,000 object classes.
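As a rough illustration (not code from this repo), Keras exposes these pretrained architectures directly; a minimal sketch of loading VGG16 with frozen ImageNet weights and extracting features for a single image might look like the following, where `example.jpg` is a hypothetical input file:

```python
# Minimal sketch: load an ImageNet-pretrained VGG16 as a frozen feature extractor.
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

# include_top=False drops the 1000-way ImageNet classifier head,
# leaving only the convolutional feature extractor.
base_model = VGG16(weights='imagenet', include_top=False)
for layer in base_model.layers:
    layer.trainable = False  # freeze the pretrained weights

# 'example.jpg' is a placeholder path for illustration.
img = image.load_img('example.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

features = base_model.predict(x)  # the "bottleneck" output for this image
print(features.shape)
```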

I used two datasets in this lab:

  1. German Traffic Sign Dataset
  2. Cifar10

Unless you have a powerful GPU, running feature extraction on these models takes a significant amount of time. To make things manageable, we precomputed bottleneck features for each (network, dataset) pair, so you can experiment with feature extraction even on a modest CPU. You can think of bottleneck features as feature extraction with caching: because the base network's weights are frozen during feature extraction, the output for a given image is always the same, so once an image has been passed through the network its output can be cached and reused.
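The caching idea itself is simple. The sketch below is illustrative only (it is not the repo's preprocessing script, and the function and file names are made up): run the frozen base network over the dataset once, then pickle the outputs so later experiments skip the expensive forward pass.

```python
# Sketch of precomputing and caching bottleneck features (illustrative names).
import pickle
from keras.applications.vgg16 import VGG16, preprocess_input

base_model = VGG16(weights='imagenet', include_top=False)

def cache_bottleneck_features(images, labels, out_file, batch_size=32):
    """Run the frozen base network once and save its outputs to disk."""
    features = base_model.predict(preprocess_input(images.astype('float32')),
                                  batch_size=batch_size)
    with open(out_file, 'wb') as f:
        pickle.dump({'features': features, 'labels': labels}, f)

# e.g. cache_bottleneck_features(X_train, y_train,
#                                'vgg_traffic_bottleneck_features_train.p')
```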

The files are encoded as such:

  • {network}_{dataset}_bottleneck_features_train.p
  • {network}_{dataset}_bottleneck_features_validation.p

network can be one of 'vgg', 'inception', or 'resnet'

dataset can be one of 'cifar10' or 'traffic'

Sample Command:

python feature_extraction.py --training_file bottlenecks/vgg_traffic_100_bottleneck_features_train.p --validation_file bottlenecks/vgg_traffic_bottleneck_features_validation.p
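For a sense of what a script like this does with the pickled files, here is a hedged sketch: load the cached bottleneck features and train only a small classifier head on top of them. Everything below is an assumption for illustration, including the pickle keys ('features', 'labels') and the integer-label format; it is not the repo's actual feature_extraction.py.

```python
# Sketch: train a small classifier on cached bottleneck features (assumed format).
import pickle
import numpy as np
from keras.models import Sequential
from keras.layers import Flatten, Dense

def load_bottleneck_data(training_file, validation_file):
    # Assumes each pickle holds a dict with 'features' and 'labels' keys.
    with open(training_file, 'rb') as f:
        train = pickle.load(f)
    with open(validation_file, 'rb') as f:
        valid = pickle.load(f)
    return train['features'], train['labels'], valid['features'], valid['labels']

X_train, y_train, X_val, y_val = load_bottleneck_data(
    'bottlenecks/vgg_traffic_100_bottleneck_features_train.p',
    'bottlenecks/vgg_traffic_bottleneck_features_validation.p')

n_classes = len(np.unique(y_train))  # assumes integer class labels

# Only this small head is trained; the expensive convolutional work
# is already baked into the cached bottleneck features.
model = Sequential([
    Flatten(input_shape=X_train.shape[1:]),
    Dense(n_classes, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, batch_size=256,
          validation_data=(X_val, y_val))
```

Because the convolutional base never changes, this head-only training runs in seconds per epoch even on a CPU.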
