
DeepHash-PyTorch

A PyTorch implementation of the DeepHash framework.

Prerequisites

Linux or macOS

NVIDIA GPU + CUDA (cuDNN optional) and the corresponding PyTorch build (version 1.0.0)

Python 3.6

Datasets

We use the ImageNet, NUS-WIDE, and COCO datasets in our experiments. You can download the ImageNet and NUS-WIDE datasets here. As for COCO, we use COCO 2014, which can be downloaded here. In case the COCO release changes in the future, we also provide a download link here on Google Drive. After downloading, move imagenet.tar.gz to ./data/imagenet and extract it there.

mv imagenet.tar.gz ./data/imagenet
cd ./data/imagenet
tar -zxvf imagenet.tar.gz

Also, for NUS-WIDE, move nus_wide.tar.gz to ./data/nus_wide and extract it there.

mv nus_wide.tar.gz ./data/nus_wide
cd ./data/nus_wide
tar -zxvf nus_wide.tar.gz

For the COCO dataset, extract both the train and val archives into ./data/coco. If you download them from the COCO download page:

mv train2014.zip ./data/coco
mv val2014.zip ./data/coco
cd ./data/coco
unzip train2014.zip
unzip val2014.zip

If you use our shared link:

mv coco.tar.gz ./data/coco
cd ./data/coco
tar -zxvf coco.tar.gz
unzip train2014.zip
unzip val2014.zip
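
After extraction, the layout under ./data should look roughly like this (directory names follow the commands above; the trailing notes are informal annotations):

./data/imagenet          extracted from imagenet.tar.gz
./data/nus_wide          extracted from nus_wide.tar.gz
./data/coco/train2014    COCO 2014 training images
./data/coco/val2014      COCO 2014 validation images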

You can also modify the list files (txt format) in ./data as you like. Each line in a list file has the following format:

<image path><space><one hot label representation>
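
For example, an illustrative line for a hypothetical image in a five-class setting (the path and label vector below are made up for illustration; for multi-label datasets such as NUS-WIDE and COCO, more than one label position may be 1):

images/example_001.jpg 0 1 0 0 0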

Training

First, you can manually download the PyTorch pre-trained models provided by the torchvision library, or, if you are connected to the Internet, they will be downloaded automatically (a pre-download sketch is shown after the command below). Then you can train the model for each dataset using the following command.

cd src
python train.py --gpu_id 0 --dataset coco --prefix resnet50_hashnet --bit 48 --net ResNet50 --lr 0.0003 --class_num 1.0
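
If you want to fetch the pre-trained backbone ahead of time (for example, on a machine that will later run offline), here is a minimal sketch using torchvision's model zoo; it simply downloads and caches the weights and is not part of train.py:

import torchvision.models as models

# Download the ImageNet pre-trained weights and cache them locally
# (the cache location depends on the torchvision version).
models.resnet50(pretrained=True)
models.alexnet(pretrained=True)  # only needed if you train with --net AlexNet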

You can set the command parameters to switch between different experiments.

  • "gpu_id" is the GPU ID used to run experiments.
  • "bit" is the number of bits of the hash codes.
  • "dataset" selects the dataset. In our experiments, it can be "imagenet", "nus_wide" or "coco".
  • "prefix" is the path under the "snapshot" directory where the model snapshots and log file are written.
  • "net" sets the base network. For details, see network.py.
    • For AlexNet, "net" is AlexNet.
    • For VGG, "net" is a name like VGG16; the full list of names is in network.py.
    • For ResNet, "net" is a name like ResNet50; the full list of names is in network.py.
  • "lr" is the learning rate.
  • "class_num" is the balance weight between positive and negative pairs.

Evaluation

You can evaluate the Mean Average Precision (MAP) on each dataset using the following command.

cd src
python test.py --gpu_id 0 --dataset coco --prefix resnet50_hashnet --bit 48 --snapshot iter_10000

You can set the command parameters to switch between different experiments.

  • "gpu_id" is the GPU ID used to run experiments.
  • "bit" is the number of bits of the hash codes.
  • "dataset" selects the dataset. In our experiments, it can be "imagenet", "nus_wide" or "coco".
  • "prefix" is the path under the "snapshot" directory where the model snapshots and log file are written.
  • "snapshot" is the snapshot model name; for example, "iter_09000" means the model snapshot saved at iteration 9000.
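
For reference, here is a minimal sketch of how MAP over Hamming-ranked retrieval is commonly computed for deep hashing; it is a generic illustration and not necessarily identical to the logic in test.py:

import numpy as np

def mean_average_precision(query_codes, db_codes, query_labels, db_labels, topk=None):
    # query_codes, db_codes: binary hash codes in {-1, +1}, shapes (nq, bit) and (nd, bit)
    # query_labels, db_labels: multi-hot label matrices, shapes (nq, c) and (nd, c)
    num_query, bit = query_codes.shape
    average_precisions = []
    for i in range(num_query):
        # A database item is relevant if it shares at least one label with the query.
        relevant = (query_labels[i] @ db_labels.T > 0).astype(np.float32)
        # Hamming distance between the query code and every database code.
        hamming = 0.5 * (bit - query_codes[i] @ db_codes.T)
        relevant = relevant[np.argsort(hamming)]
        if topk is not None:
            relevant = relevant[:topk]
        num_relevant = relevant.sum()
        if num_relevant == 0:
            continue  # queries with no relevant items are skipped
        ranks = np.arange(1, relevant.shape[0] + 1)
        precision_at_rank = np.cumsum(relevant) / ranks
        average_precisions.append((precision_at_rank * relevant).sum() / num_relevant)
    return float(np.mean(average_precisions))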
