Implementation of "Hierarchical Novelty Detection for Traffic Sign Recognition".
If you find our work useful in your research, please cite:
@article{ruiz2022hierarchical,
  title={Hierarchical Novelty Detection for Traffic Sign Recognition},
  author={Ruiz, Idoia and Serrat, Joan},
  journal={Sensors},
  volume={22},
  number={12},
  article-number={4389},
  year={2022},
  url={https://www.mdpi.com/1424-8220/22/12/4389},
  issn={1424-8220},
  doi={10.3390/s22124389}
}
Dependencies:

matplotlib==3.3.2
numpy==1.19.1
Pillow==9.2.0
scikit_learn==1.1.1
torch==1.6.0+cu101
torchvision==0.7.0+cu101

To install these dependencies using pip:

pip install -r requirements.txt
When training HCL from pre-computed features, features are expected to be found under the path:
features/{dataset}/resnet101_{split}.h5
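As a sketch of that layout, the snippet below creates a tiny stand-in feature file at the expected path and reads it back. The HDF5 key name (`features`) and the feature dimension are assumptions for illustration; inspect the downloaded files to see their actual structure. Note that `h5py` is not listed in requirements.txt.

```python
# Hypothetical sketch of the expected feature-file layout. The key name
# "features" inside the HDF5 file is an assumption; check the real files.
import os
import tempfile

import h5py  # install with `pip install h5py` if needed
import numpy as np

dataset, split = "TT100K", "train"  # example values
root = tempfile.mkdtemp()
path = os.path.join(root, f"features/{dataset}/resnet101_{split}.h5")
os.makedirs(os.path.dirname(path), exist_ok=True)

# Write a tiny stand-in file: 4 feature vectors of dimension 2048
# (the ResNet-101 global-pool size).
with h5py.File(path, "w") as f:
    f.create_dataset("features", data=np.zeros((4, 2048), dtype=np.float32))

with h5py.File(path, "r") as f:
    feats = f["features"][:]
print(feats.shape)  # (4, 2048)
```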
Download precomputed features:
- TT100K: finetuned features
- MTSD: finetuned features
- AWA2: ImageNet pre-trained features
- CUB: ImageNet pre-trained features
The splits information is expected to be under taxonomy/{dataset}
in the following files:

- taxonomy.txt: Taxonomy information. Parent-children relationships are indicated by indentation.
- novel.txt: List of novel leaf classes.
- known.txt: List of known leaf classes.
- splits_data/filenames_{split}.npy: NumPy file that contains the dict of paths to samples for the split.
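To illustrate the indentation-based format of taxonomy.txt, here is a minimal parsing sketch. The class names and the 4-space indent width are assumptions for the example; the real files may use different names and indentation.

```python
# Hypothetical taxonomy snippet; real taxonomy/{dataset}/taxonomy.txt differs.
taxonomy_text = """root
    traffic_sign
        speed_limit
        stop
    background
"""

def parse_taxonomy(text, indent=4):
    """Return a {child: parent} dict from indentation levels."""
    parents = {}  # child -> parent
    stack = []    # nodes along the current path, indexed by depth
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip(" "))) // indent
        name = line.strip()
        stack = stack[:depth]       # drop deeper levels from the path
        if stack:
            parents[name] = stack[-1]
        stack.append(name)
    return parents

print(parse_taxonomy(taxonomy_text))
```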
Download splits files for TT100K, MTSD, AWA2 or CUB.
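Since the splits are stored as a dict inside a .npy file, loading them needs `allow_pickle=True` and `.item()`. The dict layout below (split name mapping to a list of paths) is an assumption for illustration; check the downloaded files for the actual keys.

```python
# Sketch of reading a splits file; the dict contents here are placeholders.
import os
import tempfile

import numpy as np

path = os.path.join(tempfile.mkdtemp(), "filenames_train.npy")

# Create a stand-in file with the assumed structure.
np.save(path, {"train": ["img/0001.jpg", "img/0002.jpg"]})

# Dicts are stored as 0-d object arrays, so allow_pickle + .item() are needed.
filenames = np.load(path, allow_pickle=True).item()
print(filenames["train"])
```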
Hyperparameters are included in a configuration file following the format of config_example.ini. Configuration files to reproduce the results reported in the paper are provided below.
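For orientation, a configuration file in this format might look like the sketch below. Every section and key name here is a placeholder; config_example.ini in the repository defines the actual options.

```ini
; Hypothetical illustration only: all section and key names below are
; placeholders. See config_example.ini for the real option names.
[DATA]
dataset = TT100K
features_path = features/TT100K/

[TRAINING]
batch_size = 256
learning_rate = 0.001
epochs = 100
```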
Below you can find the links for the model weights and configuration files that reproduce the following results:
| Dataset | AUC | Novel acc. @50% | Novel acc. @70% | Novel acc. @80% | Novel d_h @50% | Novel d_h @70% | Novel d_h @80% |
|---|---|---|---|---|---|---|---|
| TT100K | 84.4 | 87.6 | 84.4 | 81.4 | 0.14 | 0.18 | 0.21 |
| MTSD | 45.7 | 50.0 | 43.2 | 37.8 | 0.74 | 0.85 | 0.96 |
| AWA2 | 33.6 | 37.5 | 33.5 | 29.4 | 1.83 | 1.99 | 2.14 |
| CUB | 27.5 | 36.0 | 13.3 | - | 1.35 | 1.89 | - |
- TT100K: Download HCL weights and configuration files
- MTSD: Download HCL weights and configuration files
- AWA2: Download HCL weights and configuration files
- CUB: Download HCL weights and configuration files
To run the training script from pre-computed features:
python -m torch.distributed.launch --nproc_per_node=<num of GPUs> launch_train_from_precomputed_feat.py --exp_dir <path/to/exp/dir/> --config <path/to/config_file.ini>
Example:
CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 launch_train_from_precomputed_feat.py --exp_dir ./output/train_experiment1/ --config ./output/train_experiment1/config.ini
To resume training from a checkpoint:
CUDA_VISIBLE_DEVICES=6 python -m torch.distributed.launch --nproc_per_node=1 launch_train_from_precomputed_feat.py --exp_dir ./output/train_experiment1/ --config ./output/train_experiment1/config.ini --checkpoint ./output/train_experiment1/checkpoint_loss.pth.tar
To run the full-model training script:

python -m torch.distributed.launch --nproc_per_node=<num of GPUs> launch_train_full_model.py --exp_dir <path/to/exp/dir/> --config <path/to/config_file.ini>
Example with 4 GPUs (devices 1,2,3,4):
CUDA_VISIBLE_DEVICES=1,2,3,4 python -m torch.distributed.launch --nproc_per_node=4 launch_train_full_model.py --exp_dir ./output/train_experiment2/ --config ./output/train_experiment2/config.ini
To resume training from a checkpoint:
CUDA_VISIBLE_DEVICES=1,2,3,4 python -m torch.distributed.launch --nproc_per_node=4 launch_train_full_model.py --exp_dir ./output/train_experiment2/ --config ./output/train_experiment2/config.ini --checkpoint_dir ./output/train_experiment2/
To run only evaluation for a saved checkpoint:
CUDA_VISIBLE_DEVICES=6 python -m torch.distributed.launch --nproc_per_node=1 launch_train_from_precomputed_feat.py --exp_dir ./output/train_experiment1/ --config ./output/train_experiment1/config.ini --checkpoint ./output/train_experiment1/checkpoint_loss.pth.tar --only_eval
CUDA_VISIBLE_DEVICES=6 python -m torch.distributed.launch --nproc_per_node=1 launch_train_full_model.py --exp_dir ./output/train_experiment1/ --config ./output/train_experiment1/config.ini --checkpoint_dir ./output/train_experiment1 --only_eval
View at full size and navigate through the hierarchy here