
DeepLabv3Plus-Pytorch

DeepLabV3 and DeepLabV3+ with MobileNetV2 and ResNet backbones for PyTorch.

Use [TorchUncertainty] to download and use MUAD easily.

Results

Model                   | mIoU    | Mean Acc | Checkpoint
DeepLabV3Plus-ResNet101 | 82.6644 | 86.8975  | [GoogleDrive] [HuggingFace]

Download and use MUAD on a headless server with TorchUncertainty

A torchvision-style dataset for the MUAD training and validation sets is available in [TorchUncertainty].
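You can also fetch the dataset programmatically. A minimal sketch, assuming the MUAD class in torch_uncertainty.datasets follows the usual torchvision convention of root / split / download arguments (check the TorchUncertainty documentation for the exact signature):

# Minimal sketch: download MUAD through TorchUncertainty on a headless server.
# Assumption: the MUAD dataset class is exposed as torch_uncertainty.datasets.MUAD
# and accepts root / split / download like a torchvision dataset.
from torch_uncertainty.datasets import MUAD

train_set = MUAD(root="/path_to_muad_dataset/", split="train", download=True)
val_set = MUAD(root="/path_to_muad_dataset/", split="val", download=True)

print(len(train_set), len(val_set))  # number of (image, target) pairs per split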

Quick Start

1. Requirements

pip install -r requirements.txt

2. Training on MUAD

CUDA_VISIBLE_DEVICES=0,1 python3 main.py \
--data_root "/path_to_muad_dataset/" \
--odgt_root "./datasets/data_odgt" \
--model "deeplabv3plus_resnet101" \
--output_stride 8 --batch_size 12 --crop_size 768 --gpu_id 0,1 --lr 0.1 --val_batch_size 2
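
The --odgt_root flag points at odgt index files listing every image/mask pair. If this repository follows the common odgt convention of one JSON object per line (an assumption; verify the field names against the files shipped in ./datasets/data_odgt), a record can be generated like this:

# Hypothetical sketch of an odgt index record, assuming the common
# one-JSON-object-per-line layout (fpath_img / fpath_segm / width / height).
# The paths and image size below are placeholders, not real MUAD files.
import json

record = {
    "fpath_img": "train/leftImg8bit/0001_leftImg8bit.png",  # hypothetical image path
    "fpath_segm": "train/leftLabel/0001_leftLabel.png",     # hypothetical mask path
    "width": 2048,
    "height": 1024,
}
with open("datasets/data_odgt/train.odgt", "a") as f:
    f.write(json.dumps(record) + "\n")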

3. Evaluate on the MUAD validation set

Results will be saved to ./results if --save_val_results is set.

python evaluate_miou.py --data_root "/path_to_muad_dataset/" \
--odgt_root ./datasets/data_odgt/ \
--ckptpath ./checkpoints/best_deeplabv3plus_resnet101_muad_os8.pth \
--dataset muad --model deeplabv3plus_resnet101 --output_stride 8
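
For reference, mIoU is the mean over classes of the per-class intersection-over-union, which can be read directly off a confusion matrix. An illustrative sketch of the computation (not necessarily the exact code in evaluate_miou.py, which may, for example, ignore some classes):

# Illustrative mIoU from a confusion matrix; evaluate_miou.py may differ in details.
import numpy as np

def mean_iou(conf: np.ndarray) -> float:
    """conf[i, j] counts pixels with ground-truth class i predicted as class j."""
    tp = np.diag(conf).astype(np.float64)  # correctly classified pixels per class
    fp = conf.sum(axis=0) - tp             # predicted as the class, but wrong
    fn = conf.sum(axis=1) - tp             # missed pixels of the class
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return float(np.nanmean(iou))          # average over classes present in the data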

4. Inference (generate outputs for the UNCV MUAD challenge)

Here is an example of generating and submitting results to the UNCV MUAD challenge on CodaLab.

python challenge_example.py --data_root "/path_to_challenge_test_leftImg8bit_folder/" \
--ckptpath ./checkpoints/best_deeplabv3plus_resnet101_muad_os8.pth \
--dataset muad --model deeplabv3plus_resnet101 --output_stride 8
cd ./submission/ && zip ../submission.zip * && cd ..

Then upload submission.zip as your entry on the challenge page.
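
If you write your own inference script instead of challenge_example.py, the zip step above expects one prediction file per test image inside ./submission/. A hedged sketch of saving per-pixel class indices as 8-bit PNGs, assuming that is the format the CodaLab scorer expects (confirm the exact file naming on the challenge page):

# Hedged sketch: write one 8-bit label-index PNG per test image into ./submission/.
# Assumption: the scorer expects PNG label maps named after the input images;
# check the challenge page for the required naming scheme.
import os
import numpy as np
from PIL import Image

def save_prediction(pred: np.ndarray, image_name: str, out_dir: str = "./submission") -> None:
    """pred: (H, W) array of class indices for one test image."""
    os.makedirs(out_dir, exist_ok=True)
    Image.fromarray(pred.astype(np.uint8)).save(os.path.join(out_dir, image_name))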

Reference

[1] MUAD: Multiple Uncertainties for Autonomous Driving, a benchmark for multiple uncertainty types and tasks

[2] Rethinking Atrous Convolution for Semantic Image Segmentation

[3] Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation