
DC-Net

This is the official repo for our paper: "DC-Net: Divide-and-Conquer for Salient Object Detection".

Authors: Jiayi Zhu, Xuebin Qin and Abdulmotaleb Elsaddik

Contact: zjyzhujiayi55@gmail.com

Usage

  1. Clone this repo:

git clone https://github.com/PiggyJerry/DC-Net.git

  2. Download the pre-trained model and put it in the directory DC-Net/saved_models:

| name | pretrain | backbone | resolution | model size | FPS | download |
| --- | --- | --- | --- | --- | --- | --- |
| DC-Net-R | DUTS-TR | ResNet-34 | 352×352 | 356.3MB | 60 | GoogleDrive/Baidu Pan |
| DC-Net-S | DUTS-TR | Swin-B | 384×384 | 1495.0MB | 29 | GoogleDrive/Baidu Pan |
| DC-Net-R-HR | DIS5K | ResNet-34 | 1024×1024 | 356.3MB | 55 | GoogleDrive/Baidu Pan |

  3. Download the checkpoint from GoogleDrive/Baidu Pan and put it in the directory DC-Net/checkpoint.

  4. Unzip apex.zip into the directory DC-Net.

  5. Train the model.

First, download the datasets to the directory DC-Net/datasets. Then cd into DC-Net and start training with python main-DC-Net.py. To train DC-Net-S instead, change line 362 of main-DC-Net.py to hypar['type']='S'.

  6. Run inference.

First, put your test images in the directory DC-Net/testImgs. Then cd into DC-Net and run python Inference.py. To run inference with DC-Net-S, change line 17 of Inference.py to type='S'.
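The two model-type switches mentioned above can be summarized in a short sketch. The names hypar['type'] and type come from the README's own instructions; treat the exact line numbers as repo-specific:

```python
# Sketch of the model-type switches described above (names per the README;
# 'R' = ResNet-34 backbone, 'S' = Swin-B backbone).

# In main-DC-Net.py (around line 362), the training variant is selected via:
hypar = {}
hypar['type'] = 'R'    # default: train DC-Net-R
# hypar['type'] = 'S'  # uncomment to train DC-Net-S instead

# In Inference.py (around line 17), the inference variant is selected via
# a module-level variable (note: it shadows the Python builtin `type`):
type = 'R'             # change to 'S' to run inference with DC-Net-S
```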

Predicted saliency maps

For DC-Net-R and DC-Net-S, we provide predicted saliency maps for the low-resolution datasets DUTS-TE, DUT-OMRON, HKU-IS, ECSSD, and PASCAL-S.

For DC-Net-R-HR, we also provide predicted saliency maps for the high-resolution datasets DIS-TE, ThinObject5K, UHRSD, HRSOD, and DAVIS-S.

| name | predicted saliency maps |
| --- | --- |
| DC-Net-R | GoogleDrive/Baidu Pan |
| DC-Net-S | GoogleDrive/Baidu Pan |
| DC-Net-R-HR | GoogleDrive/Baidu Pan |

How to modify the edge width of the edge map?

You only need to modify line 330 of data_loader_cache.py, where the last argument of cv2.drawContours, $thickness$, sets the bilateral edge width in pixels. After the processing on line 332, the bilateral edge becomes the inner unilateral edge we want, with $edge\ width$ = ($thickness$ + 1) / 2.
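As a minimal sketch of that relation (pure arithmetic, no OpenCV needed; the helper name is ours, not from the repo):

```python
def edge_width_from_thickness(thickness: int) -> int:
    """Map the `thickness` argument of cv2.drawContours (bilateral edge
    width, in pixels) to the unilateral edge width that remains after
    the thinning step on line 332 of data_loader_cache.py:

        edge_width = (thickness + 1) / 2

    For odd `thickness`, the result is an exact integer.
    """
    return (thickness + 1) // 2

# e.g. thickness=3 yields a 2-pixel unilateral edge,
#      thickness=5 yields a 3-pixel unilateral edge.
```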

How to use Parallel-ResNet and Parallel-Swin-Transformer?

Use them the same way as the original ResNet and Swin Transformer; you only need to set the new hyperparameter parallel to the number of encoders you want.
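As an illustrative sketch of the idea only — the Encoder class and fusion here are hypothetical stand-ins, not the repo's actual Parallel-ResNet or Parallel-Swin-Transformer code:

```python
# Hypothetical sketch: a "parallel" backbone holds `parallel` structurally
# identical encoder branches with independent weights; the decoder later
# fuses their features. This mirrors the role of the `parallel`
# hyperparameter described above, not the repo's implementation.

class Encoder:
    """Stand-in for one ResNet-34 / Swin-B encoder branch."""
    def __init__(self, idx: int):
        self.idx = idx

    def forward(self, x):
        # Placeholder "feature extraction": tag the input with the branch id.
        return (self.idx, x)

class ParallelBackbone:
    def __init__(self, parallel: int = 2):
        # One independent encoder per branch, as selected by `parallel`.
        self.branches = [Encoder(i) for i in range(parallel)]

    def forward(self, x):
        # Every branch sees the same input; downstream layers fuse the list.
        return [enc.forward(x) for enc in self.branches]

backbone = ParallelBackbone(parallel=2)
feats = backbone.forward("img")
```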

Citation

@article{zhu2023dc,
  title={DC-Net: Divide-and-Conquer for Salient Object Detection},
  author={Zhu, Jiayi and Qin, Xuebin and Elsaddik, Abdulmotaleb},
  journal={arXiv preprint arXiv:2305.14955},
  year={2023}
}
