Crack Segmentation for Low-Resolution Images using Joint Learning with Super-Resolution (CSSR)

CSSR was accepted to MVA 2021 (International Conference on Machine Vision Applications) as an oral presentation and received the Best Practical Paper Award.

🚀 CSBSR [IEEE TIM'23], an advanced version of CSSR, has been released! Click here for details! 🚀

[Figure: Our framework]

News

  • May 1, 2024 -> CSBSR [Y. Kondo and N. Ukita, IEEE TIM'23], an advanced version of CSSR, has been released! Click here for details!
  • July 27, 2021 -> We received the Best Practical Paper Award 🏆 at MVA 2021!

What's this?

We propose a method for high-resolution crack segmentation on low-resolution images. It enables automatic crack detection even when the crack region appears at low resolution because the surface must be photographed from a distance (e.g., a drone inspecting a high-altitude chimney wall must keep its distance to fly safely). The proposed method consists of the following two approaches.

  1. Deep-learning-based super-resolution to increase the resolution of low-resolution images. The super-resolved image enables fine-grained crack segmentation. In addition, we propose CSSR (Crack Segmentation with Super-Resolution), which uses end-to-end joint learning to optimize the super-resolution process for crack segmentation (a minimal sketch of this joint pipeline follows this list).

  2. To optimize the deep learning model for segmentation, we propose a loss function, Boundary Combo loss, that simultaneously optimizes the global and local structures of cracks. This loss enables both the detection of thin, hard-to-detect cracks and the delineation of fine crack boundaries (a sketch of this loss appears at the end of this section).
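
A minimal, hypothetical PyTorch sketch of the joint pipeline in item 1: a lightweight super-resolution network chained to a segmentation network and trained end to end, so that the segmentation loss also back-propagates into the super-resolution module. The module definitions, shapes, and the simple L1 + BCE training objective below are illustrative assumptions, not the repository's actual architecture (see train.py and the config files for that).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinySRNet(nn.Module):
        # Placeholder x4 super-resolution network (PixelShuffle upsampling).
        def __init__(self, scale=4, ch=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, 3 * scale * scale, 3, padding=1),
                nn.PixelShuffle(scale),
            )

        def forward(self, lr):
            return self.body(lr)

    class TinySegNet(nn.Module):
        # Placeholder segmentation head producing a 1-channel crack logit map.
        def __init__(self, ch=64):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, 1, 3, padding=1),
            )

        def forward(self, sr):
            return self.body(sr)

    class JointModel(nn.Module):
        # Chains SR and segmentation so both are optimized jointly.
        def __init__(self):
            super().__init__()
            self.sr = TinySRNet()
            self.seg = TinySegNet()

        def forward(self, lr):
            sr = self.sr(lr)       # low-res image -> super-resolved image
            logits = self.seg(sr)  # super-resolved image -> crack logits
            return sr, logits

    # One joint training step: an SR reconstruction term plus a segmentation
    # term, so segmentation gradients also shape the SR network.
    model = JointModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    lr_img = torch.rand(2, 3, 64, 64)                   # low-resolution input
    hr_img = torch.rand(2, 3, 256, 256)                 # high-resolution target
    mask = (torch.rand(2, 1, 256, 256) > 0.95).float()  # toy crack mask

    sr, logits = model(lr_img)
    loss = F.l1_loss(sr, hr_img) + F.binary_cross_entropy_with_logits(logits, mask)
    loss.backward()
    opt.step()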

Experimental results show that the proposed method outperforms conventional methods and that, both quantitatively*1 and qualitatively, its segmentation is comparable in precision to segmentation of high-resolution inputs.

*1: In terms of IoU, the proposed method achieves 97.3% of the IoU obtained with high-resolution image input.
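
The Boundary Combo loss described in item 2 could be sketched roughly as below: a Combo-style region term (weighted BCE + Dice, capturing the global crack structure) combined with a boundary-loss-style term computed against a signed distance map of the ground-truth boundary (capturing fine local structure). The weighting scheme, the hyper-parameters alpha and beta, and the distance-map handling are illustrative assumptions, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def dice_loss(probs, target, eps=1e-6):
        # Region-overlap term over the whole mask (global structure).
        inter = (probs * target).sum(dim=(1, 2, 3))
        union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

    def boundary_combo_loss(logits, target, dist_map, alpha=0.5, beta=0.5):
        # alpha balances BCE vs. Dice inside the Combo-style region term;
        # beta balances the region term against the boundary term.
        probs = torch.sigmoid(logits)
        region = alpha * F.binary_cross_entropy_with_logits(logits, target) \
                 + (1.0 - alpha) * dice_loss(probs, target)
        # Boundary-loss-style term: predicted probabilities weighted by a
        # signed distance map of the ground-truth mask, so mass far from the
        # true crack boundary is penalized more strongly.
        boundary = (probs * dist_map).mean()
        return beta * region + (1.0 - beta) * boundary

    # Toy usage; in practice dist_map would be precomputed from the
    # ground-truth mask, e.g. with scipy.ndimage.distance_transform_edt.
    logits = torch.randn(2, 1, 256, 256)
    target = (torch.rand(2, 1, 256, 256) > 0.95).float()
    dist_map = torch.randn(2, 1, 256, 256)
    print(boundary_combo_loss(logits, target, dist_map).item())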

Dependencies

  • Python >= 3.6
  • PyTorch >= 1.8
  • numpy >= 1.19

Usage

  1. Clone the repository:

    git clone https://github.com/Yuki-11/CSSR.git
  2. Download khanhha dataset:

    cd $CSSR_ROOT
    mkdir datasets
    cd datasets
    # Fetch the download-warning cookie that Google Drive sets for large files
    curl -sc /tmp/cookie "https://drive.google.com/uc?export=download&id=1xrOqv0-3uMHjZyEUrerOYiYXW_E8SUMP" > /dev/null
    # Extract the confirmation code from the cookie
    CODE="$(awk '/_warning_/ {print $NF}' /tmp/cookie)"
    # Download the dataset archive using the confirmation code
    curl -Lb /tmp/cookie "https://drive.google.com/uc?export=download&confirm=${CODE}&id=1xrOqv0-3uMHjZyEUrerOYiYXW_E8SUMP" -o temp_dataset.zip
    unzip temp_dataset.zip
    rm temp_dataset.zip
  3. Download trained models:

    cd $CSSR_ROOT
    mkdir output

    You can download the trained models here. Then place the unzipped directory of each model you want to use under $CSSR_ROOT/output/.

  4. Install packages:

    cd $CSSR_ROOT
    pip install -r requirement.txt
  5. Training:

    cd $CSSR_ROOT
    python train.py --config_file <CONFIG FILE>

    If you want to resume training, use the following command:

    cd $CSSR_ROOT
    python train.py --config_file output/<OUTPUT DIRECTORY (OUTPUT_DIR at config.yaml)>/config.yaml --resume_iter <Saved iteration number>
  6. Test:

    cd $CSSR_ROOT
    python test.py output/<OUTPUT DIRECTORY (OUTPUT_DIR at config.yaml)> <iteration number> 

Citations

If you find this work useful, please consider citing it.

@inproceedings{CSSR2021,
  title={Crack Segmentation for Low-Resolution Images using Joint Learning with Super-Resolution},
  author={Kondo, Yuki and Ukita, Norimichi},
  booktitle={International Conference on Machine Vision Applications (MVA)},
  year={2021}
}
