
Explainable machine learning for precise fatigue crack tip detection


This repository contains the code used to generate the results of the research article

D. Melching, T. Strohmann, G. Requena, E. Breitbarth (2022).
Explainable machine learning for precise fatigue crack tip detection.
Scientific Reports.
DOI: 10.1038/s41598-022-13275-1

The article is open access and can be reached via the DOI above.

Abstract

Data-driven models based on deep learning have led to tremendous breakthroughs in classical computer vision tasks and have recently made their way into natural sciences. However, the absence of domain knowledge in their inherent design significantly hinders the understanding and acceptance of these models. Nevertheless, explainability is crucial to justify the use of deep learning tools in safety-relevant applications such as aircraft component design, service and inspection. In this work, we train convolutional neural networks for crack tip detection in fatigue crack growth experiments using full-field displacement data obtained by digital image correlation. For this, we introduce the novel architecture ParallelNets – a network which combines segmentation and regression of the crack tip coordinates – and compare it with a classical U-Net-based architecture. Aiming for explainability, we use the Grad-CAM interpretability method to visualize the neural attention of several models. Attention heatmaps show that ParallelNets is able to focus on physically relevant areas like the crack tip field, which explains its superior performance in terms of accuracy, robustness, and stability.

Dependencies

All required third-party packages and their versions are listed in requirements.txt. Install them with

pip install -r requirements.txt

Usage

The code can be used to produce attention heatmaps of trained neural networks following these instructions.

1) Data

To run the scripts, the nodal displacement data of the fatigue crack propagation experiments S950,1.6 and S160,2.0 as well as the nodemap and ground-truth data of S160,4.7 are needed. The data is available on Zenodo under the DOI 10.5281/zenodo.5740216.

Download the data and place it in a folder named data.

2) Preparation

Create training and validation data by interpolating the raw nodal displacement data onto arrays of size 2x256x256, where the first channel contains the x-displacements and the second the y-displacements.

make_data.py
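
Conceptually, this step maps scattered DIC node data onto a regular grid. Below is a minimal sketch of such an interpolation, assuming the node coordinates and displacements are available as flat NumPy arrays; the function name and array layout are illustrative, not the repository's actual API.

import numpy as np
from scipy.interpolate import griddata

def interpolate_to_grid(x, y, u, v, size=256):
    # Build a regular size x size grid spanning the measured region.
    xi = np.linspace(x.min(), x.max(), size)
    yi = np.linspace(y.min(), y.max(), size)
    grid_x, grid_y = np.meshgrid(xi, yi)
    points = np.stack([x, y], axis=1)
    # Linearly interpolate the scattered nodal displacements onto the grid;
    # points outside the convex hull of the nodes are filled with zeros here.
    u_grid = griddata(points, u, (grid_x, grid_y), method="linear", fill_value=0.0)
    v_grid = griddata(points, v, (grid_x, grid_y), method="linear", fill_value=0.0)
    # Channel 0: x-displacement, channel 1: y-displacement -> shape (2, size, size)
    return np.stack([u_grid, v_grid])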

3) Training, validation, and tests

To train a model with the ParallelNets architecture, run

ParallelNets_train.py

To evaluate the performance of a trained model, run

ParallelNets_test.py
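
As described in the article, ParallelNets couples a U-Net segmentation path with a parallel regression head that outputs the crack tip coordinates. The following PyTorch sketch illustrates that coupling only; the module names, layer sizes, and the assumption that the U-Net returns its bottleneck features are ours, not the repository's implementation.

import torch
import torch.nn as nn

class ParallelNetsSketch(nn.Module):
    """Illustrative combination of segmentation and coordinate regression."""

    def __init__(self, unet: nn.Module, bottleneck_channels: int = 512):
        super().__init__()
        # Assumed interface: unet(x) returns (segmentation map, bottleneck features).
        self.unet = unet
        self.regressor = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(bottleneck_channels, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # predicted crack tip (x, y)
        )

    def forward(self, displacements: torch.Tensor):
        segmentation, bottleneck = self.unet(displacements)
        tip_xy = self.regressor(bottleneck)
        return segmentation, tip_xy

Training such a model would combine a segmentation loss with a regression loss on the predicted tip coordinates.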

4) Explainability and visualization

You can plot the segmentation and crack tip predictions using

ParallelNets_plot.py

(Figure: example prediction plot)

and visualize network and layer-wise attention by running

ParallelNets_visualize.py

(Figures: network attention plot and layer-wise attention plot)

The explainability method uses a variant of the Grad-CAM algorithm [1].
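
For reference, below is a minimal, generic Grad-CAM sketch in PyTorch. The repository implements its own variant; the hook placement, choice of target layer, and score function here are assumptions for illustration.

import torch

def grad_cam(model, x, target_layer, score_fn=lambda out: out.sum()):
    # Capture activations of the target layer and gradients w.r.t. them.
    activations, gradients = [], []
    fwd = target_layer.register_forward_hook(lambda m, i, o: activations.append(o))
    bwd = target_layer.register_full_backward_hook(lambda m, gi, go: gradients.append(go[0]))

    score = score_fn(model(x))  # reduce the model output to a scalar
    model.zero_grad()
    score.backward()
    fwd.remove()
    bwd.remove()

    acts, grads = activations[0], gradients[0]
    # Grad-CAM: weight each channel by its global-average-pooled gradient,
    # sum over channels, and keep only positive contributions.
    weights = grads.mean(dim=(2, 3), keepdim=True)
    cam = torch.relu((weights * acts).sum(dim=1))
    return cam / (cam.max() + 1e-8)  # normalized heatmap, shape (N, H, W)

For a ParallelNets-style model returning (segmentation, tip_xy), one would pass, e.g., score_fn=lambda out: out[1].sum() to attribute the crack tip regression.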

References

[1] Selvaraju et al. (2020). Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 128, 336–359.