explainable-crack-tip-detection
Explainable ML for fatigue crack tip detection - Implementation
Science Score: 67.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ✓ DOI references: Found 4 DOI reference(s) in README
- ✓ Academic publication links: Links to zenodo.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (14.9%) to scientific vocabulary
Keywords
Repository
Explainable ML for fatigue crack tip detection - Implementation
Basic Info
Statistics
- Stars: 6
- Watchers: 1
- Forks: 2
- Open Issues: 0
- Releases: 1
Topics
Metadata Files
README.md
Explainable machine learning for precise fatigue crack tip detection
This repository contains the code used to generate the results of the research article:

D. Melching, T. Strohmann, G. Requena, E. Breitbarth (2022).
Explainable machine learning for precise fatigue crack tip detection.
Scientific Reports. DOI: 10.1038/s41598-022-13275-1

The article is open access and available via the DOI above.
Abstract
Data-driven models based on deep learning have led to tremendous breakthroughs in classical computer vision tasks and have recently made their way into natural sciences. However, the absence of domain knowledge in their inherent design significantly hinders the understanding and acceptance of these models. Nevertheless, explainability is crucial to justify the use of deep learning tools in safety-relevant applications such as aircraft component design, service and inspection. In this work, we train convolutional neural networks for crack tip detection in fatigue crack growth experiments using full-field displacement data obtained by digital image correlation. For this, we introduce the novel architecture ParallelNets – a network which combines segmentation and regression of the crack tip coordinates – and compare it with a classical U-Net-based architecture. Aiming for explainability, we use the Grad-CAM interpretability method to visualize the neural attention of several models. Attention heatmaps show that ParallelNets is able to focus on physically relevant areas like the crack tip field, which explains its superior performance in terms of accuracy, robustness, and stability.
Dependencies
All required modules, with pinned versions, are listed in requirements.txt. Install them with:

```shell
pip install -r requirements.txt
```
Usage
The code can be used to produce attention heatmaps of trained neural networks by following the instructions below.
1) Data
In order to run the scripts, the nodal displacement data of the fatigue crack propagation experiments S950,1.6 and S160,2.0, as well as the nodemap and ground truth data of S160,4.7, are needed. The data is available on Zenodo under the DOI 10.5281/zenodo.5740216.
Download the data and place it in a folder named `data`.
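Before running the scripts, a quick sanity check can confirm that the data folder is in place. The helper below is hypothetical (not part of the repository); the folder name `data` follows the instruction above, and the expected subfolder names depend on how the Zenodo archive is unpacked.

```python
from pathlib import Path

def find_experiments(root: str = "data") -> list[str]:
    """Return the names of experiment subfolders found under the data folder.

    Returns an empty list if the folder is missing, which means the Zenodo
    dataset (DOI 10.5281/zenodo.5740216) still needs to be downloaded.
    """
    path = Path(root)
    if not path.is_dir():
        return []
    return sorted(p.name for p in path.iterdir() if p.is_dir())
```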
2) Preparation
Create training and validation data by interpolating the raw nodal displacement data to arrays of size 2x256x256, where the first channel holds the x-displacement and the second the y-displacement:

```shell
python make_data.py
```
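The interpolation step can be sketched as follows. This is a simplified illustration of the idea, not the repository's actual make_data.py; the function name and the grid handling are assumptions.

```python
# Sketch of the preparation step: scattered nodal displacements from DIC are
# resampled onto a regular grid, yielding a 2 x size x size array where
# channel 0 holds the x-displacement and channel 1 the y-displacement.
import numpy as np
from scipy.interpolate import griddata

def interpolate_displacements(x, y, ux, uy, size=256):
    """Interpolate scattered nodal displacements onto a regular grid."""
    xi = np.linspace(x.min(), x.max(), size)
    yi = np.linspace(y.min(), y.max(), size)
    grid_x, grid_y = np.meshgrid(xi, yi)
    points = np.column_stack([x, y])
    # Linear interpolation; points outside the convex hull are filled with 0.
    u = griddata(points, ux, (grid_x, grid_y), method="linear", fill_value=0.0)
    v = griddata(points, uy, (grid_x, grid_y), method="linear", fill_value=0.0)
    return np.stack([u, v])  # shape: (2, size, size)
```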
3) Training, validation, and tests
To train a model with the ParallelNets architecture, run
```shell
python ParallelNets_train.py
```
After training, test a model's performance by running

```shell
python ParallelNets_test.py
```
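The ParallelNets idea from the abstract (a network that combines segmentation and regression of the crack tip coordinates) can be sketched roughly as below. This is a minimal illustrative model, not the repository's actual architecture, which is U-Net based; layer sizes and head designs are assumptions.

```python
# Minimal sketch of the ParallelNets concept: a shared convolutional encoder
# feeds both a per-pixel segmentation head and a regression head that
# predicts the crack tip coordinates (x, y).
import torch
import torch.nn as nn

class ParallelNetsSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: input has 2 channels (x- and y-displacement fields).
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        # Segmentation head: per-pixel crack tip probability map (logits).
        self.seg_head = nn.Conv2d(16, 1, 1)
        # Regression head: global pooling, then a linear layer for (x, y).
        self.reg_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2)
        )

    def forward(self, x):
        features = self.encoder(x)
        return self.seg_head(features), self.reg_head(features)
```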
4) Explainability and visualization
You can plot the segmentation and crack tip predictions using
```shell
python ParallelNets_plot.py
```

and visualize network and layer-wise attention by running
```shell
python ParallelNets_visualize.py
```

The explainability method uses a variant of the Grad-CAM algorithm [1].
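A generic Grad-CAM computation [1] can be sketched as follows; this is an illustrative re-implementation, not the repository's code. Gradients of a scalar score with respect to a chosen convolutional layer's activations are channel-wise averaged into weights, and the weighted activation sum is rectified with a ReLU to give the attention heatmap.

```python
# Generic Grad-CAM sketch using PyTorch hooks to capture activations and
# gradients of a target convolutional layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(model, target_layer, x):
    """Return a Grad-CAM heatmap at the spatial resolution of target_layer."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, out):
        activations["a"] = out

    def bwd_hook(_, __, grad_out):
        gradients["g"] = grad_out[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        score = model(x).sum()  # scalar score to explain
        model.zero_grad()
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    # Channel weights: global-average-pooled gradients.
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)
    # Weighted activation sum, ReLU-rectified -> non-negative attention map.
    return F.relu((weights * activations["a"]).sum(dim=1))
```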
References
[1] Selvaraju et al. (2020). Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 128, 336-359.
Owner
- Name: German Aerospace Center (DLR) - Institute of Materials Research
- Login: dlr-wf
- Kind: organization
- Location: Germany
- Repositories: 1
- Profile: https://github.com/dlr-wf
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Melching"
    given-names: "David"
    orcid: "https://orcid.org/0000-0001-5111-6511"
  - family-names: "Strohmann"
    given-names: "Tobias"
    orcid: "https://orcid.org/0000-0002-9277-1376"
  - family-names: "Requena"
    given-names: "Guillermo"
    orcid: "https://orcid.org/0000-0001-5682-1404"
  - family-names: "Breitbarth"
    given-names: "Eric"
    orcid: "https://orcid.org/0000-0002-3479-9143"
title: "Explainable machine learning for precise fatigue crack tip detection"
version: 1.0.0
doi: 10.5281/zenodo.6602447
date-released: 2022-06-01
url: "https://github.com/melc-da/explainable-crack-tip-detection"
preferred-citation:
  type: article
  authors:
    - family-names: "Melching"
      given-names: "David"
      orcid: "https://orcid.org/0000-0001-5111-6511"
    - family-names: "Strohmann"
      given-names: "Tobias"
      orcid: "https://orcid.org/0000-0002-9277-1376"
    - family-names: "Requena"
      given-names: "Guillermo"
      orcid: "https://orcid.org/0000-0001-5682-1404"
    - family-names: "Breitbarth"
      given-names: "Eric"
      orcid: "https://orcid.org/0000-0002-3479-9143"
  doi: "10.1038/s41598-022-13275-1"
  journal: "Scientific Reports"
  # month: 9
  # start: 1 # First page number
  # end: 10 # Last page number
  title: "Explainable machine learning for precise fatigue crack tip detection"
  # issue: 1
  # volume: 1
  year: 2022
GitHub Events
Total
- Watch event: 2
- Fork event: 3
Last Year
- Watch event: 2
- Fork event: 3
Dependencies
- Pillow ==8.4.0
- matplotlib ==3.4.3
- numpy ==1.21.3
- opencv-python ==4.5.4.58
- scipy ==1.7.1
- tensorboard ==2.7.0
- torch ==1.7.1
- torchvision ==0.8.2