An explainable artificial intelligence approach for damage detection in structural health monitoring
Artificial intelligence (AI) has been used in recent years as a novel approach to damage detection in modern structural health monitoring (SHM) systems. Nevertheless, the so-called “black-box” nature of many AI algorithms has limited practitioners’ trust in applying AI to real-world SHM. This study proposes an explainable artificial intelligence (XAI) approach for SHM systems that provides the tools required to overcome this lack of trust. A one-class support vector machine (SVM) for outlier detection is used to identify damage in structural response data. Targeting the need for trustworthy AI in real-world SHM applications, the Shapley additive explanations (SHAP) XAI method is used to explain which features of the structural response data are indicative of damage. For validation, structural response data from simulations of a pedestrian bridge, in which damage may or may not be present, are used. The results show that damage detection is achieved with the one-class SVM algorithm, and that the SHAP-based approach can explain the reasoning behind each detected damage case. It is expected that the approach proposed in this study will serve as a basis for applying XAI in real-world SHM systems.
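The two-stage pipeline described above (one-class SVM outlier detection followed by Shapley-value attribution) can be sketched in a few lines. This is an illustrative toy example, not the paper's bridge model: the three response features (natural frequency, damping ratio, mode-shape amplitude), their numeric values, and the synthetic "damaged" sample are all assumptions. With only three features, the Shapley values can be computed exactly by enumerating all coalitions; the `shap` package's KernelExplainer approximates this same sum for higher-dimensional inputs.

```python
import itertools
import math

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Hypothetical undamaged-state response features (e.g. first natural
# frequency, damping ratio, mode-shape amplitude); values are illustrative.
X_train = rng.normal(loc=[1.00, 0.05, 0.80],
                     scale=[0.02, 0.005, 0.02], size=(200, 3))

# One-class SVM trained on undamaged data only; nu bounds the fraction
# of training samples treated as outliers.
clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(X_train)

# A synthetic "damaged" sample: the natural-frequency feature has dropped.
x = np.array([0.90, 0.05, 0.80])
baseline = X_train.mean(axis=0)  # reference (undamaged) point


def f(z):
    """Anomaly score of one sample (negative = outlier/damage)."""
    return clf.decision_function(z.reshape(1, -1))[0]


def shapley(f, x, baseline):
    """Exact Shapley values of f at x relative to the baseline.

    Feasible here because 3 features give only 2**3 coalitions; absent
    features take their baseline value, present ones the value from x.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in itertools.combinations(others, k):
                # Standard Shapley coalition weight |S|!(n-|S|-1)!/n!
                w = math.factorial(k) * math.factorial(n - k - 1) / math.factorial(n)
                with_i = baseline.copy()
                with_i[list(S) + [i]] = x[list(S) + [i]]
                without_i = baseline.copy()
                without_i[list(S)] = x[list(S)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi


phi = shapley(f, x, baseline)
print(clf.predict(x.reshape(1, -1))[0])  # 1 = inlier, -1 = outlier
print(phi)  # per-feature attribution of the anomaly score
```

By the Shapley efficiency property, `phi.sum()` equals `f(x) - f(baseline)`, so the attributions decompose exactly how far the sample's anomaly score falls below the undamaged reference; here the frequency feature, the only one perturbed, receives the dominant (most negative) attribution.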