DC Field | Value | Language |
---|---|---|
dc.contributor.author | Luckey, Daniel | - |
dc.contributor.author | Fritz, Henrieke | - |
dc.contributor.author | Legatiuk, Dmitrii | - |
dc.contributor.author | Peralta Abadia, Jose | - |
dc.contributor.author | Walther, Christian | - |
dc.contributor.author | Smarsly, Kay | - |
dc.date.accessioned | 2021-11-08T09:21:36Z | - |
dc.date.available | 2021-11-08T09:21:36Z | - |
dc.date.issued | 2022 | - |
dc.identifier.citation | Structural Integrity 21: 331-346 (2022) | de_DE |
dc.identifier.issn | 2522-560X | de_DE |
dc.identifier.uri | http://hdl.handle.net/11420/10806 | - |
dc.description.abstract | In recent years, structural health monitoring (SHM) applications have been significantly enhanced, driven by advancements in artificial intelligence (AI) and machine learning (ML), a subcategory of AI. Although ML algorithms allow the detection of patterns and features in sensor data that would otherwise remain undetected, the generally opaque inner processes and black-box character of ML algorithms limit the application of ML to SHM. Incomprehensible decision-making processes often result in doubts and mistrust in ML algorithms among engineers and stakeholders. In an attempt to increase trust in ML algorithms, explainable artificial intelligence (XAI) aims to provide explanations of decisions made by black-box ML algorithms. However, there is a lack of XAI approaches that meet all requirements of SHM applications. This chapter provides a review of ML and XAI approaches relevant to SHM and proposes a conceptual XAI framework pertinent to SHM applications. First, ML algorithms relevant to SHM are categorized. Next, XAI approaches, such as transparent models and model-specific explanations, are presented and categorized to identify XAI approaches suitable for implementation in SHM applications. Finally, based on the categorization of ML algorithms and the presentation of XAI approaches, the conceptual XAI framework is introduced. The proposed conceptual XAI framework is expected to provide a basis for improving ML acceptance and transparency and, therefore, to increase trust in ML algorithms implemented in SHM applications. | en |
dc.language.iso | en | de_DE |
dc.relation.ispartof | Structural integrity | de_DE |
dc.subject | Artificial intelligence (AI) | de_DE |
dc.subject | Explainable artificial intelligence (XAI) | de_DE |
dc.subject | Machine learning (ML) | de_DE |
dc.subject | Structural health monitoring (SHM) | de_DE |
dc.title | Explainable Artificial Intelligence to Advance Structural Health Monitoring | de_DE |
dc.type | Article | de_DE |
dc.type.dini | article | - |
dcterms.DCMIType | Text | - |
tuhh.abstract.english | In recent years, structural health monitoring (SHM) applications have been significantly enhanced, driven by advancements in artificial intelligence (AI) and machine learning (ML), a subcategory of AI. Although ML algorithms allow the detection of patterns and features in sensor data that would otherwise remain undetected, the generally opaque inner processes and black-box character of ML algorithms limit the application of ML to SHM. Incomprehensible decision-making processes often result in doubts and mistrust in ML algorithms among engineers and stakeholders. In an attempt to increase trust in ML algorithms, explainable artificial intelligence (XAI) aims to provide explanations of decisions made by black-box ML algorithms. However, there is a lack of XAI approaches that meet all requirements of SHM applications. This chapter provides a review of ML and XAI approaches relevant to SHM and proposes a conceptual XAI framework pertinent to SHM applications. First, ML algorithms relevant to SHM are categorized. Next, XAI approaches, such as transparent models and model-specific explanations, are presented and categorized to identify XAI approaches suitable for implementation in SHM applications. Finally, based on the categorization of ML algorithms and the presentation of XAI approaches, the conceptual XAI framework is introduced. The proposed conceptual XAI framework is expected to provide a basis for improving ML acceptance and transparency and, therefore, to increase trust in ML algorithms implemented in SHM applications. | de_DE |
tuhh.publisher.doi | 10.1007/978-3-030-81716-9_16 | - |
tuhh.publication.institute | Digitales und autonomes Bauen B-1 | de_DE |
tuhh.type.opus | (wissenschaftlicher) Artikel | - |
dc.type.driver | article | - |
dc.type.casrai | Journal Article | - |
tuhh.container.volume | 21 | de_DE |
tuhh.container.startpage | 331 | de_DE |
tuhh.container.endpage | 346 | de_DE |
dc.relation.project | BIM-basierte Informationsmodellierung zur semantischen Abbildung intelligenter Bauwerksmonitoringsysteme | de_DE |
dc.relation.project | Datengestützte Analysemodelle für schlanke Bauwerke unter Nutzung von Explainable Artificial Intelligence | de_DE |
dc.relation.project | Fehlertolerantes, drahtloses Bauwerksmonitoring basierend auf Frameanalyse und Deep Learning | de_DE |
dc.relation.project | Semi-probabilistische, sensorbasierte Bemessungs- und Entwurfskonzepte für intelligente Bauwerke | de_DE |
dc.identifier.scopus | 2-s2.0-85117883229 | de_DE |
datacite.resourceType | Article | - |
datacite.resourceTypeGeneral | JournalArticle | - |
item.mappedtype | Article | - |
item.openairetype | Article | - |
item.languageiso639-1 | en | - |
item.grantfulltext | none | - |
item.cerifentitytype | Publications | - |
item.creatorOrcid | Luckey, Daniel | - |
item.creatorOrcid | Fritz, Henrieke | - |
item.creatorOrcid | Legatiuk, Dmitrii | - |
item.creatorOrcid | Peralta Abadia, Jose | - |
item.creatorOrcid | Walther, Christian | - |
item.creatorOrcid | Smarsly, Kay | - |
item.creatorGND | Luckey, Daniel | - |
item.creatorGND | Fritz, Henrieke | - |
item.creatorGND | Legatiuk, Dmitrii | - |
item.creatorGND | Peralta Abadia, Jose | - |
item.creatorGND | Walther, Christian | - |
item.creatorGND | Smarsly, Kay | - |
item.fulltext | No Fulltext | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
crisitem.project.funder | Deutsche Forschungsgemeinschaft (DFG) | - |
crisitem.project.funder | Deutsche Forschungsgemeinschaft (DFG) | - |
crisitem.project.funder | Deutsche Forschungsgemeinschaft (DFG) | - |
crisitem.project.funder | Deutsche Forschungsgemeinschaft (DFG) | - |
crisitem.project.funderid | 501100001659 | - |
crisitem.project.funderid | 501100001659 | - |
crisitem.project.funderid | 501100001659 | - |
crisitem.project.funderid | 501100001659 | - |
crisitem.project.funderrorid | 018mejw64 | - |
crisitem.project.funderrorid | 018mejw64 | - |
crisitem.project.funderrorid | 018mejw64 | - |
crisitem.project.funderrorid | 018mejw64 | - |
crisitem.project.grantno | SM 281/12-1 | - |
crisitem.project.grantno | SM 281/14-1 | - |
crisitem.project.grantno | SM 281/15-1 | - |
crisitem.project.grantno | SM 281/9-1 | - |
crisitem.author.dept | Digitales und autonomes Bauen B-1 | - |
crisitem.author.dept | Digitales und autonomes Bauen B-1 | - |
crisitem.author.orcid | 0000-0002-0028-5793 | - |
crisitem.author.orcid | 0000-0003-0261-6792 | - |
crisitem.author.orcid | 0000-0001-7228-3503 | - |
crisitem.author.parentorg | Studiendekanat Bauwesen (B) | - |
crisitem.author.parentorg | Studiendekanat Bauwesen (B) | - |
Appears in Collections: | Publications without fulltext |
Page view(s): 79 (last week: 1, last month: 25; checked on Feb 3, 2023)
SCOPUS™ citations: 1 (last week: 0; checked on Jun 30, 2022)