TUHH Open Research
Explainable machine learning: A case study on impedance tube measurements

Publication Type
Conference Paper
Date Issued
2021-08
Language
English
Author(s)
Stender, Merten
Wedler, Mathies
Hoffmann, Norbert
Adam, Christian
Institute
Strukturdynamik M-14  
TORE-URI
http://hdl.handle.net/11420/10718
Citation
International Congress and Exposition of Noise Control Engineering (INTER-NOISE 2021)
Contribution to Conference
50th International Congress and Exposition of Noise Control Engineering, INTER-NOISE 2021  
Publisher DOI
10.3397/IN-2021-2342
Scopus ID
2-s2.0-85117391502
Is Part Of
ISBN 978-173259865-2
Abstract
Machine learning techniques allow for finding hidden patterns and signatures in data. Currently, these methods are gaining increased interest in engineering in general and in vibroacoustics in particular. Although ML methods are successfully applied, it is hardly understood how these black-box-type methods make their decisions. Explainable machine learning aims at overcoming this issue by deepening the understanding of the decision-making process through perturbation-based model diagnosis. This paper introduces machine learning methods and reviews recent techniques for explainability and interpretability. These methods are exemplified on sound absorption coefficient spectra of a sound-absorbing foam material measured in an impedance tube. Variances of the absorption coefficient measurements as a function of the specimen thickness and the operator are modeled by univariate and multivariate machine learning models. In order to identify the driving patterns, i.e., how and in which frequency regime the measurements are affected by the setup specifications, Shapley additive explanations are derived for the ML models. It is demonstrated how explaining machine learning models can be used to discover and express complicated relations in experimental data, thereby paving the way to novel knowledge discovery strategies in evidence-based modeling.
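The workflow described in the abstract (fit a model of absorption measurements against setup specifications, then derive Shapley additive explanations for it) can be illustrated with a short Python sketch. This is a minimal, hypothetical example, not the authors' pipeline: the synthetic data, the feature names (thickness, operator), and the choice of a random forest explained via the shap library's TreeExplainer are all assumptions made here for illustration.

    # Hypothetical sketch only: synthetic stand-in data, not the paper's
    # impedance tube measurements or the authors' actual models.
    import numpy as np
    import shap
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Illustrative features: specimen thickness [mm] and an encoded
    # operator ID; target: absorption coefficient at one frequency bin.
    X = np.column_stack([
        rng.uniform(20.0, 60.0, 200),   # specimen thickness
        rng.integers(0, 3, 200),        # operator ID (encoded category)
    ])
    y = 0.4 + 0.01 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.02, 200)

    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes SHAP values for tree ensembles; each value
    # quantifies how much a feature shifts one prediction away from the
    # model's mean output.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Summary plot ranks features by their overall attribution, analogous
    # to asking which setup specification drives the measured variance.
    shap.summary_plot(shap_values, X, feature_names=["thickness", "operator"])

In the paper's setting, one such model (and explanation) would be derived per frequency bin or over the full spectrum, so the SHAP attributions reveal in which frequency regime thickness and operator affect the measurements.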
Subjects
MLE@TUHH
TUHH