Author: Wala, Jens
Dates: 2024-10-21 (available); 2024-09-18 (issued)
Published in: 35. Forum Bauinformatik, fbi 2024: 66-73
Handle: https://hdl.handle.net/11420/49590

Abstract: Critical infrastructures such as power grids, transportation networks, and water systems are essential to national economies and societal well-being. Integrating Artificial Intelligence (AI) into these systems could enhance productivity and operational resilience. However, the adoption of AI in critical infrastructures necessitates a focus on explainability to ensure transparency, trust, and regulatory compliance. This paper explores inherently explainable models (IEMs) and post hoc explainable models (PHEMs) within the domain of critical infrastructures. By examining regulatory requirements, analyzing different AI models designed for explainability, and comparing these models, this paper provides a comprehensive overview of strategies for selecting AI systems that enhance transparency and compliance. The findings underscore the importance of choosing appropriate AI models to ensure safe, reliable, and legally accountable AI implementation in critical infrastructure, ultimately supporting societal functions and public safety.

Language: en
License: https://creativecommons.org/licenses/by/4.0/
Keywords: Critical Infrastructure; Explainable Artificial Intelligence; Interpretability; Resilience; Transparency
Classification:
- Computer Science, Information and General Works::006: Special computer methods
- Social Sciences::333: Economics of Land and Energy::333.7: Natural Resources, Energy and Environment
- Technology::681: Precision Instruments and Other Devices

Title: AI in Critical Infrastructures – Explainability and Models
Type: Conference Paper
DOI: 10.15480/882.13498