Explanation in bio-inspired computing: towards understanding of AI systems
Publication Type
Conference Paper
Date Issued
2026-02-25
Language
English
Citation
1st International Conference on Artificial Intelligence for Computing, Astronomy, and Renewable Energy, AICARE 2025 (2026)
Publisher DOI
Publisher
IEEE
ISBN of container
979-8-3315-5309-8
979-8-3315-5310-4
Artificial intelligence methods and applications have recently seen a massive surge, driven in part by the success of neural networks in areas such as image classification and by LLMs generating near-perfect natural-language texts. Largely unnoticed by the public, but essential for many AI methods to function, bio-inspired optimisation techniques have also seen rising usage. However, the more complex these techniques become, the more their explainability decreases: even the developers of a neural network can seldom state why its results are what they are. The explainability of AI methods, and of systems in general, is nevertheless essential for safety and security reasons, and for gaining and maintaining the trust of system users. While explainability research has therefore gained significant traction for prominent AI methods such as neural networks, bio-inspired optimisation techniques have received less attention in this regard. The difficulty of explaining these algorithms lies in their use of populations and randomness. We present an approach to track individuals in bio-inspired optimisation techniques, aiming to improve our understanding of the quality of results produced by such algorithms. To that end, we introduce a data model, integrate it into standard implementations of these approaches, and provide a visualisation of the relational information between individuals, yielding more insight into these optimisation techniques and providing a first step toward improved explainability.
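The core idea of the abstract — recording each individual of a population-based optimiser together with its parent links so that lineages can later be inspected — can be sketched as follows. This is a minimal illustrative sketch around a toy genetic algorithm (one-max); all names (`TrackedIndividual`, `lineage`, field names such as `uid` and `parents`) are assumptions for illustration and not the paper's actual data model.

```python
import random
from dataclasses import dataclass

@dataclass
class TrackedIndividual:
    # Hypothetical record type; fields are illustrative, not the paper's schema.
    uid: int          # unique identifier of this individual
    generation: int   # generation in which it was created
    genome: list      # bit-string genome (one-max toy problem)
    fitness: float    # here simply the number of 1-bits
    parents: tuple    # uids of the parents; empty for the initial population

def run_ga(pop_size=20, generations=10, genome_len=8, seed=42):
    """Toy GA that archives every individual it ever creates."""
    rng = random.Random(seed)
    counter = 0
    archive = {}  # uid -> TrackedIndividual: the relational record to visualise

    def new_individual(gen, genome, parents=()):
        nonlocal counter
        counter += 1
        ind = TrackedIndividual(counter, gen, genome, sum(genome), parents)
        archive[ind.uid] = ind
        return ind

    pop = [new_individual(0, [rng.randint(0, 1) for _ in range(genome_len)])
           for _ in range(pop_size)]

    for gen in range(1, generations + 1):
        pop.sort(key=lambda i: i.fitness, reverse=True)
        elite = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, genome_len)
            genome = a.genome[:cut] + b.genome[cut:]   # one-point crossover
            if rng.random() < 0.1:                      # bit-flip mutation
                pos = rng.randrange(genome_len)
                genome[pos] ^= 1
            children.append(new_individual(gen, genome, (a.uid, b.uid)))
        pop = children

    return pop, archive

def lineage(uid, archive):
    """Trace a solution back to the initial population via parent links."""
    chain = [uid]
    while archive[chain[-1]].parents:
        chain.append(archive[chain[-1]].parents[0])  # follow first parent
    return chain

pop, archive = run_ga()
best = max(pop, key=lambda i: i.fitness)
trace = lineage(best.uid, archive)
```

The archive (uids plus parent tuples) forms a genealogy graph; feeding it to any graph-drawing tool would give the kind of relational visualisation the abstract describes.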
DDC Class
006.3: Artificial Intelligence