2023-06-25
https://tore.tuhh.de/handle/11420/15826

Many embedded systems are safety-critical real-time systems that have to meet hard deadlines (e.g., airbag or flight-control systems). When designing such real-time systems, it is of utmost importance to guarantee that all tasks of a system meet their given deadlines. For this purpose, dedicated timing analyses are required that examine the worst-case behavior of a system and can provide such guarantees. If deadlines are not met, optimizations need to be applied that modify the code of the system so that the timing constraints are met after all.

Research on such analyses and optimizations for hard real-time systems is an extremely lively area in which new results are presented regularly and at a fast pace. Naturally, the evaluation of such analyses and optimizations plays an important role.
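As a minimal illustration of the kind of guarantee such timing analyses provide (purely illustrative, not part of this project), the classic Liu and Layland utilization bound is one of the simplest sufficient schedulability tests for rate-monotonic scheduling:

```python
def rm_utilization_test(tasks):
    """Sufficient schedulability test for rate-monotonic scheduling.

    tasks: list of (wcet, period) pairs with implicit deadlines
    (deadline = period). Returns True if the task set is guaranteed
    schedulable by the Liu & Layland utilization bound; a False result
    is inconclusive, since the bound is sufficient but not necessary.
    """
    n = len(tasks)
    utilization = sum(wcet / period for wcet, period in tasks)
    bound = n * (2 ** (1 / n) - 1)  # for n = 3: about 0.7798
    return utilization <= bound

# Three tasks as (WCET, period): total utilization 0.65 <= 0.7798
print(rm_utilization_test([(1, 4), (1, 5), (2, 10)]))  # True
```

Real-world analyses are far more involved (response-time analysis, WCET estimation on actual hardware), but they answer the same question: do all tasks provably meet their deadlines?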
Nowadays, evaluation typically relies on benchmarking: new analyses or optimizations are applied to existing collections of applications, tasks, or program codes. The benchmarks currently in use are, however, highly limited and insufficient for a sound scientific evaluation, especially when massively parallel multi-task systems are considered.

For a well-founded and reproducible evaluation of analyses and optimizations, there is a strong demand for universally applicable benchmark approaches that are freely available to the entire scientific community. On the one hand, such benchmarks should satisfy the needs and requirements of various branches of research (e.g., schedulability analysis, WCET analysis, or compiler optimization); on the other hand, they should realistically represent different application domains such as control or signal-processing applications.

This project aims at the realization of a flexible and parameterizable benchmark generator that produces benchmark programs in an automated, pseudo-randomized, and reproducible fashion. The generator will in particular cover both the system level and the code level by producing complete task sets as well as actual program code for the individual tasks. To enable widespread use of the generator and broad collaboration with any interested people and groups, the project will be inclusive, and the developed software will be openly available from the beginning. Ultimately, the project shall lead to a methodology for benchmark-based evaluation that describes clearly and reproducibly, for the different real-time communities, how to use the benchmark generator to obtain plausible, sound, and scientifically accepted evaluation results.
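To sketch what pseudo-randomized yet reproducible task-set generation can look like (a hedged illustration only; the project's actual generator is not specified here), the UUniFast algorithm by Bini and Buttazzo is a widely used way to draw random per-task utilizations that sum to a given total, and seeding it makes every run repeatable:

```python
import random

def uunifast(n, total_utilization, seed):
    """Generate n task utilizations summing to total_utilization.

    Seeded UUniFast: the same seed always yields the same task set,
    which is the reproducibility property a benchmark generator needs.
    """
    rng = random.Random(seed)  # dedicated RNG: no global state involved
    utilizations = []
    remaining = total_utilization
    for i in range(1, n):
        # Draw the utilization still left for the remaining n - i tasks.
        next_remaining = remaining * rng.random() ** (1 / (n - i))
        utilizations.append(remaining - next_remaining)
        remaining = next_remaining
    utilizations.append(remaining)
    return utilizations

tasks = uunifast(4, 0.8, seed=42)
# The utilizations telescope back to the requested total ...
assert abs(sum(tasks) - 0.8) < 1e-9
# ... and an identical seed reproduces the identical task set.
assert tasks == uunifast(4, 0.8, seed=42)
```

A full generator would additionally assign periods and deadlines and emit program code per task, but the seeding discipline shown here is what makes published evaluation results reproducible by others.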
haRTStone - Automated Generation of Benchmark Programs for the Evaluation of Analyses and Optimizations for Hard Real-Time Systems