Acronym
haRTStone
Project Title
haRTStone - Automated Generation of Benchmark Programs for the Evaluation of Analyses and Optimizations for Hard Real-Time Systems
Funding Code
FA 1017/4-1
Reference Number
945.03-800
Start Date
November 16, 2017
End Date
December 31, 2021
Many embedded systems are safety-critical real-time systems that have to meet hard deadlines (e.g., airbag or flight control systems). When designing such real-time systems, it is of utmost importance to guarantee that all tasks of a system meet their given deadlines. For this purpose, dedicated timing analyses are required that examine the worst-case behavior of a system and are able to provide such guarantees. If deadlines are not met, optimizations must be applied that modify the system's code until the timing constraints are finally satisfied.

Research on such analyses and optimizations for hard real-time systems is an extremely lively area in which new results appear regularly and at a fast pace. Naturally, the evaluation of such analyses and optimizations plays a very important role. Today, evaluation typically relies on benchmarking: new analyses or optimizations are applied to existing collections of applications, task sets, or program code. The currently available benchmarks, however, are highly limited and insufficient for a sound scientific evaluation, especially when massively parallel multi-task systems are considered.

For a well-founded and reproducible evaluation of analyses and optimizations, there is a strong demand for universally applicable benchmark approaches that are freely available to the entire scientific community. On the one hand, benchmarks should satisfy the needs and requirements of various branches of research (e.g., schedulability analysis, WCET analysis, compiler optimization); on the other hand, they should realistically represent different application domains such as control or signal processing.

This project aims to realize a flexible, parameterizable benchmark generator that produces benchmark programs in an automated, pseudo-randomized, and reproducible fashion. The generator will in particular cover both the system level and the code level, producing complete task sets as well as the actual program code for the individual tasks. To enable widespread use of the generator and broad collaboration with interested people and groups, the project will be inclusive, and the developed software will be openly available right from the beginning. Ultimately, the project shall lead to a methodology for benchmark-based evaluation that describes clearly and reproducibly, for the different real-time communities, how to use the benchmark generator to obtain plausible, sound, and scientifically accepted evaluation results.
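The abstract does not specify how such pseudo-randomized yet reproducible task-set generation would work, so a minimal sketch may help. The following Python code is purely illustrative and not haRTStone's actual interface: it draws per-task utilizations with the well-known UUniFast algorithm (Bini & Buttazzo) and derives a task set from a fixed seed, so the same parameters always reproduce the same benchmark. All names (Task, generate_task_set, the period table) are hypothetical.

    import random
    from dataclasses import dataclass

    @dataclass
    class Task:
        period: float   # activation period (time units)
        wcet: float     # worst-case execution time (time units)

    def uunifast(n, total_util, rng):
        # UUniFast (Bini & Buttazzo): draws n utilizations that sum to
        # total_util, uniformly over the valid utilization space.
        utils, remaining = [], total_util
        for i in range(1, n):
            next_remaining = remaining * rng.random() ** (1.0 / (n - i))
            utils.append(remaining - next_remaining)
            remaining = next_remaining
        utils.append(remaining)
        return utils

    def generate_task_set(n, total_util, seed):
        # Pseudo-randomized but reproducible: the same seed always
        # yields exactly the same task set.
        rng = random.Random(seed)
        periods = [rng.choice([10, 20, 50, 100, 200, 500, 1000])
                   for _ in range(n)]
        utils = uunifast(n, total_util, rng)
        return [Task(p, u * p) for p, u in zip(periods, utils)]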
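A generated task set could then be fed into the analyses under evaluation. As a hedged example from the schedulability-analysis branch mentioned above, the classic Liu & Layland utilization bound for rate-monotonic scheduling is a sufficient (not necessary) test: n periodic tasks are schedulable if their total utilization U satisfies U <= n * (2^(1/n) - 1). Continuing the sketch above:

    def rm_utilization_test(tasks):
        # Liu & Layland (1973) bound for rate-monotonic scheduling:
        # sufficient for schedulability, but not necessary.
        n = len(tasks)
        u = sum(t.wcet / t.period for t in tasks)
        return u <= n * (2 ** (1.0 / n) - 1)

    # Example: a reproducible 5-task set at 60% total utilization.
    tasks = generate_task_set(n=5, total_util=0.6, seed=2017)
    print("RM-schedulable (sufficient test):", rm_utilization_test(tasks))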