Falk, Heiko; Gandyra, Max
2022-03-04; 2022-02-12
http://hdl.handle.net/11420/11808

Many embedded systems are safety-critical real-time systems that have to meet hard deadlines (e.g., airbag or flight control systems). When designing such real-time systems, it is of utmost importance to guarantee that all tasks of a system meet their given deadlines. For this purpose, dedicated timing analyses are required that examine the worst-case behavior of a system and are able to provide such guarantees. If deadlines are not met, optimizations must be applied that modify the system's code so that the timing constraints are eventually satisfied. Research on such analyses and optimizations for hard real-time systems is an extremely lively area where new results are presented regularly and at a very fast pace. Naturally, the evaluation of such analyses and optimizations plays a very important role. Nowadays, evaluation typically relies on benchmarking: new analyses or optimizations are applied to existing collections of applications, tasks or program codes. The currently used benchmarks are, however, highly limited and insufficient for a sound and scientific evaluation, especially if massively parallel multi-task systems are considered. For a well-founded and reproducible evaluation of analyses and optimizations, there is a strong demand for universally applicable benchmark approaches that are freely available to the entire scientific community. On the one hand, benchmarks should satisfy the needs and requirements of various branches of research (e.g., schedulability analysis, WCET analysis, compiler optimization); on the other hand, they should realistically represent different application domains such as control or signal-processing applications.
This project aims at the realization of a flexible and parameterizable benchmark generator that produces benchmark programs in an automated, pseudo-randomized and reproducible fashion. This benchmark generator will in particular cover the system and the code level by producing both complete task sets and the actual program code for the individual tasks. In order to enable widespread use of the generator and broad collaboration with interested people and groups, this project will be inclusive and the developed software will be openly available right from the beginning. In the end, this project shall lead to a methodology for benchmarking-based evaluation that describes clearly and reproducibly, for the different real-time communities, how to use the benchmark generator to obtain plausible, sound and scientifically accepted evaluation results. In order to generate realistic and useful benchmarks, it is necessary to characterize key features of real-life applications and benchmarks, and to classify such applications according to their respective application domains. For this purpose, this archive contains various feature extractors that are implemented as LLVM passes. These extractors translate ANSI-C benchmark programs into LLVM intermediate code and extract a number of numerical and structural features from it. Examples of simple, numerical features include:
- number of global variables,
- number of defined compound types,
- number of function declarations,
- number of basic blocks,
- number of instructions.
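The archive's extractors are C++ LLVM passes that walk the in-memory IR; purely as an illustration of the counting idea, the following simplified Python sketch derives a few of the simple numerical features by scanning textual LLVM IR (a .ll file) line by line. The function name `extract_simple_features` and the sample IR are invented for this example and are not part of the haRTStone software.

```python
import re

def extract_simple_features(llvm_ir: str) -> dict:
    """Count a few simple numerical features in textual LLVM IR.

    A toy stand-in for the real LLVM-pass extractors: it scans
    the .ll text instead of traversing the in-memory IR.
    """
    features = {
        "global_variables": 0,
        "function_declarations": 0,
        "function_definitions": 0,
        "basic_blocks": 0,
        "instructions": 0,
    }
    in_function = False
    for line in llvm_ir.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith(";"):
            continue  # skip blank lines and IR comments
        if re.match(r"@[\w.]+\s*=.*\bglobal\b", stripped):
            features["global_variables"] += 1
        elif stripped.startswith("declare "):
            features["function_declarations"] += 1
        elif stripped.startswith("define "):
            features["function_definitions"] += 1
            in_function = True
            features["basic_blocks"] += 1  # unlabeled entry block
        elif stripped == "}":
            in_function = False
        elif in_function:
            if re.match(r"[\w.]+:", stripped):
                # a label opens a new basic block
                features["basic_blocks"] += 1
            else:
                features["instructions"] += 1
    return features

# Invented sample IR, roughly what clang emits for a tiny C program.
sample_ir = """\
@counter = global i32 0

declare i32 @printf(ptr, ...)

define i32 @main() {
  %v = load i32, ptr @counter
  %c = icmp eq i32 %v, 0
  br i1 %c, label %then, label %done
then:
  store i32 1, ptr @counter
  br label %done
done:
  ret i32 0
}
"""

feats = extract_simple_features(sample_ir)
```

For the sample above this yields one global variable, one declaration, one definition, three basic blocks and six instructions. The real passes operate on `llvm::Module`, `llvm::Function` and `llvm::BasicBlock` objects, which makes such counts exact rather than dependent on textual conventions.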
Examples of more complex, structural features extracted by these LLVM passes include:
- number of neighbors of control flow graph nodes,
- number of various instruction types (e.g., integer arithmetic, floating-point arithmetic, memory loads/stores),
- average/minimal/maximal loop nesting depths,
- average/minimal/maximal basic block sizes,
- number of occurrences of data types as function arguments,
- number of occurrences of data types as function return values.

Language: en
License: https://mit-license.org/
Subject: Informatik
Title: haRTStone - Feature Extractor Software
Type: Software
DOI: 10.15480/336.4210
DOI: 10.5281/zenodo.6064245