Evaluating the performance of large language models for design validation
Publication Type
Conference Paper
Date Issued
2024-09
Language
English
Citation
2024 IEEE 37th International System-on-Chip Conference (SOCC)
Contribution to Conference
Publisher DOI
Scopus ID
Publisher
IEEE
ISBN
979-8-3503-7756-9
979-8-3503-7757-6
With the emergence of Large Language Models (LLMs), there has been growing interest in harnessing their potential beyond traditional natural language processing tasks. One such application is hardware design validation. This paper presents a comprehensive evaluation of LLMs on design validation tasks. In design validation, it is essential to analyze designs and craft appropriate testbenches for them. We evaluate the LLMs' ability to recognize hardware descriptions as well as their ability to generate testbenches for those designs, and we present an evaluation methodology and benchmarks for both tasks. Experiments were conducted with four prominent LLMs on designs ranging from small arithmetic blocks up to a small MIPS CPU. The results demonstrate promising performance up to a limited complexity threshold.
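The validation workflow the abstract describes — stimulating a design and checking its outputs against expected behavior — can be sketched in Python against a behavioral model of a small arithmetic block. This is a hypothetical illustration only: the paper targets HDL designs and HDL testbenches, and the `adder4` model and `run_testbench` helper are invented for this sketch.

```python
# Hypothetical sketch of testbench-style checking: drive a design
# with stimuli and compare outputs against a reference. A 4-bit
# adder stands in for the small arithmetic blocks mentioned in the
# abstract (the actual designs in the paper are written in an HDL).

def adder4(a: int, b: int) -> tuple[int, int]:
    """Behavioral model of a 4-bit adder: returns (sum, carry-out)."""
    total = (a & 0xF) + (b & 0xF)
    return total & 0xF, (total >> 4) & 0x1

def run_testbench() -> int:
    """Exhaustively stimulate the adder and count output mismatches."""
    failures = 0
    for a in range(16):
        for b in range(16):
            s, cout = adder4(a, b)
            # Recombine carry and sum and compare with integer addition.
            if (cout << 4) | s != a + b:
                failures += 1
    return failures
```

A generated testbench for a block this small can enumerate the full input space; for larger designs such as a CPU, directed or constrained-random stimuli would be needed instead.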
Subjects
design | large language model | validation
DDC Class
600: Technology