TUHH Open Research

LLMSecEval: a dataset of natural language prompts for security evaluations

Publication Type
Conference Paper
Date Issued
2023
Language
English
Author(s)
Tony, Catherine
Software Security E-22
Mutas, Markus
Ferreyra, Nicolas E. Diaz
Software Security E-22
Scandariato, Riccardo
Software Security E-22
TORE-URI
https://hdl.handle.net/11420/42701
Citation
20th IEEE/ACM International Conference on Mining Software Repositories (MSR 2023)
Contribution to Conference
20th IEEE/ACM International Conference on Mining Software Repositories, MSR 2023  
Publisher DOI
10.1109/MSR59073.2023.00084
Scopus ID
2-s2.0-85166355568
Publisher
IEEE
ISBN
979-8-3503-1184-6
Abstract
Large Language Models (LLMs) like Codex are powerful tools for performing code completion and code generation tasks, as they are trained on billions of lines of code from publicly available sources. Moreover, these models are capable of generating code snippets from Natural Language (NL) descriptions by learning languages and programming practices from public GitHub repositories. Although LLMs promise an effortless NL-driven deployment of software applications, the security of the code they generate has not been extensively investigated or documented. In this work, we present LLMSecEval, a dataset containing 150 NL prompts that can be leveraged for assessing the security performance of such models. These prompts are NL descriptions of code snippets prone to various security vulnerabilities listed in MITRE's Top 25 Common Weakness Enumeration (CWE) ranking. Each prompt in our dataset comes with a secure implementation example to facilitate comparative evaluations against code produced by LLMs. As a practical application, we show how LLMSecEval can be used for evaluating the security of snippets automatically generated from NL descriptions.
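The record above does not specify a programmatic interface for the dataset, so the following is only a rough sketch of the evaluation workflow the abstract describes. It assumes a local CSV copy of LLMSecEval with hypothetical columns cwe_id, nl_prompt, and secure_example, and uses a placeholder generate_code function standing in for the LLM under test; none of these names come from the publication itself.

# Minimal sketch, not the authors' tooling: iterate over LLMSecEval prompts,
# ask a model for code, and store the output for a later security check.
# Assumed (hypothetical) inputs: 'llmseceval_prompts.csv' with columns
# 'cwe_id', 'nl_prompt', and 'secure_example'; generate_code() is a stub
# for whichever code-generation model is being evaluated.
import csv
from pathlib import Path

DATASET = Path("llmseceval_prompts.csv")   # assumed local copy of the dataset
OUT_DIR = Path("generated")                # where generated snippets are kept


def generate_code(prompt: str) -> str:
    """Placeholder: query the code-generation model under evaluation."""
    raise NotImplementedError("plug in the model you want to assess")


def main() -> None:
    OUT_DIR.mkdir(exist_ok=True)
    with DATASET.open(newline="", encoding="utf-8") as fh:
        for i, row in enumerate(csv.DictReader(fh)):
            snippet = generate_code(row["nl_prompt"])
            # Keep the CWE label with the output so a static analyser or a
            # manual review can later check whether that weakness occurs.
            out_file = OUT_DIR / f"{i:03d}_{row['cwe_id']}.txt"
            out_file.write_text(snippet, encoding="utf-8")
            # The dataset's secure implementation example is the reference
            # point for the comparative evaluation mentioned in the abstract.
            print(f"{row['cwe_id']}: {len(snippet)} characters generated, "
                  f"reference example present: {bool(row['secure_example'].strip())}")


if __name__ == "__main__":
    main()

Generated snippets stored this way can then be passed to a static analyser or compared manually against the dataset's secure implementation examples, which is the kind of comparative evaluation the abstract refers to.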
Subjects
code security
CWE
LLMs
NL prompts
DDC Class
004: Computer Sciences