TUHH Open Research
Prompting techniques for secure code generation: a systematic investigation

Citation Link: https://doi.org/10.15480/882.16085
Publication Type
Journal Article
Date Issued
2025-10-04
Language
English
Author(s)
Tony, Catherine
Software Security E-22
Díaz Ferreyra, Nicolás E.
Software Security E-22
Mutas, Markus
Dhif, Salem
Scandariato, Riccardo
Software Security E-22
TORE-DOI
10.15480/882.16085
TORE-URI
https://hdl.handle.net/11420/58459
Journal
ACM Transactions on Software Engineering and Methodology
Volume
34
Issue
8
Start Page
1
End Page
53
Citation
ACM Transactions on Software Engineering and Methodology 34 (4): 1-53 (2025)
Publisher DOI
10.1145/3722108
Scopus ID
2-s2.0-105022479061
Publisher
Association for Computing Machinery (ACM)
Abstract
Large Language Models (LLMs) are gaining momentum in software development, with prompt-driven programming enabling developers to create code from Natural Language (NL) instructions. However, studies have questioned their ability to produce secure code and, thereby, the quality of prompt-generated software. In parallel, various prompting techniques that carefully tailor prompts have emerged to elicit optimal responses from LLMs. Still, the interplay between such prompting strategies and secure code generation remains underexplored and calls for further investigation.
Objective: In this study, we investigate the impact of different prompting techniques on the security of code generated from NL instructions by LLMs.
Method: First, we perform a systematic literature review to identify the existing prompting techniques that can be used for code generation tasks. A subset of these techniques is then evaluated on the GPT-3, GPT-3.5, and GPT-4 models for secure code generation, using an existing dataset of 150 security-relevant NL code generation prompts.
Results: Our work (i) classifies potential prompting techniques for code generation, (ii) adapts and evaluates a subset of the identified techniques for secure code generation tasks, and (iii) observes a reduction in security weaknesses across the tested LLMs, especially after using an existing technique called Recursive Criticism and Improvement (RCI), contributing valuable insights to the ongoing discourse on the security of LLM-generated code.
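The Recursive Criticism and Improvement (RCI) technique highlighted in the abstract can be sketched as a simple generate-critique-revise loop. The snippet below is only an illustration of the general pattern, not the authors' implementation: `call_llm` is a hypothetical stand-in for a real model API, and the canned responses merely mimic how a critique round might replace an injection-prone query with a parameterized one.

```python
# Illustrative sketch of an RCI-style prompting loop (hypothetical helper
# names; `call_llm` stubs out a real LLM API call with canned replies).

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; returns canned demo answers."""
    if "Review the following code" in prompt:
        return "The code interpolates user input into a SQL string (injection risk)."
    if "Based on the critique" in prompt:
        # Revised answer: parameterized query instead of string concatenation.
        return 'cur.execute("SELECT * FROM users WHERE name = ?", (name,))'
    # Initial (insecure) generation built by naive string concatenation.
    return "cur.execute(\"SELECT * FROM users WHERE name = '\" + name + \"'\")"

def rci_generate(task: str, rounds: int = 1) -> str:
    """Generate code, then repeatedly ask the model to critique and revise it."""
    code = call_llm(f"Write Python code for this task: {task}")
    for _ in range(rounds):
        critique = call_llm(f"Review the following code for security weaknesses:\n{code}")
        code = call_llm(
            f"Based on the critique below, rewrite the code securely.\n"
            f"Critique: {critique}\nCode:\n{code}"
        )
    return code

if __name__ == "__main__":
    print(rci_generate("look up a user by name in SQLite"))
```

In a real setting, each `call_llm` invocation would hit an actual model endpoint, and the loop would iterate until the critique reports no remaining weaknesses or a round limit is reached.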
Subjects
LLMs
secure code generation
prompt engineering
DDC Class
004: Computer Sciences
005: Computer Programming, Programs, Data and Security
License
https://creativecommons.org/licenses/by/4.0/
Publication version
publishedVersion
Name
3722108.pdf
Type
Main Article
Size
96.25 MB
Format
Adobe PDF