Title: An AI-powered auto-completion tool for Solidity smart contracts
Authors: Hensel, Fabian; Banerjee, Avik; Ebrahimi, Elmira; Schulte, Stefan
Venue: IEEE International Conference on Blockchain (Blockchain 2025)
Type: Conference Paper
Date: 2025-10 (issued); 2025-12-19 (repository record)
Handle: https://hdl.handle.net/11420/60391
DOI: 10.1109/blockchain67634.2025.00031
Keywords: blockchain; solidity; smart contracts; code completion; large language models
Classification: Computer Science, Information and General Works :: 005: Computer Programming, Programs, Data and Security :: 005.7: Data
Language: en

Abstract: Solidity smart contracts are widely used to implement decentralized applications. However, their development remains challenging due to the language's domain-specific complexity, the immutability of deployed contracts, which prevents post-deployment fixes, and the high risk of introducing security-critical vulnerabilities. While Large Language Models (LLMs) have advanced code generation across general domains, they often struggle to meet the structural and security-specific demands of smart contract development. This paper therefore presents a domain-adapted code completion model trained on 22,000 labeled code constructs extracted from Solidity contracts. The model is built on a transformer-based architecture and fine-tuned using Quantized Low-Rank Adaptation (QLoRA), a parameter-efficient method. The dataset is processed to highlight secure coding patterns and structural semantics, enabling the model to learn from both preceding and succeeding contexts. Evaluation using perplexity, the Bilingual Evaluation Understudy (BLEU) score, and the Metric for Evaluation of Translation with Explicit Ordering (METEOR) shows consistent gains across all three metrics compared to the base model. These results demonstrate that targeted adaptation of language models can significantly enhance coding support for Solidity smart contracts.
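The abstract reports perplexity and BLEU among its evaluation metrics. As an illustration of what these metrics compute, below is a minimal pure-Python sketch: perplexity as the exponentiated average negative log-likelihood per token, and sentence-level BLEU as a brevity-penalized geometric mean of modified n-gram precisions. The add-one smoothing is an assumption added here to keep short snippets scoreable; it is not claimed to be the paper's exact setup, and real evaluations typically use library implementations such as NLTK or sacrebleu.

```python
import math
from collections import Counter

def perplexity(token_log_probs):
    """Perplexity from the natural-log probabilities a model assigns to
    each token of a held-out sequence: exp of the mean negative
    log-likelihood per token (lower is better)."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of modified n-gram precisions
    (n = 1..max_n) times a brevity penalty. Add-one smoothing (an
    illustrative assumption, not part of the original metric) keeps the
    score defined when some n-gram order has no overlap."""
    if not candidate:
        return 0.0
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # clipped counts: each candidate n-gram credited at most as
        # often as it occurs in the reference
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append((overlap + 1) / (total + 1))
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # brevity penalty discourages candidates shorter than the reference
    bp = (1.0 if len(candidate) >= len(reference)
          else math.exp(1 - len(reference) / len(candidate)))
    return bp * geo_mean
```

In the paper's setting, `candidate` would be the tokens of a generated Solidity completion and `reference` the ground-truth snippet; a candidate identical to its reference scores 1.0, and truncated completions are penalized by the brevity term.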