Large language models for human-machine collaborative particle accelerator tuning through natural language
Citation Link: https://doi.org/10.15480/882.14551
Publication Type
Journal Article
Date Issued
2025-01-03
Language
English
Journal
Science Advances
Volume
11
Issue
1
Article Number
eadr4173
Citation
Science Advances 11 (1): eadr4173 (2025)
Publisher
American Association for the Advancement of Science (AAAS)
Autonomous tuning of particle accelerators is an active and challenging research field that aims to enable advanced accelerator technologies and cutting-edge high-impact applications, such as physics discovery, cancer research, and materials science. A remaining challenge in autonomous accelerator tuning is that the most capable algorithms require experts in optimization and machine learning to implement them for every new tuning task. Here, we propose using large language models (LLMs) to tune particle accelerators. In a proof-of-principle example, we demonstrate that LLMs can tune an accelerator subsystem based on nothing but a natural language prompt from the operator, and we compare their performance to state-of-the-art optimization algorithms, such as Bayesian optimization and reinforcement learning-trained optimization. In doing so, we also show that LLMs can perform numerical optimization of a nonlinear real-world objective. Ultimately, this work represents another complex task that LLMs can solve and promises to help accelerate the deployment of autonomous tuning algorithms to day-to-day particle accelerator operations.
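The abstract's core idea of an LLM-in-the-loop tuning procedure can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the objective function, the prompt wording, and `mock_llm_propose` (which stands in for a real chat-completion call and its reply parsing) are all hypothetical names and placeholders invented here for demonstration.

```python
import random

def objective(params):
    # Toy nonlinear stand-in for a beam-quality measurement,
    # e.g. a negative combined beam position/size error on a screen.
    return -sum((p - 0.3) ** 2 for p in params)

def history_to_prompt(history):
    # Serialize the tuning history as natural language; in the
    # LLM-tuning setting, this text would be sent to the model each step.
    lines = ["You are tuning magnet settings to maximize the objective."]
    for params, value in history:
        lines.append(f"settings={[round(p, 3) for p in params]} -> objective={value:.4f}")
    lines.append("Reply with the next settings as a list of floats.")
    return "\n".join(lines)

def mock_llm_propose(prompt, history, rng):
    # Hypothetical stand-in for an LLM API call: perturb the best
    # settings seen so far. A real run would parse the model's reply.
    best, _ = max(history, key=lambda h: h[1])
    return [p + rng.uniform(-0.1, 0.1) for p in best]

def tune(steps=50, dim=3, seed=0):
    rng = random.Random(seed)
    x = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
    history = [(x, objective(x))]
    for _ in range(steps):
        prompt = history_to_prompt(history)  # would be sent to the LLM here
        x = mock_llm_propose(prompt, history, rng)
        history.append((x, objective(x)))
    return history

if __name__ == "__main__":
    hist = tune()
    best, best_val = max(hist, key=lambda h: h[1])
    print(f"initial objective: {hist[0][1]:.4f}, best after tuning: {best_val:.4f}")
```

The loop mirrors the structure the abstract describes: measurements and settings are expressed in natural language, a model proposes the next settings, and the accelerator (here a toy function) returns a new objective value.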
DDC Class
530: Physics
620: Engineering
519: Applied Mathematics, Probabilities
004: Computer Sciences
Publication version
publishedVersion
Name
sciadv.adr4173.pdf
Type
Main Article
Size
1.6 MB
Format
Adobe PDF