Reinforcement learning-trained optimisers and Bayesian optimisation for online particle accelerator tuning
Citation Link: https://doi.org/10.15480/882.13143
Publication Type
Journal Article
Date Issued
2024-12-01
Language
English
TORE-DOI
10.15480/882.13143
Journal
Scientific Reports
Volume
14
Issue
1
Article Number
15733
Citation
Scientific Reports 14 (1): 15733 (2024)
Publisher DOI
https://doi.org/10.1038/s41598-024-66263-y
Scopus ID
Publisher
Springer Nature
Abstract
Online tuning of particle accelerators is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods such as Bayesian optimisation (BO) hold great promise for improving plant performance and reducing tuning times. At the same time, reinforcement learning (RL) is a capable method for learning intelligent controllers, and recent work shows that RL can also be used to train domain-specialised optimisers in so-called reinforcement learning-trained optimisation (RLO). In parallel efforts, both algorithms have been successfully adopted for particle accelerator tuning. Here we present a comparative case study, assessing the performance of both algorithms and providing a nuanced analysis of their merits and the practical challenges of deploying them at real-world facilities. Our results will help practitioners choose a suitable learning-based algorithm for their tuning tasks, accelerating the adoption of autonomous tuning and ultimately improving the availability of particle accelerators while pushing their operational limits.
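To give a rough sense of what a BO tuning loop of the kind discussed in the abstract looks like in practice, the following is a minimal Python sketch using a Gaussian-process surrogate with an expected-improvement acquisition. It is not the authors' implementation: the objective measure_beam_quality, the actuator bounds, and all hyperparameters are hypothetical placeholders for the real accelerator interface. In an RLO setup, the surrogate-plus-acquisition step would instead be replaced by a policy network trained beforehand (typically in simulation), which directly proposes the next actuator settings.

```python
# Minimal Bayesian-optimisation tuning loop (illustrative sketch only).
# `measure_beam_quality` is a hypothetical stand-in for a noisy beam
# measurement returned by the accelerator control system.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
n_dims = 5                                  # e.g. a handful of magnet strengths
bounds = np.array([[-1.0, 1.0]] * n_dims)   # normalised actuator range (assumed)


def measure_beam_quality(x: np.ndarray) -> float:
    """Hypothetical noisy objective to be maximised."""
    return -float(np.sum(x ** 2)) + rng.normal(scale=0.01)


def expected_improvement(mu, sigma, best):
    """Expected improvement acquisition for maximisation."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)


# Initial random probes of the machine.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, n_dims))
y = np.array([measure_beam_quality(x) for x in X])

# Small alpha accounts for measurement noise in the GP surrogate.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-4, normalize_y=True)

for step in range(30):
    gp.fit(X, y)
    # Cheap acquisition maximisation over random candidate settings.
    candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(1000, n_dims))
    mu, sigma = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.max()))]
    y_next = measure_beam_quality(x_next)
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)
    print(f"step {step:2d}: best objective so far = {y.max():.4f}")
```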
Subjects
MLE@TUHH
DDC Class
621.3: Electrical Engineering, Electronic Engineering
Funding(s)
InternLabs-0011
Funding Organisations
Helmholtz-Gemeinschaft Deutscher Forschungszentren
Publication version
publishedVersion
Name
s41598-024-66263-y.pdf
Type
Main Article
Size
2.29 MB
Format
Adobe PDF