Title: Reinforcement learning-trained optimisers and Bayesian optimisation for online particle accelerator tuning

Authors: Kaiser, Jan; Xu, Chenran; Eichler, Annika; Santamaria Garcia, Andrea; Stein, Oliver; Bründermann, Erik; Kuropka, Willi; Dinter, Hannes; Mayet, Frank; Vinatier, Thomas; Burkart, Florian; Schlarb, Holger

Published in: Scientific Reports 14 (1): 15733 (2024), Springer Nature
ISSN: 2045-2322
Dates: 2024-07-18; 2024-12-01
Type: Journal Article
Language: English
DOI: 10.1038/s41598-024-66263-y; 10.15480/882.13143
Handle: https://hdl.handle.net/11420/48370
Licence: https://creativecommons.org/licenses/by/4.0/
Subjects: MLE@TUHH; Technology::621: Applied Physics::621.3: Electrical Engineering, Electronic Engineering

Abstract: Online tuning of particle accelerators is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods like Bayesian optimisation (BO) hold great promise in improving plant performance and reducing tuning times. At the same time, reinforcement learning (RL) is a capable method of learning intelligent controllers, and recent work shows that RL can also be used to train domain-specialised optimisers in so-called reinforcement learning-trained optimisation (RLO). In parallel efforts, both algorithms have found successful adoption in particle accelerator tuning. Here we present a comparative case study, assessing the performance of both algorithms while providing a nuanced analysis of the merits and the practical challenges involved in deploying them to real-world facilities. Our results will help practitioners choose a suitable learning-based tuning algorithm for their tuning tasks, accelerating the adoption of autonomous tuning algorithms, ultimately improving the availability of particle accelerators and pushing their operational limits.