Learning to do or learning while doing: reinforcement learning and Bayesian optimisation for online continuous tuning
Citation Link: https://doi.org/10.15480/882.13636
Publication Type
Preprint
Date Issued
2023-06-06
Language
English
Author(s)
Online tuning of real-world plants is a complex optimisation problem that continues to require manual intervention by experienced human operators. Autonomous tuning is a rapidly expanding field of research, where learning-based methods such as Reinforcement Learning-trained Optimisation (RLO) and Bayesian optimisation (BO) hold great promise for achieving outstanding plant performance and reducing tuning times. Which algorithm to choose in different scenarios, however, remains an open question. Here we present a comparative study using a routine task in a real particle accelerator as an example, showing that RLO generally outperforms BO, but is not always the best choice. Based on the study's results, we provide a clear set of criteria to guide the choice of algorithm for a given tuning task. These can ease the adoption of learning-based autonomous tuning solutions in the operation of complex real-world plants, ultimately improving the availability of these facilities and pushing the limits of their operability, thereby enabling scientific and engineering advancements.
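For readers unfamiliar with the BO side of the comparison, the sketch below shows a generic Bayesian-optimisation tuning loop: a Gaussian-process surrogate is refit after every machine evaluation and an expected-improvement acquisition picks the next settings to try. This is only a minimal illustration, not the paper's implementation; the objective beam_objective, the three-knob search space, and the evaluation budget are hypothetical placeholders.

```python
# Minimal Bayesian-optimisation tuning loop (illustrative sketch only).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def beam_objective(x):
    """Hypothetical tuning objective (stand-in for a real beam measurement); lower is better."""
    return float(np.sum((x - 0.3) ** 2) + 0.01 * rng.normal())

bounds = np.array([[-1.0, 1.0]] * 3)                       # three normalised tuning knobs
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 3))   # initial random samples
y = np.array([beam_objective(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for step in range(30):                                     # fixed tuning budget
    gp.fit(X, y)
    # Candidate settings: cheap random search over the box constraints.
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(2048, 3))
    mu, sigma = gp.predict(cand, return_std=True)
    # Expected improvement (minimisation form).
    imp = y.min() - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    y_next = beam_objective(x_next)                         # one "machine" evaluation per step
    X = np.vstack([X, x_next])
    y = np.append(y, y_next)

print("best settings:", X[np.argmin(y)], "objective:", y.min())
```

The per-step cost of refitting the surrogate and maximising the acquisition is what the paper weighs against the up-front training cost of an RLO policy, which is cheap to query once trained.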
Subjects
cs.LG
cs.AI
physics.acc-ph
MLE@TUHH
DDC Class
530: Physics
File: 2306.03739v1.pdf (Main Article, Adobe PDF, 2.57 MB)