Title: On the turnpike to design of deep neural networks: explicit depth bounds
Authors: Faulwasser, Timm; Hempel, Arne-Jens; Streif, Stefan
Type: Journal Article
Journal: IFAC Journal of Systems and Control, Vol. 30, Art. No. 100290
Published: 2024-12-01 (made available 2024-11-15)
Publisher: Elsevier
ISSN: 2468-6018
DOI: 10.1016/j.ifacsc.2024.100290
Secondary DOI: 10.15480/882.13658
Handle: https://hdl.handle.net/11420/51829
Language: English
License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Keywords: Artificial neural networks; Deep learning; Dissipativity; Machine learning; Turnpike properties; MLE@TUHH
DDC classification: Computer Science, Information and General Works::003: Systems Theory; Computer Science, Information and General Works::006: Special computer methods; Natural Sciences and Mathematics::519: Applied Mathematics, Probabilities

Abstract: It is well known that the training of deep neural networks (DNNs) can be formalized in the language of optimal control. In this context, this paper leverages classical turnpike properties of optimal control problems to attempt a quantifiable answer to the question of how many layers a DNN should have. The underlying assumption is that the number of neurons per layer, i.e., the width of the DNN, is kept constant. Pursuing a different route than the classical analysis of the approximation properties of sigmoidal functions, we prove explicit bounds on the required depth of DNNs based on asymptotic reachability assumptions and a dissipativity-inducing choice of the regularization terms in the training problem. Numerical results obtained for the two-spiral classification task indicate that the proposed constructive estimates can provide non-conservative depth bounds.
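As a rough illustration of the setting the abstract describes, the following is a minimal, hypothetical sketch in PyTorch: a constant-width residual network is trained on a two-spiral classification task with a per-layer "stage cost" regularizer, after which the per-layer state movement is inspected for turnpike-like behavior. The data-set construction, the architecture, the specific regularizer, and all hyperparameters here are illustrative assumptions, not the paper's actual formulation or its depth bounds.

```python
# Hypothetical sketch (not the paper's method): constant-width residual
# network on a two-spiral task, trained with an illustrative per-layer
# regularizer. Turnpike-like behavior would show most layers barely
# moving the hidden state, with activity concentrated in a few layers.

import torch
import torch.nn as nn

def two_spirals(n=500, noise=0.1, seed=0):
    """Generate a two-spiral data set (assumed variant of the classic task)."""
    g = torch.Generator().manual_seed(seed)
    t = torch.sqrt(torch.rand(n, generator=g)) * 3 * torch.pi
    arm = torch.stack([t * torch.cos(t), t * torch.sin(t)], dim=1)
    x = torch.cat([arm, -arm]) + noise * torch.randn(2 * n, 2, generator=g)
    y = torch.cat([torch.zeros(n), torch.ones(n)]).long()
    return x, y

class ConstantWidthResNet(nn.Module):
    """Residual layers of fixed width; depth plays the role of time steps."""
    def __init__(self, width=16, depth=32):
        super().__init__()
        self.lift = nn.Linear(2, width)
        self.layers = nn.ModuleList(nn.Linear(width, width) for _ in range(depth))
        self.head = nn.Linear(width, 2)

    def forward(self, x):
        h = self.lift(x)
        states = [h]
        for layer in self.layers:
            h = h + torch.tanh(layer(h))  # forward-Euler / ResNet step
            states.append(h)
        return self.head(h), states

x, y = two_spirals()
model = ConstantWidthResNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
lam = 1e-3  # illustrative regularization weight

for epoch in range(200):
    opt.zero_grad()
    logits, states = model(x)
    # Stage cost summed over layers: penalizing per-layer state movement is
    # one simple dissipativity-motivated choice; the paper's precise
    # regularizer may differ.
    stage = sum(((b - a) ** 2).mean() for a, b in zip(states, states[1:]))
    loss = nn.functional.cross_entropy(logits, y) + lam * stage
    loss.backward()
    opt.step()

# Layers whose mean update norm is near zero sit "on the turnpike"; counting
# the remaining transient layers gives an empirical feel for required depth.
with torch.no_grad():
    _, states = model(x)
    for k, (a, b) in enumerate(zip(states, states[1:])):
        print(k, float((b - a).norm(dim=1).mean()))
```

Under these assumptions, one would expect the printed per-layer update norms to decay toward zero across the middle layers; that plateau is the turnpike signature that the paper's constructive estimates turn into explicit depth bounds.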