TUHH Open Research
Systematic construction of continuous-time neural networks for linear dynamical systems

Publication Type
Journal Issue
Date Issued
2025
Language
English
Author(s)
Datar, Chinmay  
Datar, Adwait  
Data Science Foundations E-21  
Dietrich, Felix  
Schilders, Wil  
TORE-URI
https://hdl.handle.net/11420/56917
Journal
SIAM journal on scientific computing  
Volume
47
Issue
4
Start Page
C820
End Page
C845
Citation
SIAM journal on scientific computing 47 (4): C820-C845 (2025)
Publisher DOI
https://doi.org/10.1137/24M1648946
Scopus ID
2-s2.0-105011600748
Publisher
SIAM
Abstract
Discovering a suitable neural network architecture for modeling complex dynamical systems poses a formidable challenge, often involving extensive trial and error and navigation through a high-dimensional hyperparameter space. In this paper, we discuss a systematic approach to constructing neural architectures for modeling a subclass of dynamical systems, namely, linear time-invariant (LTI) systems. We use a variant of continuous-time neural networks in which the output of each neuron evolves continuously as a solution of a first-order or second-order ordinary differential equation. Instead of deriving the network architecture and parameters from data, we propose a gradient-free algorithm to compute sparse architecture and network parameters directly from the given LTI system, leveraging its properties. We bring forth a novel neural architecture paradigm featuring horizontal hidden layers and provide insights into why employing conventional neural architectures with vertical hidden layers may not be favorable. We also provide an upper bound on the numerical errors of our neural networks. Finally, we demonstrate the high accuracy of our constructed networks on three numerical examples.
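The core idea the abstract describes can be illustrated with a minimal sketch. The code below is not the authors' algorithm; it is a hypothetical example of the general technique of building a network directly from a given LTI system rather than training it: diagonalizing the system matrix A decouples the dynamics into independent first-order ODEs, each of which can be read as one continuously evolving "neuron", with a linear readout recombining the modes. The system matrices and variable names here are assumptions chosen for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example system (not from the paper):
#   x'(t) = A x(t),  y(t) = C x(t)
# With a diagonalizable A = V diag(lam) V^{-1}, the modal coordinates
# z = V^{-1} x satisfy decoupled first-order ODEs z_i' = lam_i z_i.
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])     # stable system matrix
C = np.array([[1.0, 1.0]])      # output (readout) matrix
x0 = np.array([1.0, -1.0])      # initial state

lam, V = np.linalg.eig(A)       # modes of the system
z0 = np.linalg.solve(V, x0)     # initial state in modal coordinates

def y_network(t):
    """Each 'neuron' solves z_i' = lam_i z_i, so z_i(t) = exp(lam_i t) z_i(0);
    the readout layer recombines the neurons via C V."""
    z = np.exp(lam * t) * z0
    return (C @ (V @ z)).real

def y_reference(t):
    """Reference output computed with the matrix exponential of A t."""
    return C @ expm(A * t) @ x0

# The constructed 'network' reproduces the LTI output exactly (up to
# floating-point error), with no gradient-based training involved.
for t in (0.0, 0.5, 1.0):
    assert np.allclose(y_network(t), y_reference(t))
```

Because all parameters are computed in closed form from (A, C, x0), no back-propagation is needed, which is the sense in which such constructions are gradient-free.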
Subjects
neural architecture search | dynamical systems | efficient and sparse deep learning | continuous-time neural networks | scientific machine learning | back-propagation-free training
DDC Class
600: Technology