Datar, Chinmay; Datar, Adwait; Dietrich, Felix; Schilders, Wil (2025).
Systematic construction of continuous-time neural networks for linear dynamical systems.
SIAM Journal on Scientific Computing 47 (4): C820-C845.
Published: 2025-08-05. ISSN: 1095-7197. Publisher: SIAM. Language: en.
DOI: https://doi.org/10.1137/24M1648946
Handle: https://hdl.handle.net/11420/56917
Type: Journal Issue

Abstract: Discovering a suitable neural network architecture for modeling complex dynamical systems poses a formidable challenge, often involving extensive trial and error and navigation through a high-dimensional hyperparameter space. In this paper, we discuss a systematic approach to constructing neural architectures for modeling a subclass of dynamical systems, namely, linear time-invariant (LTI) systems. We use a variant of continuous-time neural networks in which the output of each neuron evolves continuously as a solution of a first-order or second-order ordinary differential equation. Instead of deriving the network architecture and parameters from data, we propose a gradient-free algorithm to compute sparse architecture and network parameters directly from the given LTI system, leveraging its properties. We bring forth a novel neural architecture paradigm featuring horizontal hidden layers and provide insights into why employing conventional neural architectures with vertical hidden layers may not be favorable. We also provide an upper bound on the numerical errors of our neural networks. Finally, we demonstrate the high accuracy of our constructed networks on three numerical examples.

Keywords: neural architecture search; dynamical systems; efficient and sparse deep learning; continuous-time neural networks; scientific machine learning; back-propagation-free training
Subject: Technology::600: Technology
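As a rough illustration of the continuous-time neuron described in the abstract (a unit whose output evolves as the solution of a first-order ordinary differential equation), the sketch below integrates one such neuron with forward Euler. This is not code from the paper: the function name, the time constant `tau`, and the specific ODE dy/dt = (-y + u(t)) / tau are illustrative assumptions for a minimal first-order unit.

```python
def simulate_neuron(u, tau=1.0, dt=0.01, steps=1000):
    """Forward-Euler integration of a first-order ODE neuron,
    dy/dt = (-y + u(t)) / tau.
    Illustrative model only; the paper's neurons may take a different form."""
    y = 0.0
    for k in range(steps):
        y += dt * (-y + u(k * dt)) / tau
    return y

# For a constant step input u(t) = 1, the output relaxes toward the
# steady state y = 1 over the simulated horizon of 10 time constants.
y_final = simulate_neuron(lambda t: 1.0)
```

In this picture, "constructing the network directly from a given LTI system" would amount to choosing the interconnection weights and time constants of such units from the system matrices rather than fitting them to data.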