TUHH Open Research
Parareal with a physics-informed neural network as coarse propagator

Citation Link: https://doi.org/10.15480/882.8732
Publication Type
Conference Paper
Date Issued
2023
Language
English
Author(s)
Ibrahim, Abdul Qadir
Mathematik E-10
Götschel, Sebastian
Mathematik E-10
Ruprecht, Daniel
Mathematik E-10
TORE-DOI
10.15480/882.8732
TORE-URI
https://hdl.handle.net/11420/43729
First published in
Lecture Notes in Computer Science
Number in series
14100
Start Page
649
End Page
663
Citation
29th International Conference on Parallel and Distributed Computing (Euro-Par 2023)
Contribution to Conference
29th International Conference on Parallel and Distributed Computing, Euro-Par 2023  
Publisher DOI
10.1007/978-3-031-39698-4_44
Scopus ID
2-s2.0-85171532766
Publisher
Springer Nature Switzerland
ISBN
978-3-031-39697-7
Peer Reviewed
true
Abstract
Parallel-in-time algorithms provide an additional layer of concurrency for the numerical integration of models based on time-dependent differential equations. Methods like Parareal, which parallelize across multiple time steps, rely on a computationally cheap and coarse integrator to propagate information forward in time, while a parallelizable expensive fine propagator provides accuracy. Typically, the coarse method is a numerical integrator using lower resolution, reduced order or a simplified model. Our paper proposes to use a physics-informed neural network (PINN) instead. We demonstrate for the Black-Scholes equation, a partial differential equation from computational finance, that Parareal with a PINN coarse propagator provides better speedup than a numerical coarse propagator. Training and evaluating a neural network are both tasks whose computing patterns are well suited for GPUs. By contrast, mesh-based algorithms with their low computational intensity struggle to perform well. We show that moving the coarse propagator PINN to a GPU while running the numerical fine propagator on the CPU further improves Parareal's single-node performance. This suggests that integrating machine learning techniques into parallel-in-time integration methods and exploiting their differences in computing patterns might offer a way to better utilize heterogeneous architectures.
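
For reference, the Parareal iteration the abstract describes is the predictor-corrector update U(n+1, k+1) = G(U(n, k+1)) + F(U(n, k)) - G(U(n, k)), where G is the cheap coarse propagator and F the expensive fine one. The following is a minimal Python sketch of that update for a toy scalar ODE du/dt = -u; the propagators `fine` and `coarse` here are hypothetical Euler-based stand-ins, not the paper's actual setup (there, the coarse propagator is a PINN for the Black-Scholes equation, evaluated on a GPU).

  import numpy as np

  def fine(u, t0, t1, steps=100):
      # Fine propagator F: many small explicit-Euler steps for du/dt = -u.
      dt = (t1 - t0) / steps
      for _ in range(steps):
          u = u + dt * (-u)
      return u

  def coarse(u, t0, t1):
      # Coarse propagator G: a single Euler step. In the paper, this role
      # is played by a PINN; the Euler step is only a stand-in.
      return u + (t1 - t0) * (-u)

  def parareal(u0, t_grid, n_iter=5):
      # Parareal correction: U[n+1]^{k+1} = G(U[n]^{k+1}) + F(U[n]^k) - G(U[n]^k).
      # The F evaluations over the slices are independent and could run in
      # parallel; they are written serially here for clarity.
      N = len(t_grid) - 1
      U = np.empty(N + 1)
      U[0] = u0
      # Initial guess from a serial sweep of the coarse propagator alone.
      for n in range(N):
          U[n + 1] = coarse(U[n], t_grid[n], t_grid[n + 1])
      for _ in range(n_iter):
          F_vals = [fine(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
          G_old = [coarse(U[n], t_grid[n], t_grid[n + 1]) for n in range(N)]
          U_new = np.empty_like(U)
          U_new[0] = u0
          # Serial coarse sweep with the Parareal correction term.
          for n in range(N):
              U_new[n + 1] = (coarse(U_new[n], t_grid[n], t_grid[n + 1])
                              + F_vals[n] - G_old[n])
          U = U_new
      return U

  t = np.linspace(0.0, 1.0, 11)            # 10 time slices
  print(parareal(1.0, t)[-1], np.exp(-1))  # iterate converges toward exp(-1)

The key performance point of the paper is that F (mesh-based, low computational intensity) and G (a PINN, well suited to GPU execution) can be placed on different parts of a heterogeneous node.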
Subjects
GPUs
heterogeneous architectures
Machine learning
parallel-in-time integration
Parareal
PINN
MLE@TUHH
DDC Class
004: Computer Sciences
510: Mathematics
530: Physics
Funding(s)
TIME parallelisation: for eXascale computing and beyond  
Performance improvement of the ICON-O ocean model on heterogeneous exascale supercomputers using machine learning methods
License
https://creativecommons.org/licenses/by/4.0/
Name
978-3-031-39698-4_44.pdf
Type
Main Article
Size
841.42 KB
Format
Adobe PDF