TUHH Open Research
Error-free transformations of matrix multiplication by using fast routines of matrix multiplication and its applications

Publication Type
Journal Article
Date Issued
2011-06-14
Language
English
Author(s)
Ozaki, Katsuhisa  
Ogita, Takeshi  
Oishi, Shin’ichi  
Rump, Siegfried M.
Institute
Zuverlässiges Rechnen E-19  
TORE-URI
http://hdl.handle.net/11420/7914
Journal
Numerical Algorithms  
Volume
59
Issue
1
Start Page
95
End Page
118
Citation
Numerical Algorithms 59 (1): 95-118 (2012)
Publisher DOI
10.1007/s11075-011-9478-1
Scopus ID
2-s2.0-82255175133
Publisher
Baltzer
This paper is concerned with accurate matrix multiplication in floating-point arithmetic. Recently, an accurate summation algorithm was developed by Rump et al. (SIAM J Sci Comput 31(1):189-224, 2008). The key technique of their method is a fast error-free splitting of floating-point numbers. Using this technique, we first develop an error-free transformation of a product of two floating-point matrices into a sum of floating-point matrices. Next, we partially apply this error-free transformation to develop an algorithm that aims to output an accurate approximation of the matrix product. In addition, an a priori error estimate is given. A characteristic of the proposed method is that, in terms of both computation and memory consumption, the dominant part of the algorithm consists of ordinary floating-point matrix multiplications. Since the matrix multiplication routine is highly optimized in BLAS, our algorithms achieve good computational performance. Although they require a significant amount of working memory, our algorithms are significantly faster than 'gemmx' in XBLAS whenever all matrix dimensions are large enough for 'gemm' to reach nearly peak performance. Numerical examples illustrate the efficiency of the proposed method.
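The splitting idea behind such an error-free transformation can be illustrated with a short sketch. The Python/NumPy fragment below is not the authors' algorithm but a one-level variant of the underlying technique: each matrix is split into a "high" part, whose entries keep only enough leading bits that the product of the high parts is computed by one ordinary floating-point matrix multiplication without rounding error, plus an exact remainder. The function names, the choice of the shift constant, and the single splitting level are illustrative assumptions; the paper applies the splitting repeatedly so that the product becomes an exact sum of floating-point matrices.

import numpy as np

def split(A, inner_dim, axis):
    # Split A exactly into A_hi + A_lo. Entries of A_hi are aligned to a
    # common grid per row (axis=1) or per column (axis=0) and keep roughly
    # (53 - log2(inner_dim)) / 2 leading bits, so that dot products of
    # length inner_dim over the high parts are exact in double precision.
    # (Illustrative choice of the shift; not necessarily the paper's constant.)
    shift = np.ceil((53 + np.log2(inner_dim)) / 2.0)
    amax = np.max(np.abs(A), axis=axis, keepdims=True)
    amax = np.where(amax == 0.0, 1.0, amax)          # avoid log2(0)
    sigma = 2.0 ** (np.ceil(np.log2(amax)) + shift)
    A_hi = (A + sigma) - sigma                       # extract leading bits
    A_lo = A - A_hi                                  # exact remainder
    return A_hi, A_lo

def accurate_matmul(A, B):
    # One level of splitting: A_hi @ B_hi is error-free, and the remaining
    # terms carry the lower-order contributions of the exact product
    # A @ B = A_hi @ B_hi + A_hi @ B_lo + A_lo @ B.
    n = A.shape[1]
    A_hi, A_lo = split(A, n, axis=1)   # rows of A share one splitting constant
    B_hi, B_lo = split(B, n, axis=0)   # columns of B share one splitting constant
    return (A_hi @ B_hi) + (A_hi @ B_lo) + (A_lo @ B)

Each term in the last line is an ordinary matrix multiplication, which reflects the paper's point that the dominant cost consists of calls to an optimized 'gemm'; higher accuracy, up to an error-free representation of the product, is obtained by splitting further rather than stopping after one level.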
Subjects
Accurate computations
Error-free transformation
Floating-point arithmetic
Matrix multiplication
DDC Class
004: Computer Science
510: Mathematics
More Funding Information
Japan Society for the Promotion of Science 23700023.