Publisher DOI: 10.1016/j.micpro.2018.04.004
Title: Throughput optimizations for FPGA-based deep neural network inference
Language: English
Authors: Posewsky, Thorbjörn 
Ziener, Daniel 
Issue Date: Jul-2018
Source: Microprocessors and Microsystems (60): 151-161 (2018-07)
Journal or Series Name: Microprocessors and Microsystems
Abstract (English): Deep neural networks are an extremely successful and widely used technique for various pattern recognition and machine learning tasks. Due to power and resource constraints, these computationally intensive networks are difficult to implement in embedded systems, yet the number of embedded applications that could benefit from them is rising rapidly. In this paper, we propose novel architectures for the inference of previously learned, arbitrary deep neural networks on FPGA-based SoCs that are able to overcome these limitations. Our key contributions are the reuse of previously transferred weight matrices across multiple input samples, which we refer to as batch processing, and the use of compressed weight matrices, also known as pruning. An extensive evaluation of these optimizations is presented. Both techniques significantly reduce data transfers and speed up network inference by one order of magnitude. At the same time, we surpass the data throughput of fully featured x86-based systems while using only a fraction of their energy consumption.
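For readers unfamiliar with the two optimizations named in the abstract, the following NumPy sketch illustrates the underlying idea in software. The paper itself targets an FPGA-based accelerator, so the layer dimensions, batch size, and sparsity level below are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative fully-connected layer; sizes and batch are assumptions, not paper values.
IN_DIM, OUT_DIM, BATCH = 1024, 512, 8

weights = rng.standard_normal((OUT_DIM, IN_DIM)).astype(np.float32)
inputs = rng.standard_normal((BATCH, IN_DIM)).astype(np.float32)

# Per-sample processing: conceptually, the weight matrix would have to be
# streamed from external memory once for every input sample.
out_per_sample = np.stack([weights @ x for x in inputs])

# Batch processing: the same weight matrix is fetched once and reused across
# all samples of the batch, amortizing the transfer cost.
out_batched = inputs @ weights.T

assert np.allclose(out_per_sample, out_batched, atol=1e-3)

# Pruning: small weights are dropped and the matrix is kept in a compressed
# (sparse) form, so far fewer values need to be transferred and multiplied.
threshold = np.quantile(np.abs(weights), 0.9)   # keep roughly 10% of the weights
mask = np.abs(weights) >= threshold
values = weights[mask]                          # surviving weights only
rows, cols = np.nonzero(mask)                   # their coordinates

# Sparse matrix-vector product for one sample using the compressed weights.
x = inputs[0]
out_sparse = np.zeros(OUT_DIM, dtype=np.float32)
np.add.at(out_sparse, rows, values * x[cols])

assert np.allclose(out_sparse, (weights * mask) @ x, atol=1e-3)
```

On the accelerator described in the paper, these ideas roughly correspond to streaming each weight matrix from external memory once per batch instead of once per sample, and to transferring only the non-zero weights of a pruned matrix together with their indices.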
URI: http://hdl.handle.net/11420/2986
ISSN: 0141-9331
Institute: Eingebettete Systeme E-13 
Type: (scientific) article
Appears in Collections: Publications without fulltext

