Files in this item

File: SP20-ECE499-Thesis-Ma, Yuan.pdf (2 MB), restricted to U of Illinois
Description: (no description provided)
Format: PDF (application/pdf)
Description

Title: Accelerating convolution in deep neural networks on a CAPI-based FPGA
Author(s): Ma, Yuan
Contributor(s): Kindratenko, Volodymyr
Subject(s): FPGA; Deep Learning Acceleration; HLS; Hardware Design; CAPI-SNAP; Heterogeneous System
Abstract: Convolutional neural networks (CNNs) have emerged as a crucial part of many applications, ranging from self-driving cars to voice-activated assistants. Numerous cloud computing providers, such as Amazon (AWS), IBM (SoftLayer), and Microsoft (Azure), use heterogeneous computing systems to offload CNN computations from the CPU to dedicated hardware, since such hardware provides significant improvements in both computing throughput and energy savings. In this senior thesis, the author presents a weight-stationary systolic convolution kernel design for a field-programmable gate array (FPGA) and its implementation targeting the Nallatech 250S+ card, which is enabled for the coherent accelerator processor interface (CAPI). CAPI is an interface for heterogeneous systems that allows accelerators to access I/O devices as CPU peers. The systolic array architecture has shown advantages for accelerators in tasks that involve vector dot-product calculations, such as matrix multiplication and convolution. The proposed hardware is synthesized with a high-level synthesis (HLS) tool targeting the Kintex UltraScale+ XCKU15P FPGA and provides high-throughput computation (4.6× that of the CPU) for real-time applications. In the future, the author plans to extend this kernel to a complete CNN by supporting CPU-FPGA task partitioning using the coherent memory space enabled by CAPI.
Issue Date: 2020-05
Genre: Other
Type: Text
Language: English
URI: http://hdl.handle.net/2142/107265
Date Available in IDEALS: 2020-06-12
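The weight-stationary systolic dataflow described in the abstract can be illustrated with a small cycle-level model. This is a hypothetical Python sketch for illustration only, not the thesis's HLS implementation: each processing element (PE) holds one fixed filter weight, the current input sample is broadcast to every PE each cycle, and partial sums shift one PE per cycle toward the array's output.

```python
def systolic_conv1d(x, w):
    """Cycle-level model of a weight-stationary 1-D systolic line.

    PE k permanently holds weight w[k] (weight-stationary). Every cycle,
    one input sample is broadcast to all PEs; each PE adds its weighted
    contribution to the partial sum arriving from its left neighbour and
    latches the result. Once the pipeline is full (after K-1 cycles),
    the tail PE emits one valid-convolution output per cycle.
    """
    K = len(w)
    s_reg = [0] * K                 # partial-sum register inside each PE
    outputs = []
    for t, x_in in enumerate(x):    # one input sample per clock cycle
        new_s = [0] * K
        for k in range(K):
            s_prev = s_reg[k - 1] if k > 0 else 0   # sum from left neighbour
            new_s[k] = s_prev + w[k] * x_in         # multiply-accumulate
        s_reg = new_s               # all registers update simultaneously
        if t >= K - 1:              # pipeline filled: tail PE output is valid
            outputs.append(s_reg[K - 1])
    return outputs
```

A quick check against a direct sliding dot product (the computation the abstract says systolic arrays accelerate) confirms the model: for `x = [1, 2, 3, 4]` and `w = [1, 0, 2]`, both approaches yield `[7, 10]`. In hardware, the benefit of this dataflow is that weights are loaded once and all multiply-accumulate units work in parallel every cycle.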

