Files in this item

File: DRYDEN-DISSERTATION-2019.pdf (2 MB), Restricted Access
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title:Large-scale training of deep neural networks
Author(s):Dryden, Nikoli Joseph
Director of Research:Snir, Marc
Doctoral Committee Chair(s):Snir, Marc
Doctoral Committee Member(s):Gropp, William; Hwu, Wen-mei; Van Essen, Brian; Schwing, Alexander
Department / Program:Computer Science
Discipline:Computer Science
Degree Granting Institution:University of Illinois at Urbana-Champaign
Degree:Ph.D.
Genre:Dissertation
Subject(s):High-performance computing; deep learning; convolutional neural network; parallel computing; machine learning
Abstract:Accelerating and scaling the training of deep neural networks (DNNs) is critical to keep up with growing datasets, reduce training times, and enable training on memory-constrained problems where parallelism is necessary. In this thesis, I present a set of techniques that can leverage large high-performance computing systems for fast training of DNNs. I first introduce a suite of algorithms to exploit additional parallelism in convolutional layers when training, expanding beyond the standard sample-wise data-parallel approach to include spatial parallelism and channel and filter parallelism. Next, I present optimizations to communication frameworks to reduce communication overheads at large scales. Finally, I discuss communication quantization, which can directly reduce communication volumes. In concert, these methods allow rapid training and enable training on problems that were previously infeasible.
Issue Date:2019-07-09
Type:Text
URI:http://hdl.handle.net/2142/105916
Rights Information:Copyright 2019 Nikoli Joseph Dryden
Date Available in IDEALS:2019-11-26
Date Deposited:2019-08
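
For orientation, the communication quantization mentioned in the abstract refers to representing gradients with fewer bits before workers exchange them, which reduces communication volume at large scale. The sketch below is a minimal illustration of that general idea using a simple uniform quantizer and a simulated averaging step; it is not the specific quantization scheme developed in the dissertation, and the function names, bit width, and simulated-worker setup are illustrative assumptions.

import numpy as np

def quantize(grad, bits=8):
    # Uniformly quantize a gradient tensor to `bits` bits plus (offset, scale).
    levels = 2 ** bits - 1
    lo, hi = float(grad.min()), float(grad.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    q = np.round((grad - lo) / scale).astype(np.uint8)  # compressed payload
    return q, lo, scale

def dequantize(q, lo, scale):
    # Reconstruct an approximate float gradient from the compressed payload.
    return q.astype(np.float32) * scale + lo

# Simulated exchange: each "worker" sends a quantized gradient, and the
# receiver dequantizes and averages them (standing in for an allreduce).
rng = np.random.default_rng(0)
worker_grads = [rng.normal(size=1000).astype(np.float32) for _ in range(4)]
decoded = [dequantize(*quantize(g)) for g in worker_grads]
avg = np.mean(decoded, axis=0)
exact = np.mean(worker_grads, axis=0)
print("max quantization error:", np.abs(avg - exact).max())

The trade-off illustrated here is the one the abstract alludes to: an 8-bit payload is roughly a quarter the size of 32-bit floats, at the cost of a small, bounded reconstruction error per exchange.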

