Files in this item

COURTNEY-DISSERTATION-2020.pdf (4 MB, application/pdf)
Title: Learning from videos with deep convolutional LSTM networks
Author(s): Courtney, Logan
Director of Research: Sreenivas, Ramavarapu
Doctoral Committee Chair(s): Sreenivas, Ramavarapu
Doctoral Committee Member(s): Sirignano, Justin; Hasegawa-Johnson, Mark; Beck, Carolyn
Department / Program: Industrial & Enterprise Sys Eng
Discipline: Systems & Entrepreneurial Engr
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Convolutional LSTM
Convolutional neural network
Recurrent neural network
Deep learning
Computer vision
Receptive field
Artificial Intelligence
Machine Learning
Abstract: Many methods for learning from video sequences involve temporally processing 2D CNN features from the individual frames or directly utilizing 3D convolutions within high-performing 2D CNN architectures. The focus typically remains on how to incorporate temporal processing within an already stable spatial architecture. This research explores the use of convolutional LSTMs to simultaneously learn spatial and temporal information in videos. A deep network of convolutional LSTMs allows the model to access the entire range of temporal information at all spatial scales of the data. This work first constructs an MNIST-based video dataset with parameters controlling relevant facets of common video-related tasks: classification, ordering, and speed estimation. Models trained on this dataset are shown to differ in key ways depending on the task and on their use of 2D convolutions, 3D convolutions, or convolutional LSTMs. An empirical analysis indicates a complex, interdependent relationship between the spatial and temporal dimensions, with design choices having a large impact on a network's ability to learn the appropriate spatiotemporal features. In addition, experiments involving convolutional LSTMs for action recognition and lipreading demonstrate that the model is capable of selectively choosing which spatiotemporal scales are most relevant for a particular dataset. The proposed deep architecture also holds promise in other applications where spatiotemporal features play a vital role, without the design of the network having to be specifically tailored to the particular spatiotemporal features present in the problem. Our model has performance comparable with the current state of the art, achieving 83.4% on the Lip Reading in the Wild (LRW) dataset.
Additional experiments indicate convolutional LSTMs may be particularly data hungry, given the large performance increases when fine-tuning on LRW after pretraining on larger datasets such as LRS2 (85.2%) and LRS3-TED (87.1%). However, a sensitivity analysis providing insight into the relevant spatiotemporal features allows certain convolutional LSTM layers to be replaced with 2D convolutions, decreasing computational cost without performance degradation and indicating their usefulness in accelerating the architecture design process when approaching new problems.
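The dissertation itself is not reproduced in this record; as a rough illustration of the building block the abstract describes, the sketch below implements a single convolutional LSTM cell in plain NumPy. The four LSTM gates are computed by a convolution over the concatenated input frame and hidden state, so the cell and hidden states retain a spatial layout. All shapes, names, and initializations here are illustrative assumptions, not the thesis's actual architecture.

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 'same'-padded 2D cross-correlation.
    x: (C_in, H, W); w: (C_out, C_in, k, k) with odd k."""
    c_out, c_in, k, _ = w.shape
    _, H, W = x.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(H):
            for j in range(W):
                out[o, i, j] = np.sum(xp[:, i:i + k, j:j + k] * w[o])
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ConvLSTMCell:
    """Single convolutional LSTM cell (hypothetical layer sizes).
    Gates i, f, o, g come from one stacked convolution over [x; h]."""
    def __init__(self, in_ch, hid_ch, k=3, seed=0):
        rng = np.random.default_rng(seed)
        # One kernel stack producing all four gates at once.
        self.w = rng.standard_normal((4 * hid_ch, in_ch + hid_ch, k, k)) * 0.1
        self.hid_ch = hid_ch

    def step(self, x, h, c):
        z = conv2d_same(np.concatenate([x, h], axis=0), self.w)
        i, f, o, g = np.split(z, 4, axis=0)
        c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)  # cell update
        h_new = sigmoid(o) * np.tanh(c_new)               # spatial hidden state
        return h_new, c_new

# Run a short sequence of 8x8 single-channel frames through the cell.
cell = ConvLSTMCell(in_ch=1, hid_ch=4)
h = np.zeros((4, 8, 8))
c = np.zeros((4, 8, 8))
for t in range(5):
    frame = np.random.default_rng(t).standard_normal((1, 8, 8))
    h, c = cell.step(frame, h, c)
print(h.shape)  # (4, 8, 8): the hidden state keeps the spatial grid
```

Stacking such cells, as the abstract suggests, would let each spatial scale of a deep network carry its own temporal memory rather than confining recurrence to a single late stage.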
Issue Date: 2020-11-23
Rights Information: Copyright 2020 Logan Courtney
Date Available in IDEALS: 2021-03-05
Date Deposited: 2020-12
