Files in this item

Ghanem_Bernard.pdf (20 MB, application/pdf)

Title: Dynamic textures: models and applications
Author(s): Ghanem, Bernard S.
Director of Research: Ahuja, Narendra
Doctoral Committee Chair(s): Ahuja, Narendra
Doctoral Committee Member(s): Hart, John C.; Forsyth, David A.; Huang, Thomas S.; Ma, Yi
Department / Program: Electrical & Computer Eng
Discipline: Electrical & Computer Engr
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): dynamic textures; image quality measure; Graph Cut; max margin distance learning; sparse coding; Iterated Conditional Modes
Abstract: Temporal or dynamic textures are video sequences that are spatially repetitive and temporally stationary; they are the temporal analogs of the more common spatial textures. They form a family of visual phenomena in which the texture elements, i.e. the spatially repeating elements, undergo stochastic motions that are statistically similar. Dynamic textures appear in a vast spectrum of videos, ranging from sequences of moving water, foliage, smoke, and clouds to sequences of swarms of birds, robots, and even human crowds. The applications concerning such sequences are significant and numerous, including surveillance (e.g. monitoring traffic or crowds), detection of the onset of emergencies (e.g. the outbreak of fires), and foreground and background separation (e.g. transferring a dynamic texture from one environment to another, or simply removing it). The study of dynamic textures poses numerous challenges, especially for traditional motion models, which fail to capture their stochastic nature. Despite their importance, dynamic textures have only recently attracted the attention of the computer vision community. Most recent work on dynamic texture modeling represents the frames of a sequence as the responses of a linear dynamical system (LDS) to noise. Despite its merits, this model has limitations because it models temporal variations in individual pixel intensities; such modeling does not take advantage of global motion coherence. In this dissertation, we highlight the three main dimensions along which dynamic textures vary: the nature of the texture elements that describe the dynamic texture spatially, their organization and layering, and their dynamics. We believe that no single spatiotemporal model can handle all dynamic textures sampled from this three-dimensional space. Instead, we propose three models, each of which applies to a certain "range" of dynamic textures in this space.
These three models by no means cover the whole space of dynamic textures; however, they provide an essential framework and stepping-stone for future models. They can be used in various applications, including dynamic texture synthesis, compression, recognition, and foreground and background layer separation. When possible, we compare these models to others in the literature in terms of recognition accuracy or computational efficiency. Developing these models and applying them to different problems uncovered several new and interesting problems that are also important to the broader fields of computer vision and image processing. We show how these new problems are generalized and efficiently addressed.
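The LDS formulation that the abstract contrasts against can be sketched briefly. In the standard dynamic-texture LDS (as popularized in the literature the abstract refers to), each vectorized frame y_t is the output of a hidden linear state process: x_{t+1} = A x_t + v_t and y_t = C x_t + w_t. The minimal NumPy sketch below, with illustrative function and variable names (not the dissertation's code), fits A and C to a frame matrix via an SVD-based least-squares procedure and rolls the system forward to synthesize frames:

```python
import numpy as np

def fit_lds(frames, n_states=5):
    """Fit a linear dynamical system to a (n_pixels, n_frames) matrix
    whose columns are vectorized frames. Illustrative sketch only."""
    U, s, Vt = np.linalg.svd(frames, full_matrices=False)
    C = U[:, :n_states]                        # appearance basis (orthonormal columns)
    X = np.diag(s[:n_states]) @ Vt[:n_states]  # hidden states, one column per frame
    # Least-squares estimate of the state-transition matrix A from X[:, 1:] ≈ A X[:, :-1]
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return A, C, X

def synthesize(A, C, x0, n_frames, noise=0.0, rng=None):
    """Roll the state forward under x_{t+1} = A x_t (+ optional noise)
    and map each state back to pixel space via C."""
    rng = rng or np.random.default_rng(0)
    x, out = x0, []
    for _ in range(n_frames):
        out.append(C @ x)
        x = A @ x + noise * rng.standard_normal(x.shape)
    return np.stack(out, axis=1)

# Toy usage: a 100-pixel "video" of 30 frames with low-rank temporal structure.
rng = np.random.default_rng(42)
basis = rng.standard_normal((100, 3))
states = np.cumsum(rng.standard_normal((3, 30)), axis=1)
video = basis @ states
A, C, X = fit_lds(video, n_states=3)
recon = synthesize(A, C, X[:, 0], 30)
print(recon.shape)  # one column per synthesized frame
```

Because A and C act on whole state vectors estimated from pixel intensities, the model captures per-pixel temporal statistics but not spatially coherent motion of texture elements, which is exactly the limitation the dissertation's three models are designed to address.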
Issue Date: 2010-05-19
Rights Information: Copyright 2010 Bernard S. Ghanem
Date Available in IDEALS: 2010-05-19
Date Deposited: May 2010