Files in this item



Zhaowen_Wang.pdf (15 MB, application/pdf)


Title: Learning sparse representation for image signals
Author(s): Wang, Zhaowen
Director of Research: Huang, Thomas S.
Doctoral Committee Chair(s): Huang, Thomas S.
Doctoral Committee Member(s): Hasegawa-Johnson, Mark A.; Liang, Zhi-Pei; Nasrabadi, Nasser M.
Department / Program: Electrical & Computer Eng
Discipline: Electrical & Computer Engr
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): image representation
sparse coding
dictionary learning
image classification
opportunistic sensing
neural network
Partially Observable Markov Decision Process (POMDP)
Abstract: Natural images have the intrinsic property that they can be sparsely represented as a linear combination of a very small number of atomic signals from a complete basis or an overcomplete dictionary. This sparse representation prior has been successfully exploited in a variety of image processing applications, ranging from low-level recovery to high-level semantic inference. A good sparse representation is expected to have high fidelity to the observed image content and, at the same time, to reveal the underlying structure and semantic information. In this dissertation, we address the problem of how to learn such a representation or dictionary from training images, particularly for the tasks of super-resolution, classification, and opportunistic sensing.

Image super-resolution is an ill-posed problem in which we want to recover a high-resolution image from the corresponding low-resolution image. We formulate a coupled dictionary learning algorithm that explicitly learns the transform between the high- and low-resolution feature spaces, such that the sparse representation inferred from a low-resolution patch can faithfully reconstruct its high-resolution version. The resulting bilevel optimization problem is solved using stochastic gradient descent, with the gradient of the sparse code obtained by implicit differentiation. A feed-forward deep neural network motivated by this sparse coding model is designed to further improve efficiency and accuracy.

Sparse Representation-based Classification (SRC) has been used in many recognition tasks, with the dictionary consisting of training data from all classes. We design a more compact and discriminative dictionary for SRC using the "pulling" and "pushing" actions inspired by Learning Vector Quantization (LVQ). The learned dictionary is applied to hyperspectral image classification, with additional spatial neighborhood information incorporated through a probabilistic formulation.

To better understand the rationale of SRC, we further develop a margin-based perspective on the classifier. The decision boundary and classification margin of SRC are analyzed in the local regions where the support of the sparse code is stable. Based on the derived margin, we learn a discriminative dictionary with maximized margin between classes, so that SRC can generalize better.

Opportunistic sensing deals with actively recognizing an image object under restricted sensing resources. Just as in compressive sensing, we show that dynamically optimized sensing operations (including, but not limited to, linear projections) can yield better classification results for signals with sparse structure. We develop a greedy sensing strategy using class entropy criteria, as well as a long-term policy learning method based on the Partially Observable Markov Decision Process (POMDP), customized for heterogeneous resource constraints and discriminative classifiers.

The sensing, recovery, and recognition tasks studied in this dissertation exemplify a closed loop of general image processing, and we demonstrate that, in each processing step, a dictionary or sensing operation adapted to the signals' sparse characteristics can lead to remarkably improved performance.
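The sparse representation prior underlying the abstract, a signal x approximated as D a with only a few nonzero coefficients a, can be illustrated with a minimal iterative shrinkage-thresholding (ISTA) solver for the l1-regularized least-squares problem. This is a generic sketch, not the dissertation's coupled or discriminative dictionary learning method: the dictionary here is a random Gaussian matrix for illustration, and the parameter choices (lam, n_iter) are arbitrary.

```python
import numpy as np

def ista(D, x, lam=0.01, n_iter=500):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by iterative shrinkage-thresholding."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the smooth term's gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)             # gradient of the data-fidelity term
        z = a - grad / L                     # plain gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding (prox of l1)
    return a

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)               # overcomplete dictionary with unit-norm atoms
a_true = np.zeros(256)
a_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
x = D @ a_true                               # signal that is exactly 5-sparse in D
a_hat = ista(D, x)
print(np.count_nonzero(a_hat))               # only a small fraction of the 256 atoms stay active
```

In dictionary learning, an outer loop would alternate this sparse coding step with an update of D itself; here D is fixed to keep the sketch short.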
Issue Date: 2015-01-21
Rights Information: Copyright 2014 Zhaowen Wang
Date Available in IDEALS: 2015-01-21
Date Deposited: 2014-12
