Files in this item

File: RAVISHANKAR_SAIPRASAD.pdf (15MB), format: application/pdf

Description

Title: Adaptive sparse representations and their applications
Author(s): Ravishankar, Saiprasad
Director of Research: Bresler, Yoram
Doctoral Committee Chair(s): Bresler, Yoram
Doctoral Committee Member(s): Do, Minh N.; Sutton, Brad; Milenkovic, Olgica; Fessler, Jeffrey A.
Department / Program: Electrical & Computer Engineering
Discipline: Electrical & Computer Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Inverse problems
Computer vision
Classification
Structured overcomplete transform learning
Union of transforms
Overcomplete transform learning
Structured transforms
Convex formulation
Real-time applications
Big data
Online learning
Adaptive sampling
Image reconstruction
Block coordinate descent
Blind compressed sensing
Convergence guarantees
Efficient updates
Closed-form solutions
Machine learning
Nonconvex optimization
Alternating minimization
Doubly sparse transform learning
Square transform learning
Adaptive sparse models
Denoising
Dictionary learning
Magnetic resonance imaging
Compressed sensing
Sparse representations
Sparsifying transform learning
Abstract: The sparsity of signals and images in a certain transform domain or dictionary has been exploited in many applications in signal processing, image processing, and medical imaging. Analytical sparsifying transforms such as wavelets and the DCT have been widely used in compression standards. Recently, the data-driven learning of synthesis sparsifying dictionaries has become popular, especially in applications such as denoising, inpainting, and compressed sensing. While there has been extensive research on learning synthesis dictionaries and some recent work on learning analysis dictionaries, the idea of learning sparsifying transforms has received no attention. In the first part of this thesis, we study the sparsifying transform model and its relationship to prior linear sparse models. We then propose novel problem formulations for learning square sparsifying transforms from data. The proposed algorithms for transform learning alternate between a sparse coding step and a transform update step, and are highly efficient. Specifically, as opposed to sparse coding in the synthesis or noisy analysis models, which is NP-hard, the sparse coding step in transform learning can be performed exactly and cheaply by zeroing out all but a certain number of the largest-magnitude transform coefficients. The transform update step is performed using iterative conjugate gradients. The proposed algorithms give rise to well-conditioned square sparsifying transforms in practice. We show the superiority of our approach over analytical sparsifying transforms such as the DCT for signal and image representation. We also show promising performance in signal denoising using the learned sparsifying transforms. The proposed approach is much faster than previous approaches involving learned synthesis or analysis dictionaries. Next, we explore a specific structure for learned sparsifying transforms that enables efficient implementations.
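The exact and cheap sparse coding step described above (keep only the largest-magnitude transform coefficients) can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the thesis; the function name and interface are hypothetical:

```python
import numpy as np

def transform_sparse_code(W, Y, s):
    """Exact sparse coding in the transform model: for each column y of Y,
    the optimal s-sparse code is the transformed signal W @ y with all but
    the s largest-magnitude entries zeroed out."""
    Z = W @ Y                                     # transform-domain coefficients
    X = np.zeros_like(Z)
    for j in range(Z.shape[1]):
        keep = np.argsort(np.abs(Z[:, j]))[-s:]   # indices of the s largest magnitudes
        X[keep, j] = Z[keep, j]
    return X
```

Unlike synthesis-model sparse coding, no iterative pursuit is needed: one matrix multiply and one selection per signal gives the exact minimizer.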
Following up on the idea of learning square sparsifying transforms, we propose novel problem formulations for learning doubly sparse transforms for signals or image patches. These transforms are a product of a fixed, fast analytic transform such as the DCT, and an adaptive matrix constrained to be sparse. Such transforms can be learned, stored, and implemented efficiently. We show the superior promise of our learned doubly sparse transforms as compared to analytical sparsifying transforms such as the DCT or wavelets for image representation. Adapted doubly sparse transforms also generalize better than the ‘unstructured’ (or non-sparse) transform. We show promising performance and speedups in image denoising using the learned doubly sparse transforms compared to approaches involving learned synthesis dictionaries such as the K-SVD algorithm. In the third part of this thesis, we further develop the alternating algorithms for learning unstructured (non-sparse) well-conditioned, or orthonormal, square sparsifying transforms. While in the first part of the thesis we provided an iterative conjugate gradient method for the transform update step, here we instead derive efficient, analytical closed-form solutions for the transform update. Importantly, we establish that the proposed algorithms are globally convergent to the set of local minimizers of the non-convex transform learning problems. In practice, our algorithms are shown to be insensitive to initialization. In the next part of the thesis, we focus on compressed sensing (CS), which exploits the sparsity of images or image patches in a transform domain or synthesis dictionary to reconstruct images from highly undersampled or compressive measurements.
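To give a flavor of the closed-form transform update mentioned above: in the orthonormal case, updating W for fixed signals Y and sparse codes X reduces to an orthogonal Procrustes problem solved by a single SVD. This is a generic sketch of that well-known step, not the thesis's exact (well-conditioned, non-orthonormal) update; the function name is hypothetical:

```python
import numpy as np

def orthonormal_transform_update(Y, X):
    """Closed-form transform update for the orthonormal case:
    minimize ||W @ Y - X||_F^2 over orthonormal W (an orthogonal
    Procrustes problem), solved via one SVD of Y @ X.T."""
    U, _, Vt = np.linalg.svd(Y @ X.T)
    return Vt.T @ U.T          # optimal orthonormal W = V @ U.T
```

One SVD of a small (patch-sized) square matrix per update replaces the inner conjugate gradient iterations of the earlier algorithm.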
Specifically, we focus on the subject of blind compressed sensing, where the underlying sparsifying transform is unknown a priori, and propose a framework to simultaneously reconstruct the underlying image(s)/volume(s) as well as the square sparsifying transform from highly undersampled measurements. The proposed block coordinate descent type algorithms involve highly efficient closed-form optimal updates. Importantly, we prove that although the proposed blind compressed sensing formulations are highly nonconvex, our algorithms converge to the set of critical points of the objectives defining the formulations. We illustrate the usefulness of the proposed framework for magnetic resonance imaging (MRI) reconstruction from highly undersampled k-space measurements. As compared to previous state-of-the-art methods involving the synthesis model, our approach is 10x faster for reconstructing 2D MR images, while also providing promising reconstruction quality. The proposed transform-based blind compressed sensing has the potential to revolutionize medical imaging technologies by highly accelerating both the imaging and image reconstruction processes. In the fifth part of this thesis, we study the design of sampling schemes for compressed sensing MRI. The (pseudo) random sampling schemes used most often for CS may have good theoretical asymptotic properties; however, with limited data they may be far from optimal. Therefore, we propose a novel framework for improved adaptive sampling schemes for highly undersampled CS MRI. While the proposed framework is general, we apply it in conjunction with some recent MRI reconstruction algorithms. Numerical experiments demonstrate that our adaptive sampling scheme can provide significant improvements in image reconstruction quality for MRI compared to non-adapted methods. In the next part of the thesis, we develop a methodology for online learning of square sparsifying transforms.
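The block coordinate descent idea behind blind compressed sensing can be sketched in a heavily simplified toy form: alternate exact sparse coding, a closed-form transform update, and a closed-form least-squares signal update. This sketch treats whole signals as single "patches", assumes an orthonormal W and generic linear measurements, and is not the thesis's MRI algorithm; all names are hypothetical:

```python
import numpy as np

def blind_cs_bcd(A, Z, s, n_iters=50, nu=1.0, seed=0):
    """Toy block coordinate descent for blind compressed sensing with an
    orthonormal transform W: given measurements Z = A @ X_true (column-wise),
    alternate (1) exact sparse coding by hard thresholding,
    (2) closed-form transform update (an orthogonal Procrustes step),
    (3) closed-form least-squares update of the reconstructed signals X."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    X = A.T @ Z                                   # simple adjoint initialization
    W = np.linalg.qr(rng.standard_normal((n, n)))[0]  # random orthonormal init
    G = np.linalg.inv(A.T @ A + nu * np.eye(n))   # W orthonormal => W.T @ W = I
    for _ in range(n_iters):
        C = W @ X
        B = np.zeros_like(C)                      # sparse codes
        for j in range(C.shape[1]):
            keep = np.argsort(np.abs(C[:, j]))[-s:]
            B[keep, j] = C[keep, j]
        U, _, Vt = np.linalg.svd(X @ B.T)         # transform update
        W = Vt.T @ U.T
        X = G @ (A.T @ Z + nu * W.T @ B)          # signal update
    return X, W
```

Each of the three blocks has a cheap exact solution, which is what makes this style of algorithm fast relative to synthesis-dictionary-based reconstruction.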
Such online learning is particularly useful when dealing with big data, and for signal processing applications such as real-time sparse representation and denoising. The proposed transform learning algorithms are shown to have a much lower computational cost than online synthesis dictionary learning. In practice, the sequential learning of a sparsifying transform typically converges much faster than batch mode transform learning. Preliminary experiments show the usefulness of the proposed schemes for sparse representation (compression), and denoising. We also prove that although the associated optimization problems are non-convex, our online transform learning algorithms are guaranteed to converge to the set of stationary points of the learning problem. The guarantee relies on a few (easy to verify) assumptions. In the seventh part of this thesis, we propose a novel convex formulation for doubly sparse square transform learning. The proposed formulation has similarities to traditional least squares optimization with $\ell_1$ regularization. Our convex learning algorithm is a modification of FISTA, and is guaranteed to converge to a global optimum, and moreover converges quickly. We also study two non-convex variants of the proposed convex formulation, and provide a local convergence proof for the algorithm for one of them. These proposed non-convex variants use the $\ell_0$ "norm" for measuring the sparsity of the transform and/or sparse code. We show the superior promise of our learned transforms here as compared to analytical sparsifying transforms such as the DCT for image representation. In these examples, the performance is sometimes comparable to the previously proposed non-convex (non-guaranteed) doubly sparse transform learning schemes. While we studied the learning of square transforms in the initial parts of the thesis, in the eighth part of the thesis, we instead briefly study the learning of tall or overcomplete sparsifying transforms from data.
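An online (sequential) variant of transform learning can be sketched by accumulating a running cross-correlation statistic and refreshing the transform after each new signal. Again an illustrative sketch in the orthonormal setting, not the thesis's algorithm; names are hypothetical:

```python
import numpy as np

def online_transform_learning(signals, s, seed=0):
    """Toy online learning of an orthonormal sparsifying transform: for
    each incoming signal, sparse-code it with the current W by hard
    thresholding, add its contribution to a running sum of y @ x.T
    cross-correlations, and refresh W by a Procrustes-style SVD of the
    running statistic. Per-signal cost stays fixed as data streams in."""
    rng = np.random.default_rng(seed)
    W, M = None, None
    for y in signals:
        if W is None:
            n = y.shape[0]
            W = np.linalg.qr(rng.standard_normal((n, n)))[0]
            M = np.zeros((n, n))                  # running sufficient statistic
        c = W @ y
        x = np.zeros_like(c)
        keep = np.argsort(np.abs(c))[-s:]
        x[keep] = c[keep]                         # exact sparse coding of new signal
        M += np.outer(y, x)
        U, _, Vt = np.linalg.svd(M)
        W = Vt.T @ U.T                            # refreshed orthonormal transform
    return W
```

Because only a fixed-size statistic is stored, memory use does not grow with the number of signals processed, which is what makes this style of update suited to big data and real-time use.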
We propose various penalties that control the sparsifying ability, condition number, and incoherence of the learned transforms. Our alternating algorithm for overcomplete transform learning converges empirically, and significantly improves the quality of the learned transform over the iterations. We present examples demonstrating the promising performance of adaptive overcomplete transforms over adaptive overcomplete synthesis dictionaries learned using the popular K-SVD algorithm, in the application of image denoising. The overcomplete transforms also denoise better than adaptive square transforms. In the final part of the thesis, we explore the idea of learning efficient structured overcomplete sparsifying transforms. Since natural images typically contain diverse textures that cannot be sparsified well by a single transform, we propose a union of sparsifying transforms model. Sparse coding in this model reduces to a form of transform-domain clustering. This makes the model appealing for classification tasks. The proposed model is also equivalent to a structured overcomplete sparsifying transform model with block cosparsity, dubbed OCTOBOS. The alternating algorithm introduced for learning such transforms involves simple closed-form solutions. A theoretical analysis provides a convergence guarantee for this algorithm. It is shown to be globally convergent to the set of partial minimizers of the non-convex OCTOBOS (or, union of transforms) learning problem. We also show that under certain conditions, the algorithm converges to the set of stationary points of the overall objective. When applied to images, the algorithm learns a collection of well-conditioned square transforms, and a good clustering of patches or textures. The resulting sparse representations for the images are much better than those obtained with a single learned transform, or with analytical transforms.
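The "sparse coding reduces to transform-domain clustering" observation can be made concrete: each signal is assigned to whichever transform in the union gives the smallest s-term sparsification error. A minimal sketch (illustrative; the function name and interface are hypothetical, not from the thesis):

```python
import numpy as np

def cluster_and_code(Ws, Y, s):
    """Sparse coding in a union-of-transforms model: each column of Y is
    assigned to the transform in the list Ws whose s-term transform-domain
    approximation error is smallest (transform-domain clustering), and its
    sparse code under that transform is returned."""
    n, N = Y.shape
    labels = np.zeros(N, dtype=int)
    X = np.zeros((n, N))
    for j in range(N):
        best_err = np.inf
        for k, W in enumerate(Ws):
            c = W @ Y[:, j]
            xs = np.zeros_like(c)
            keep = np.argsort(np.abs(c))[-s:]
            xs[keep] = c[keep]
            err = np.sum((c - xs) ** 2)       # sparsification error for class k
            if err < best_err:
                best_err = err
                labels[j] = k
                X[:, j] = xs
    return labels, X
```

The cluster labels come for free from the coding step, which is why the model lends itself to classification as well as reconstruction.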
We show the promising performance of the proposed approach in image denoising, which compares quite favorably with approaches involving a single learned square transform, an overcomplete synthesis dictionary, or Gaussian mixture models. The proposed denoising method is also faster than the synthesis dictionary based approach.
Issue Date: 2014-12-05
Type: Thesis
URI: http://hdl.handle.net/2142/88322
Rights Information: Copyright 2014 Saiprasad Ravishankar
Date Available in IDEALS: 2015-09-29
Date Deposited: December 2

