Files in this item

File: CHEN-DISSERTATION-2018.pdf (57MB), Restricted to U of Illinois
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: Image processing and synthesis: From hand-crafted to data-driven modeling
Author(s): Chen, Chen
Director of Research: Do, Minh N.
Doctoral Committee Chair(s): Do, Minh N.
Doctoral Committee Member(s): Forsyth, David A.; Hart, John C.; Koltun, Vladlen; Schwing, Alexander
Department / Program: Electrical & Computer Eng
Discipline: Electrical & Computer Engr
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): image processing
deep learning
image dehazing
image inpainting
low-light imaging
Abstract: This work investigates image and video restoration problems using effective optimization algorithms. First, we study single image dehazing with the goal of suppressing artifacts in compressed or noisy images and videos. Our method is based on the linear haze model and minimizes the gradient residual between the input and output images, which suppresses new artifacts that are not visible in the input images. Second, we propose a new method for image inpainting using deep neural networks. Given a set of training data, deep generative models can generate high-quality natural images that follow the same distribution. We search for the nearest neighbor in the latent space of the deep generative model using a weighted context loss and a prior loss; this latent code is then converted into a clean, uncorrupted version of the input image. Third, we study the problem of recovering high-quality images from very noisy raw data captured in low-light conditions with short exposures. We build deep neural networks that learn the camera processing pipeline specifically for low-light raw data with an extremely low signal-to-noise ratio (SNR). To train the networks, we capture a new dataset of more than five thousand images with short-exposure and long-exposure pairs. Promising results are obtained compared with the traditional image processing pipeline. Finally, we propose a new method for extreme low-light video processing. The raw video frames are pre-processed using spatio-temporal denoising, and a neural network is trained to remove the error in the pre-processed data, learning to perform the image processing pipeline while encouraging temporal smoothness of the output. Both quantitative and qualitative results demonstrate that the proposed method significantly outperforms existing methods and paves the way for future research in this area.
Issue Date: 2018-12-06
Type: Text
URI: http://hdl.handle.net/2142/102840
Rights Information: Copyright 2018 Chen Chen
Date Available in IDEALS: 2019-02-07
Date Deposited: 2018-12
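
The inpainting step described in the abstract (searching the generative model's latent space with a weighted context loss plus a prior loss) can be sketched as follows. This is an illustrative PyTorch sketch under stated assumptions, not the dissertation's code: the pretrained generator G, discriminator D, the latent_dim attribute, the mask, the per-pixel weight, and the loss weighting lam are all hypothetical placeholders chosen to match the loss names in the abstract.

import torch

def inpaint(G, D, y, mask, weight, steps=1000, lr=0.1, lam=0.1):
    """Search the generator's latent space for a code whose output matches
    the observed pixels of the corrupted image y, then return G's output
    as the recovered clean image."""
    # Latent code to optimize; latent_dim is an assumed attribute of G.
    z = torch.randn(1, G.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x = G(z)  # candidate clean image generated from the current code
        # Weighted context loss: match only the uncorrupted (masked-in) pixels.
        context = (weight * mask * (x - y)).abs().sum()
        # Prior loss: keep G(z) on the natural-image manifold, here via a
        # discriminator assumed to output probabilities in (0, 1).
        prior = torch.log(1.0 - D(x) + 1e-8).mean()
        loss = context + lam * prior
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()  # recovered image from the optimized latent code

The design choice implied by the abstract is that only the latent code is optimized while the generator's weights stay fixed, so the reconstructed image remains on the learned natural-image manifold.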

