Structural consistency for diverse video colorization
Wang, Alan
Permalink
https://hdl.handle.net/2142/104040
Description
Title
Structural consistency for diverse video colorization
Author(s)
Wang, Alan
Contributor(s)
Schwing, Alexander
Issue Date
2019-05
Keyword(s)
Colorization
Variational Autoencoder
Gaussian Conditional Random Field
Abstract
Colorizing gray-level videos is an important task in the media and advertising
industry. Intelligently learning believable and structurally consistent
colorings over large, intractable video spaces poses several problems. First,
there is a lack of proper datasets for training. Second, colorization is
inherently ambiguous, since many different shades are often plausible. Third,
one of the most obvious artifacts, structural inconsistency, is rarely
addressed by existing methods, which predict chrominance independently for
every pixel.
We address all of the above-mentioned challenges in two ways. First, we generate
a diverse video colorization dataset by editing scenes and manipulating
textures from the Grand Theft Auto V video game. Second, we propose a model
for diverse and structurally consistent video colorization that uses a
Gaussian conditional random field based variational autoencoder formulation
(VAE-GCRF). We show our results on the generated dataset and compare them
to several baseline models.
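To illustrate the structural-consistency idea behind a Gaussian CRF, the sketch below shows mean inference for a toy 1-D chain of pixels: per-pixel chrominance predictions (as a VAE decoder might produce) are combined with a pairwise smoothness term by solving a linear system. This is a minimal, hypothetical example of Gaussian CRF inference in general, not the thesis's actual VAE-GCRF architecture; the function name, the chain graph, and the weight `lam` are all illustrative assumptions.

```python
import numpy as np

def gcrf_smooth(unary, lam=1.0):
    """Mean inference for a chain-structured Gaussian CRF (toy example).

    unary: (n,) independent per-pixel chrominance predictions.
    The mean/MAP estimate minimizes ||x - unary||^2 + lam * sum_i (x_i - x_{i+1})^2,
    which amounts to solving (I + lam * L) x = unary, where L is the graph
    Laplacian of the pixel chain. lam trades data fidelity against
    pairwise smoothness (structural consistency).
    """
    n = len(unary)
    # Laplacian of a path graph: degree on the diagonal, -1 between neighbours.
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0
    A = np.eye(n) + lam * L
    return np.linalg.solve(A, unary)

# Noisy, pixel-independent predictions get pulled toward their neighbours:
u = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
x = gcrf_smooth(u, lam=2.0)
```

Because the rows of the Laplacian sum to zero, the smoothed output preserves the total (and hence the mean) of the unary predictions while reducing the variation between adjacent pixels.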