Files in this item

File: WANG-THESIS-2017.pdf (2 MB)
Description: (no description provided)
Format: PDF (application/pdf)
Description

Title: Belief propagation generative adversarial networks
Author(s): Wang, Sifan
Advisor(s): Schwing, Alexander
Department / Program: Electrical & Computer Engineering
Discipline: Electrical & Computer Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: M.S.
Genre: Thesis
Subject(s): generative adversarial network; belief propagation; graphical model; penalty method
Abstract: Generative adversarial networks (GANs) are a class of generative models based on a minimax game. They have led to significant improvements in unsupervised learning, especially image generation. However, most GAN work learns the distribution of the input dataset through a multi-layer neural network that does not explicitly model the structure of the input variables. This can work well on large, relatively clean datasets, where the learning procedure averages over many inputs and assigns small weights to occasional noise. When the dataset is small or noisy, however, this approach can pick up spurious structure, reducing the quality of generated samples. In this thesis we propose a technique that models variable interactions by incorporating graphical models into the generative adversarial network. The proposed framework produces samples by passing random inputs through a neural network that constructs the local potentials of a graphical model; probabilistic inference in this graphical model then yields the marginal distributions. Message passing over discrete variables requires a table of local potential values whose size can be prohibitive for natural images. We therefore present a solution based on continuous variables with unary and pairwise Gaussian potentials, performing probabilistic inference via loopy belief propagation on continuous Markov random fields. Experiments on the MNIST dataset show that our model outperforms vanilla GANs when more than two iterations of belief propagation are used.
Issue Date: 2017-07-20
Type: Thesis
URI: http://hdl.handle.net/2142/98120
Rights Information: Copyright 2017 Sifan Wang
Date Available in IDEALS: 2017-09-29
Date Deposited: 2017-08
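The inference step named in the abstract — loopy belief propagation with unary and pairwise Gaussian potentials — can be illustrated with a minimal sketch. This is a hypothetical example, not the thesis code: it runs Gaussian belief propagation in information form, where the precision matrix `J` encodes the unary (diagonal) and pairwise (off-diagonal) potentials and `h` is the potential vector, and recovers the marginal means.

```python
import numpy as np

def gaussian_bp(J, h, iters=20):
    """Synchronous Gaussian belief propagation on the MRF defined by (J, h)."""
    n = len(h)
    # one directed edge per nonzero off-diagonal coupling
    edges = [(i, j) for i in range(n) for j in range(n)
             if i != j and J[i, j] != 0.0]
    Jm = {e: 0.0 for e in edges}  # message precisions, i -> j
    hm = {e: 0.0 for e in edges}  # message potentials, i -> j
    for _ in range(iters):
        new_Jm, new_hm = {}, {}
        for i, j in edges:
            # cavity terms: unary potential plus all incoming messages except j's
            Jc = J[i, i] + sum(Jm[(k, t)] for k, t in edges
                               if t == i and k != j)
            hc = h[i] + sum(hm[(k, t)] for k, t in edges
                            if t == i and k != j)
            new_Jm[(i, j)] = -J[i, j] ** 2 / Jc
            new_hm[(i, j)] = -J[i, j] * hc / Jc
        Jm, hm = new_Jm, new_hm
    # marginals: unary terms combined with all incoming messages
    Jhat = np.array([J[i, i] + sum(Jm[(k, t)] for k, t in edges if t == i)
                     for i in range(n)])
    hhat = np.array([h[i] + sum(hm[(k, t)] for k, t in edges if t == i)
                     for i in range(n)])
    return hhat / Jhat  # marginal means

# 4-variable chain MRF: unary precision 2.0, pairwise coupling -0.5
J = np.array([[ 2.0, -0.5,  0.0,  0.0],
              [-0.5,  2.0, -0.5,  0.0],
              [ 0.0, -0.5,  2.0, -0.5],
              [ 0.0,  0.0, -0.5,  2.0]])
h = np.array([1.0, 0.0, 0.0, 1.0])
means = gaussian_bp(J, h)
```

On a tree-structured graph such as this chain, the computed marginal means agree with the exact solution `np.linalg.solve(J, h)` after a few iterations; on loopy graphs convergence is not guaranteed in general, which is why the number of belief propagation iterations matters in the experiments the abstract reports.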

