Files in this item

File: SUBAKAN-DISSERTATION-2018.pdf (application/pdf, 7 MB)

Title: Generative modeling of sequential data
Author(s): Subakan, Y. Cem
Director of Research: Smaragdis, Paris
Doctoral Committee Chair(s): Smaragdis, Paris
Doctoral Committee Member(s): Forsyth, David; Hasegawa-Johnson, Mark; Saatci, Yunus
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Generative Modeling, Sequential Modeling, Generative Adversarial Networks, Probabilistic Modeling, Method of Moments
Abstract: In this thesis, we investigate various approaches to generative modeling, with a special emphasis on sequential data. Namely, we develop methodologies to deal with issues regarding representation (modeling choices), learning paradigm (e.g. maximum likelihood, method of moments, adversarial training), and optimization.

For the representation aspect, we make the following contributions:
- We argue that using a multi-modal latent representation (unlike popular methods such as variational autoencoders or generative adversarial networks) significantly enhances generative model learning performance, as evidenced by the experiments we conduct on the handwritten digit dataset (MNIST) and the celebrity faces dataset (CELEB-A).
- We prove that the standard factorial Hidden Markov model defined in the literature is not statistically identifiable. We propose two alternative identifiable models, and show their validity on unsupervised source separation examples.
- We experimentally show that using a convolutional neural network architecture provides a performance boost over time-agnostic methods such as non-negative matrix factorization and autoencoders.
- We experimentally show that using a recurrent neural network with a diagonal recurrent matrix increases the convergence speed and final accuracy of the model in most cases in a symbolic music modeling task.

For the learning paradigm aspect, we make the following contributions:
- We propose a method-of-moments-based parameter learning framework for Hidden Markov Models (HMMs) with special transition structures, such as mixtures of HMMs, switching HMMs, and HMMs with mixture emissions.
- We propose a new generative model learning method which performs approximate maximum likelihood parameter estimation for implicit generative models.
- We argue that using an implicit generative model for audio source separation increases performance over models which specify a cost function, such as NMF or autoencoders trained via maximum likelihood. We show performance improvements on speech mixtures created from the TIMIT dataset.

For the optimization aspect, we make the following contributions:
- We show that the method-of-moments framework we propose in this thesis boosts model performance when used as an initialization scheme for the expectation-maximization algorithm.
- We propose new optimization algorithms for the identifiable alternatives to the factorial HMM.
- We propose a two-step optimization algorithm for learning implicit generative models which efficiently learns multi-modal latent representations.
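The diagonal-recurrent-matrix idea mentioned among the representation contributions can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the thesis's exact architecture: it uses a simple Elman-style tanh cell, and all names (`diagonal_rnn_step`, `w_h`, etc.) are made up for illustration. The point it shows is structural: replacing the dense hidden-to-hidden matrix with a diagonal one turns the recurrent transform into an elementwise product, decoupling the hidden units in the recurrence and shrinking the recurrent parameter count from H*H to H.

```python
import numpy as np

def dense_rnn_step(h, x, W_h, W_x, b):
    # Standard recurrent update: full H-by-H hidden-to-hidden matrix W_h.
    return np.tanh(W_h @ h + W_x @ x + b)

def diagonal_rnn_step(h, x, w_h, W_x, b):
    # Diagonal variant: the recurrent matrix is stored as a length-H
    # vector, so the hidden-to-hidden transform is an elementwise
    # product instead of a matrix-vector product.
    return np.tanh(w_h * h + W_x @ x + b)

rng = np.random.default_rng(0)
H, D = 8, 4                       # hidden size, input size (illustrative)
h = np.zeros(H)
w_h = rng.standard_normal(H)      # diagonal recurrent weights (vector)
W_x = rng.standard_normal((H, D)) # input-to-hidden weights
b = np.zeros(H)

for t in range(5):                # run a few steps over random inputs
    h = diagonal_rnn_step(h, rng.standard_normal(D), w_h, W_x, b)
print(h.shape)  # (8,)
```

In this toy the two cells are drop-in replacements for one another; the diagonal version is the one the thesis reports as converging faster in the symbolic music task.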
Issue Date: 2018-04-13
Rights Information: Copyright 2018 Y. Cem Subakan
Date Available in IDEALS: 2018-09-04
Date Deposited: 2018-05
