Files in this item

File: LEE-DISSERTATION-2019.pdf (483kB)
Description: (no description provided)
Format: PDF (application/pdf)
Access: Restricted to U of Illinois

Description

Title: Robustness and generalization guarantees for statistical learning of generative models
Author(s): Lee, Jaeho
Director of Research: Raginsky, Maxim
Doctoral Committee Chair(s): Raginsky, Maxim
Doctoral Committee Member(s): Srikant, Rayadurgam; Veeravalli, Venugopal; Dokmanić, Ivan
Department / Program: Electrical & Computer Eng
Discipline: Electrical & Computer Engr
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): statistical learning; minimax learning; learning a coding scheme; representation learning
Abstract: We apply tools from classical statistical learning theory to analyze theoretical properties of modern machine learning problems that are typically phrased in terms of generative models. By combining standard methods from the theory of empirical processes with ideas from optimal transport and signal recovery, we formally establish generalization and robustness guarantees for existing and newly proposed algorithms. Specifically, we consider the following three problems. First, we tackle domain adaptation, where the training data and the test data are drawn from two distributions that are related but not identical. We devise an empirical risk minimization algorithm based on local worst-case risks and provide generalization and excess-risk guarantees for the learned hypothesis that are robust to drifts in the generative model. Second, we consider the learning of coding schemes, where the goal is to minimize the reconstruction risk of the original signal. This task can be viewed as approximating the signal-generating distribution by pushforwards of arbitrary distributions via reconstruction maps. Using reconstruction errors as hypotheses, we provide learning guarantees based on notions from optimal transport and classical statistical learning. Third, we propose a framework for assessing representation learning algorithms by evaluating how well they estimate the representation generating the signal. Using polyhedral estimates from the signal recovery literature, we establish provably near-optimal guarantees for the topic model.
Issue Date: 2019-01-15
Type: Text
URI: http://hdl.handle.net/2142/104957
Rights Information: Copyright 2019 Jaeho Lee
Date Available in IDEALS: 2019-08-23
Date Deposited: 2019-05

