Files in this item

File: Abner_Guzman Rivera.pdf (26MB)
Format: application/pdf
Description: (no description provided)
Description

Title: Multi-output structured learning
Author(s): Guzman Rivera, Abner
Director of Research: Rutenbar, Robin A.
Doctoral Committee Chair(s): Rutenbar, Robin A.
Doctoral Committee Member(s): Forsyth, David A.; Roth, Dan; Batra, Dhruv; Kohli, Pushmeet
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Structured Output Prediction; Structured Learning; Multi-Output Structured Learning; Multiple Outputs
Abstract: Real-world applications of Machine Learning (ML) require modeling and reasoning about complex, heterogeneous, and high-dimensional data. Probabilistic Inference and Structured-Output Prediction (SOP) are frameworks within ML that enable systems to learn and reason about complex output spaces by exploiting conditional independence assumptions. SOP systems are capable of coping with exponentially large numbers of possibilities, e.g., all segmentations of an image (i.e., labelings of every pixel with a semantic category), all English translations of a Chinese sentence, or all 3D configurations of a fixed-length sequence of (a priori unknown) amino acids. Indeed, SOP has led to state-of-the-art results in applications from various fields [Bakir et al., 2007]. Despite their success and generality, the application of SOP systems to real-world tasks is most severely limited by intractability issues. In brief, intractability is a consequence of high-order interactions in real-world phenomena. For this reason, researchers adopt performance-limiting simplifying assumptions (e.g., of conditional independence) within their models and forgo optimality guarantees in their inference algorithms. Learning SOP models from data is also intractable in general, and thus further approximations are introduced in the learning task. Additionally, labeled training data is expensive and most often limited and biased. As a consequence of all of these difficulties, the SOP systems used in practice are plagued with limitations and inaccuracies. Further complicating the above is the fact that uncertainty is inherent to real-world applications of SOP, e.g., the data input to SOP systems is noisy, incomplete, or otherwise ambiguous; in some cases, the input-output mapping is in effect one-to-many. As a result, the distributions over outputs we are interested in modeling are in general multi-modal.
In this work, we propose to increase the expressivity and performance of SOP models by specifying and training models to produce fixed-size tuples of structured outputs. We achieve this by constructing "portfolios" of structured prediction models that make independent predictions at test time but are trained jointly to produce sets of relevant and diverse hypotheses. In some sense, the motivation for decomposition in this thesis is akin to the spirit of mixture models or ensemble approaches. However, in this work we dispense with component weights and delay commitment to single predictions. In doing so, we advocate for pipelined approaches where multiple hypotheses are fed forward for refinement, aggregation, or simulation, or as inputs to increasingly complex predictive tasks. In these settings, it is often practical and advantageous for certain stages to be informed by higher-order features (e.g., inter-hypothesis features), by additional information available at test time (e.g., a generative procedure, or temporal or textual context), or by a user/expert in the loop. We show that our methods lead to predictions of higher accuracy compared to current methods and that we are able to leverage multiple predictions to outperform the state of the art in end-to-end applications.
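The core idea of jointly training a portfolio whose members specialize on different modes of a one-to-many mapping can be illustrated with a minimal sketch. The following toy example is our own construction (not code from the dissertation): it uses a multiple-choice-learning-style "oracle" loss, where for each training example only the currently best predictor is updated, so two linear predictors learn the two modes of a bimodal regression problem.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-to-many data: each input x maps to y = +x or y = -x at random,
# so the conditional distribution over outputs is bimodal.
n = 2000
x = rng.uniform(-1.0, 1.0, size=n)
sign = rng.choice([-1.0, 1.0], size=n)
y = sign * x

# Portfolio of M independent linear predictors: y_hat_m = w[m] * x.
M = 2
w = rng.normal(size=M)  # random init breaks symmetry between members

lr = 0.1
for epoch in range(50):
    for xi, yi in zip(x, y):
        preds = w * xi                    # each member's prediction
        errs = (preds - yi) ** 2          # per-member squared error
        m = int(np.argmin(errs))          # oracle: pick the best member
        # Update only the winning member, so members specialize on modes.
        w[m] -= lr * 2.0 * (preds[m] - yi) * xi

# After training, the members should approach slopes -1 and +1,
# and the min-over-members (oracle) error should be near zero.
slopes = sorted(w)
oracle_mse = float(np.mean(np.min((x[:, None] * w - y[:, None]) ** 2, axis=1)))
```

A single model trained on the same data with an ordinary averaged loss would collapse toward the slope 0 compromise; the oracle loss instead lets the set of predictions cover both modes, which is exactly the "relevant and diverse hypotheses" behavior the portfolio is after.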
Issue Date: 2014-09-16
URI: http://hdl.handle.net/2142/50456
Rights Information: Copyright 2014 by Abner Guzman Rivera. All rights reserved.
Date Available in IDEALS: 2014-09-16; 2016-09-22
Date Deposited: 2014-08

