
Files in this item

Inference with ... al Language Processing.pdf (579 kB, PDF; no description provided)

Title: Inference with Classifiers: A Study of Structured Output Problems in Natural Language Processing
Author(s): Punyakanok, Vasin
Subject(s): natural language processing
Abstract: A large number of problems in natural language processing (NLP) involve outputs with complex structure. Conceptually, the task in such problems is to assign values to multiple variables that represent the outputs of several interdependent components. A natural approach is to formulate the task as a two-stage process. In the first stage, the variables are assigned initial values by machine-learning-based programs. In the second, an inference procedure uses the outcomes of the first-stage classifiers, along with domain-specific constraints, to infer a globally consistent final prediction. This dissertation introduces a framework, inference with classifiers, to study such problems. The framework is applied to two important and fundamental NLP problems that involve complex structured outputs: shallow parsing and semantic role labeling. In shallow parsing, the goal is to identify syntactic phrases in sentences, which has been found useful in a variety of large-scale NLP applications. Semantic role labeling is the task of identifying predicate-argument structure in sentences, a crucial step toward a deeper understanding of natural language. For both tasks, we develop state-of-the-art systems that have been used in practice. Within this framework, we show the significance of incorporating constraints into the inference stage as a way to correct and improve the decisions of the stand-alone classifiers. Although incorporating constraints into inference necessarily improves global coherence, there is no guarantee of improvement in performance measured by the accuracy of the local predictions, the metric of interest for most applications. We develop a better theoretical understanding of this issue. Under a reasonable assumption, we prove a sufficient condition guaranteeing that using constraints cannot degrade performance with respect to Hamming loss. In addition, we provide an experimental study suggesting that constraints can improve performance even when the sufficient conditions are not fully satisfied.
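The two-stage idea in the abstract can be sketched on a toy BIO phrase-chunking example. The per-token scores, the single BIO constraint, and the exhaustive search below are all illustrative assumptions, not material from the dissertation; a real system would use Viterbi decoding or integer linear programming in the inference stage rather than enumerating sequences.

```python
# Minimal sketch of "inference with classifiers" on toy BIO chunking.
# Stage 1 produces per-token classifier scores; stage 2 searches for the
# highest-scoring tag sequence that satisfies a domain constraint.
import itertools

TAGS = ["B", "I", "O"]

# Stage 1: hard-coded toy scores standing in for classifier outputs.
# The local argmax sequence (I, I, O) violates the BIO constraint,
# since "I" cannot begin a phrase.
scores = [
    {"B": 0.2, "I": 0.5, "O": 0.3},  # token 0
    {"B": 0.1, "I": 0.7, "O": 0.2},  # token 1
    {"B": 0.3, "I": 0.1, "O": 0.6},  # token 2
]

def valid(seq):
    """Domain constraint: 'I' may only follow 'B' or 'I'."""
    prev = "O"
    for tag in seq:
        if tag == "I" and prev == "O":
            return False
        prev = tag
    return True

def infer(scores):
    """Stage 2: best globally consistent sequence (exhaustive search)."""
    best, best_score = None, float("-inf")
    for seq in itertools.product(TAGS, repeat=len(scores)):
        if not valid(seq):
            continue
        s = sum(sc[t] for sc, t in zip(scores, seq))
        if s > best_score:
            best, best_score = list(seq), s
    return best

print(infer(scores))  # → ['B', 'I', 'O']
```

Note how inference overrides the first token: the classifier locally prefers "I", but the constrained search selects "B" so the overall sequence is coherent, which is exactly the correction effect the abstract attributes to constraints.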
Issue Date: 2005-12
Genre: Technical Report
Other Identifier(s): UIUCDCS-R-2005-2626
Rights Information: You are granted permission for the non-commercial reproduction, distribution, display, and performance of this technical report in any format, BUT this permission is only for a period of 45 (forty-five) days from the most recent time that you verified that this technical report is still available from the University of Illinois at Urbana-Champaign Computer Science Department under terms that include this permission. All other rights are reserved by the author(s).
Date Available in IDEALS: 2009-04-20
