Files in this item

Choi_Jaesik.pdf (application/pdf, 2 MB; no description provided)
Title: Lifted Inference for Relational Hybrid Models
Author(s): Choi, Jaesik
Director of Research: Amir, Eyal
Doctoral Committee Chair(s): Amir, Eyal
Doctoral Committee Member(s): Roth, Dan; LaValle, Steven M.; Poole, David
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Probabilistic Graphical Models
Relational Hybrid Models
Lifted Inference
First-Order Probabilistic Models
Probabilistic Logic
Kalman filter
Relational Kalman filter
Variational Learning
Markov Logic Networks
Abstract: Probabilistic Graphical Models (PGMs) promise to play a prominent role in many complex real-world systems. Probabilistic Relational Graphical Models (PRGMs) scale up the representation and learning of PGMs. Answering queries with PRGMs enables many current and future applications, such as medical informatics, environmental engineering, financial forecasting, and robot localization. Scaling inference algorithms to large models is the key challenge in scaling up current applications and enabling future ones. This thesis presents new insights into large-scale probabilistic graphical models and fresh ideas for maintaining a compact structure when answering queries over large, continuous models. These insights lead to a key contribution, the Lifted Relational Kalman filter (LRKF), an efficient estimation algorithm for large-scale linear dynamic systems: the relational Kalman filter scales exact Kalman filtering from 1,000 to 1,000,000,000 variables. Another key contribution is a proof that commonly used probabilistic first-order languages, including Markov Logic Networks (MLNs) and First-Order Probabilistic Models (FOPMs), can be reduced to compact probabilistic graphical representations under reasonable conditions. Specifically, the thesis shows that aggregate operators and existential quantification in these languages are accurately approximated by linear constraints in the Gaussian distribution. In general, probabilistic first-order languages are transformed into nonparametric variational models in which lifted inference algorithms can solve inference problems efficiently.
Issue Date: 2012-06-27
Rights Information: Copyright 2012 Jaesik Choi
Date Available in IDEALS: 2014-06-28
Date Deposited: 2012-05
