Files in this item

Kun_Deng.pdf (application/pdf, 3 MB; no description provided)

Title: Model reduction of Markov chains with applications to building systems
Author(s): Deng, Kun
Director of Research: Mehta, Prashant G.
Doctoral Committee Chair(s): Mehta, Prashant G.
Doctoral Committee Member(s): Meyn, Sean P.; Beck, Carolyn L.; Dullerud, Geir E.; Salapaka, Srinivasa M.
Department / Program: Mechanical Sci & Engineering
Discipline: Mechanical Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Model Reduction; Markov Chain; Hidden Markov Model; Building System; Information Theory; Control System
Abstract: The Markov chain is an important modeling framework in applied science and engineering, e.g., in Markov chain Monte Carlo methods and Markov decision processes. A fundamental difficulty with Markov chain models is that their dimension can be very large in practice, which makes the models hard to manipulate and analyze. Hence, model reduction of Markov chains is an important problem relevant to many applications. In the first part of this thesis, the model reduction problem for Markov chains is investigated. An information-theoretic method is proposed to reduce Markov chains via aggregation of states. The Kullback-Leibler (K-L) divergence rate, a commonly used pseudo-metric in statistics and information theory, is employed to measure the difference between two Markov chains. The proposed framework reveals a connection to the spectral properties of Markov chains. In particular, the significance of the second eigenvector is explained in information-theoretic terms for the first time. This result leads to a practical recursive model-reduction algorithm based on spectral analysis, together with a limited set of error bounds for the model reduction of Markov chains. Besides the spectral method, a simulation-based method is also proposed to perform state aggregation of the Markov chain. The main idea is to recast the model reduction problem as an infinite-horizon average-cost optimal control problem, in which an optimal policy corresponds to an optimal aggregation of the state space. The optimal control problem is simplified using an approximate dynamic programming (ADP) approach: the policy space is relaxed, and based on this relaxation a parameterization of the set of optimal policies is introduced. This makes possible a stochastic approximation approach for computing the best policy within a given parameterized class. Convergence of the stochastic approximation approach is established using the ODE method.
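As a toy illustration of the spectral idea, the sign structure of the second eigenvector of a reversible transition matrix suggests a two-way partition of the state space. The sketch below is a minimal plain-Python illustration, not the thesis's algorithm: the example chain, the deflation-based power iteration, and the sign-based split are all assumptions made for the example.

```python
# Illustrative sketch: partition a reversible Markov chain by the sign
# structure of the second eigenvector of its transition matrix P.

def matvec(P, v):
    return [sum(P[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def second_eigvec(P, iters=200):
    """Power iteration for the second eigenvector of a symmetric stochastic P.

    The top eigenvector of P is the all-ones vector (eigenvalue 1); removing
    the mean at each step deflates it, so the iteration converges to the
    eigenvector of the second-largest eigenvalue.
    """
    n = len(P)
    v = [i + 1 for i in range(n)]            # arbitrary non-constant start
    for _ in range(iters):
        v = matvec(P, v)
        mean = sum(v) / n
        v = [x - mean for x in v]            # deflate the top eigenvector
        norm = max(abs(x) for x in v) or 1.0
        v = [x / norm for x in v]            # rescale for numerical stability
    return v

# A 4-state symmetric chain with two weakly coupled pairs of states.
P = [[0.45, 0.45, 0.05, 0.05],
     [0.45, 0.45, 0.05, 0.05],
     [0.05, 0.05, 0.45, 0.45],
     [0.05, 0.05, 0.45, 0.45]]

v2 = second_eigvec(P)
partition = [0 if x >= 0 else 1 for x in v2]
print(partition)   # states {0, 1} and {2, 3} land in different groups
```

Here the second eigenvector aligns with the nearly decoupled blocks of the chain, so thresholding its sign recovers the natural two-state aggregation.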
The aggregation-based model reduction method for Markov chains is extended to hidden Markov models (HMMs), which are Markov chain models with an unobserved state process. As before, the state space is aggregated, or partitioned, to reduce the complexity of the HMM. The optimal aggregation is obtained by minimizing the K-L divergence rate between the laws of the observation processes. The optimal aggregated HMM is given as a function of the partition function of the state space, and the optimal partition is obtained using a recursive stochastic approximation learning algorithm that can be implemented from a single sample path of the HMM. Convergence of the algorithm is established using ergodicity of the filtering process and standard stochastic approximation arguments. In the second part of this thesis, the modeling and control of building systems are investigated. First, a nonlinear resistor-capacitor (RC) network model of a multi-zone building is established to capture the building's thermal dynamics. Then an aggregation-based model reduction method is proposed that preserves the structure of the building thermal model; that is, the reduced model is still a nonlinear RC network. This is achieved by obtaining super-nodes via aggregation and determining a super-capacitance for each super-node and a super-resistance for each edge between two adjacent super-nodes. The aggregation-based approach proposed here builds on the model reduction method for Markov chains described in the first part of this thesis: the main idea is to relate the linear portion of the multi-zone thermal model to a continuous-time Markov chain, and to extend the model reduction framework for Markov chains to the nonlinear full-order building thermal model. A decentralized optimal control strategy is also proposed for a multi-zone building in the second part, where model complexity is mitigated using a two-pronged approach.
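The structure-preserving aggregation step can be illustrated with a toy RC network. The combination rules below (summing capacitances within a super-node and combining inter-group resistances in parallel) are assumed for illustration only and are not necessarily the thesis's exact construction.

```python
# Hedged sketch of structure-preserving RC-network aggregation: merging zones
# into a super-node sums their capacitances, and the super-resistance between
# two super-nodes combines all inter-group resistances in parallel.

def aggregate_rc(C, R, groups):
    """C: capacitance per node; R: {(i, j): resistance}; groups: list of node sets."""
    super_C = [sum(C[i] for i in g) for g in groups]
    super_R = {}
    for a in range(len(groups)):
        for b in range(a + 1, len(groups)):
            # Parallel resistors: conductances (1/R) add across the cut.
            g = sum(1.0 / r for (i, j), r in R.items()
                    if (i in groups[a] and j in groups[b])
                    or (i in groups[b] and j in groups[a]))
            if g > 0:
                super_R[(a, b)] = 1.0 / g
    return super_C, super_R

# Four zones merged pairwise; two unit resistors link the two groups.
C = [1.0, 2.0, 3.0, 4.0]
R = {(0, 1): 0.5, (2, 3): 0.5, (0, 2): 1.0, (1, 3): 1.0}
super_C, super_R = aggregate_rc(C, R, [{0, 1}, {2, 3}])
print(super_C)           # [3.0, 7.0]
print(super_R[(0, 1)])   # 0.5  (two 1.0-ohm resistors in parallel)
```

The reduced object is again a (smaller) RC network, which is the structural property the abstract emphasizes.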
First, we use the aggregation-based model reduction technique introduced in this part to construct a reduced-order model of the multi-zone building's thermal dynamics. Second, we use mean-field intuition from statistical mechanics so that the effect of the other zones on a particular zone is captured through a mean-field model; the whole model then does not have to be used in computing the controls over short time scales. A local optimal zonal control law is designed based on the local model of the thermal dynamics and its interaction with the building via the mean field. The methodology is shown to yield distributed control laws that can be easily implemented on large-scale problems. In the third part of this thesis, the modeling, analysis, and control of occupancy evolution in a large building are studied. The main concern is efficient evacuation of a building in the event of an emergency. Complexity arises from the building topology, uncertainty regarding the distribution of occupants, and the uncertain behavior of occupants. Relaxation techniques borrowed from queueing theory are employed to address these complexity issues. The techniques are used to model occupancy evolution during evacuation, obtain lower bounds on evacuation time, and construct control policies that instruct occupants so as to evacuate the building efficiently. The control solutions are based on recent generalizations of the MaxWeight policy for decentralized routing. The results are illustrated with simulations using realistic building models.
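A MaxWeight-style routing step can be sketched in a few lines. The example below is a simplified, illustrative version with unit travel times and a single exit treated as a zero-backlog sink; the node layout, corridor capacities, and serving rule are all assumptions for the sketch, and the thesis's decentralized generalization differs.

```python
# Toy MaxWeight-style evacuation step: each corridor (i, j) is served in order
# of backpressure (queue difference times capacity), moving occupants only when
# the backpressure is positive. The exit node is an absorbing sink whose
# entry in `queues` simply counts evacuated occupants.

def maxweight_step(queues, links, exit_node=0):
    def q(n):                                  # backlog seen by the policy
        return 0 if n == exit_node else queues[n]
    order = sorted(links.items(),
                   key=lambda kv: (q(kv[0][0]) - q(kv[0][1])) * kv[1],
                   reverse=True)               # largest backpressure first
    for (i, j), cap in order:
        if q(i) > q(j):                        # move only down the pressure gradient
            f = min(cap, queues[i])
            queues[i] -= f
            queues[j] += f
    return queues

# Line-graph building: node 2 -> node 1 -> exit (node 0).
queues = [0, 3, 5]                    # node 0 is the exit (evacuated count)
links = {(2, 1): 2, (1, 0): 4}        # corridor capacities per time step
for _ in range(4):
    queues = maxweight_step(queues, links)
print(queues)   # [8, 0, 0]: all occupants reach the exit in four steps
```

Even in this toy setting the policy needs only local queue information per corridor, which is the intuition behind using MaxWeight-type rules for decentralized evacuation routing.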
Issue Date: 2013-02-03
Rights Information: Copyright 2012 Kun Deng
Date Available in IDEALS: 2013-02-03
Date Deposited: 2012-12