Files in this item
File: TAGHVAEI-DISSERTATION-2019.pdf (1 MB)
Description: (no description provided)
Format: application/pdf
Description
Title:  Design and analysis of particle-based algorithms for nonlinear filtering and sampling 
Author(s):  Taghvaei, Amirhossein 
Director of Research:  Mehta, Prashant 
Doctoral Committee Chair(s):  Mehta, Prashant 
Doctoral Committee Member(s):  Hajek, Bruce; Laugesen, Richard; Raginsky, Maxim 
Department / Program:  Mechanical Sci & Engineering 
Discipline:  Mechanical Engineering 
Degree Granting Institution:  University of Illinois at Urbana-Champaign 
Degree:  Ph.D. 
Genre:  Dissertation 
Subject(s):  Nonlinear filtering; Monte-Carlo methods; Mean-field optimal control; Optimal transportation 
Abstract:  This thesis is concerned with the design and analysis of particle-based algorithms for two problems: (i) the nonlinear filtering problem; and (ii) the problem of sampling from a target distribution. The contributions for these two problems appear in Part I and Part II of the thesis. For the nonlinear filtering problem, the focus is on the feedback particle filter (FPF) algorithm. In the FPF algorithm, the empirical distribution of the particles is used to approximate the posterior distribution of the nonlinear filter. The particle update is implemented using a feedback control law that is designed such that the distribution of the particles, in the mean-field limit, is exactly equal to the posterior distribution. In Part I of this thesis, three separate problems related to the FPF methodology and algorithm are addressed. The first problem, addressed in Chapter 2 of the thesis, is concerned with gain function approximation in the FPF algorithm. The exact gain function is the solution of a Poisson equation involving a probability-weighted Laplacian. The numerical problem is to approximate this solution using only particles sampled from the probability distribution. A diffusion-map-based algorithm is presented for this problem. The algorithm is based on a reformulation of the Poisson equation as a fixed-point equation that involves the diffusion semigroup corresponding to the weighted Laplacian. The fixed-point problem is approximated with a finite-dimensional problem in two steps: In the first step, the semigroup is approximated with a Markov operator referred to as the diffusion map. In the second step, the diffusion map is approximated empirically, using particles, as a Markov matrix. A procedure for carrying out error analysis of the approximation is introduced, and certain asymptotic estimates for the bias and variance errors are derived. Some comparative numerical experiments are performed for a problem with a non-Gaussian distribution. 
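The empirical diffusion-map step described above, in which the semigroup is approximated by a Markov matrix built from the particles, can be sketched as follows. This is a minimal illustration, assuming a Gaussian kernel with bandwidth eps and the standard diffusion-map density normalization; the function name and interface are illustrative, not the thesis's own implementation.

```python
import numpy as np

def diffusion_map_markov_matrix(X, eps):
    """Empirical Markov-matrix approximation of the diffusion semigroup.

    X   : (N, d) array of particles sampled from the density
    eps : kernel bandwidth (time parameter of the semigroup)

    Builds a Gaussian kernel between particles, removes the
    sampling-density bias via the diffusion-map normalization, and
    row-normalizes so that each row of T is a probability vector.
    """
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    G = np.exp(-d2 / (4.0 * eps))             # Gaussian kernel matrix
    q = G.sum(axis=1)                         # kernel density estimate at each particle
    K = G / np.outer(np.sqrt(q), np.sqrt(q))  # density-bias correction
    T = K / K.sum(axis=1, keepdims=True)      # row-stochastic (Markov) matrix
    return T
```

The fixed-point equation for the gain function can then be iterated with T in place of the exact semigroup.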
The algorithm is applied and illustrated for a numerical filtering example. As part of the error analysis, some new results about the diffusion-map approximation are obtained. These include (i) new error estimates between the diffusion map and the exact semigroup, based on the Feynman-Kac representation of the semigroup; (ii) a spectral gap for the diffusion map, based on the Foster-Lyapunov function method from the theory of stability of Markov processes; and (iii) error estimates for the empirical approximation of the diffusion map. The second problem, addressed in Chapter 3 of the thesis, is motivated by the so-called uniqueness issue of the FPF control law. The control law in FPF is designed such that the distribution of the particles, in the mean-field limit, is exactly equal to the posterior distribution. However, it has been noted in the literature that the FPF control law is not unique. The objective of this research is to describe a systematic approach to obtain a unique control law for FPF. In Chapter 3, the optimality criterion from optimal transportation theory is used to identify a unique control law for FPF in the linear Gaussian setting. Two approaches are outlined. The first approach is based on a time-stepping optimization procedure: we consider a time discretization of the problem, construct a discrete-time stochastic process by composing a sequence of optimal transport maps between the posterior distributions at consecutive time instants, and then take the limit as the time step-size goes to zero. We obtain an explicit formula for the resulting control law in the linear Gaussian setting. The control law is deterministic and requires the covariance matrix of the resulting stochastic process to be invertible. We present an alternative approach, which allows for singular covariance matrices. The resulting control law has additional stochastic terms, which vanish when the covariance matrix is nonsingular. 
The second construction is important for the finite-N implementation of the algorithm, where the empirical covariance matrix might be singular. The third problem, addressed in Chapter 4, is concerned with the convergence and error analysis of the FPF algorithm. It is known that in the mean-field limit, the distribution of the particles is equal to the posterior distribution. However, little is known about the convergence of the finite-N algorithm to the mean-field limit. We consider the linear Gaussian setting and study two types of FPF algorithm: the deterministic linear FPF and the stochastic linear FPF. The important question in the linear Gaussian setting concerns the convergence of the empirical mean and covariance of the particles to the exact mean and covariance given by the Kalman filter. We derive equations for the evolution of the empirical mean and covariance of the finite-N system for both algorithms. Remarkably, for the deterministic linear FPF, the equations for the mean and covariance are identical to those of the Kalman filter. This allows strong conclusions on convergence and error properties under the assumption that the linear system is controllable and observable. It is shown that the error converges to zero even with a finite number of particles. For the stochastic linear FPF, the equations for the empirical mean and covariance involve additional stochastic terms. Error estimates are obtained for the empirical mean and covariance under the stronger assumption that the linear system is stable and fully observable. We also present a propagation-of-chaos error analysis for both algorithms. Part II of the thesis is concerned with the sampling problem, where the objective is to sample from an unnormalized target probability distribution. The problem is formulated as an optimization problem on the space of probability distributions, where the objective is to minimize the relative entropy with respect to the target distribution. 
The gradient flow with respect to the Riemannian metric induced by the Wasserstein distance is known to lead to Markov chain Monte Carlo (MCMC) algorithms based on the Langevin equation. The main contribution is to present a methodology and numerical algorithms for constructing accelerated gradient flows on the space of probability distributions. In particular, the recent variational formulation of accelerated methods in (Wibisono et al., 2016) is extended from vector-valued variables to probability distributions. The variational problem is modeled as a mean-field optimal control problem. The maximum principle of optimal control theory is used to derive Hamilton's equations for the optimal gradient flow. A quantitative estimate on the asymptotic convergence rate is provided based on a Lyapunov function construction, when the objective functional is displacement convex. Two numerical approximations are presented to implement Hamilton's equations as a system of N interacting particles. The algorithm is numerically illustrated and compared with MCMC and Hamiltonian Monte Carlo algorithms. 
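The Langevin-equation MCMC algorithm mentioned above, whose mean-field limit is the Wasserstein gradient flow of the relative entropy, can be sketched as an unadjusted Langevin (Euler-Maruyama) step. The sketch below is illustrative only: the standard-Gaussian target, the step size, and the function names are assumptions, not the accelerated algorithm developed in the thesis.

```python
import numpy as np

def langevin_step(x, grad_log_pi, h, rng):
    """One Euler-Maruyama step of the Langevin SDE
    dX = grad log pi(X) dt + sqrt(2) dB,
    whose law follows the Wasserstein gradient flow of KL(. || pi)
    as the step size h goes to zero."""
    return x + h * grad_log_pi(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)

# Example: sample a standard Gaussian target pi(x) proportional to
# exp(-x^2 / 2), for which grad log pi(x) = -x.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)            # an ensemble of N particles
for _ in range(2000):
    x = langevin_step(x, lambda v: -v, 0.01, rng)
```

After many steps the ensemble is approximately distributed according to the target, up to a discretization bias of order h.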
Issue Date:  2019-07-12 
Type:  Text 
URI:  http://hdl.handle.net/2142/105673 
Rights Information:  Copyright 2019 Amirhossein Taghvaei 
Date Available in IDEALS:  2019-11-26 
Date Deposited:  2019-08 
This item appears in the following Collection(s)

Dissertations and Theses  Mechanical Science and Engineering

Graduate Dissertations and Theses at Illinois