Files in this item

File: Soomin_Lee.pdf (635 kB)
Format: application/pdf
Description: (no description provided)

Description

Title: Optimization over networks: efficient algorithms and analysis
Author(s): Lee, Soomin
Director of Research: Nedich, Angelia
Doctoral Committee Chair(s): Nedich, Angelia
Doctoral Committee Member(s): Roth, Dan; Milenkovic, Olgica; Veeravalli, Venugopal V.
Department / Program: Electrical & Computer Eng
Discipline: Electrical & Computer Engr
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Distributed optimization
Random projections
Large-scale optimization
Machine learning
Support vector machines
Convergence analysis
Random gossip networks
Consensus
Multi-agent systems
Abstract: A number of important problems arising in various application domains can be formulated as a distributed convex constrained minimization problem over a multi-agent network. The problem is typically posed as minimizing a sum of convex objective functions over an intersection of convex constraint sets.

The first part of this thesis develops and analyzes efficient distributed algorithms for a constrained convex optimization problem over a multi-agent network in which each agent has its own objective function and constraint set. We propose gradient descent algorithms with random projections that use various communication protocols. First, we present a distributed random projection (DRP) algorithm in which each agent exchanges local information only with its immediate neighbors at each iteration. Under reasonable assumptions, we prove that the iterates of all agents converge to a common point in the optimal set with probability 1. We also consider a variant of the method that uses a mini-batch of consecutive random projections and establish its convergence. Experiments on distributed support vector machines demonstrate fast convergence of the DRP algorithm: the number of iterations required for convergence is much smaller than the number needed to scan over all training samples just once. Second, we propose an asynchronous gossip-based random projection (GRP) algorithm that solves the distributed problem using gossip-type communications and local computations. We analyze the convergence properties of the algorithm for both an uncoordinated diminishing stepsize and a constant stepsize. For a diminishing stepsize, we prove that the iterates of all agents converge to the same optimal point with probability 1. For a constant stepsize, we establish an error bound on the expected distance from the iterates of the algorithm to the optimal point. We also consider a mini-batch variant of GRP and establish its convergence. Furthermore, we provide simulation results on a distributed robust model predictive control problem.

The second part of the thesis presents an efficient epoch gradient descent algorithm for obtaining fast and exact solutions of linear support vector machines (SVMs). SVMs penalized with the popular hinge loss are strongly convex, but their objective does not have a Lipschitz continuous gradient. Using a smooth approximation technique, we obtain SVM formulations that are both strongly convex and have a Lipschitz continuous gradient.
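The DRP scheme described in the abstract combines a neighbor-averaging step, a local gradient step, and a projection onto one randomly sampled component of the local constraint set. The sketch below illustrates this pattern under simplifying assumptions (synchronous updates, a doubly stochastic weight matrix `W`, and user-supplied projection operators); it is not the thesis's implementation, and the names `drp_step`, `grads`, and `projections` are hypothetical.

```python
import numpy as np

def drp_step(x, W, grads, projections, rng, alpha=0.05):
    """One synchronous distributed-random-projection iteration for all agents.

    x: (m, n) array, row i is agent i's current iterate
    W: (m, m) doubly stochastic mixing matrix (immediate-neighbor weights)
    grads: list of m callables; grads[i](v) is the gradient of agent i's objective
    projections: list per agent of callables projecting onto constraint components
    """
    v = W @ x  # consensus step: mix with immediate neighbors
    x_new = np.empty_like(x)
    for i in range(x.shape[0]):
        # gradient step on agent i's local objective
        y = v[i] - alpha * grads[i](v[i])
        # project onto one randomly sampled component of agent i's constraint set
        proj = projections[i][rng.integers(len(projections[i]))]
        x_new[i] = proj(y)
    return x_new
```

For example, two agents with objectives (x-1)^2/2 and (x+1)^2/2 and a shared box constraint [-2, 2] drive their iterates toward a small neighborhood of the global optimum 0 when a small constant stepsize is used, consistent with the constant-stepsize error bound discussed above.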
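The second part of the abstract relies on replacing the nonsmooth hinge loss with a smooth surrogate whose gradient is Lipschitz continuous. The abstract does not specify which smoothing is used; the block below sketches one common choice, a Huber-style smoothed hinge, purely for illustration.

```python
import numpy as np

def smoothed_hinge(z, gamma=0.5):
    """Huber-style smooth approximation of the hinge loss max(0, 1 - z).

    Piecewise: zero for z >= 1, quadratic near the kink, and linear for
    z <= 1 - gamma. The gradient is Lipschitz continuous with constant
    1/gamma, while the plain hinge loss has no Lipschitz gradient.
    Returns (loss, gradient), both elementwise over z.
    """
    z = np.asarray(z, dtype=float)
    loss = np.where(z >= 1.0, 0.0,
           np.where(z <= 1.0 - gamma, 1.0 - z - gamma / 2.0,
                    (1.0 - z) ** 2 / (2.0 * gamma)))
    grad = np.where(z >= 1.0, 0.0,
           np.where(z <= 1.0 - gamma, -1.0, -(1.0 - z) / gamma))
    return loss, grad
```

Adding an L2 regularization term to the averaged smoothed loss then yields an SVM objective that is both strongly convex and has a Lipschitz continuous gradient, the setting in which epoch gradient descent methods apply.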
Issue Date: 2013-05-24
URI: http://hdl.handle.net/2142/44389
Rights Information: Copyright 2013 Soomin Lee
Date Available in IDEALS: 2013-05-24
Date Deposited: 2013-05

