Abstract: This thesis considers optimization problems defined over a network of nodes, where each node knows only part of the objective function. We are motivated by broad applications of these problems within engineering and the sciences, where problems are characterized by either complex networks with a large number of nodes or massive amounts of data. Algorithms for solving these problems must run in parallel across the nodes and rely only on local computation and communication, necessitating the development of distributed algorithms.
Our interest, therefore, is in distributed methods for solving networked optimization problems, with a focus on distributed gradient algorithms. In particular, we move beyond existing results to significantly enhance the performance and reduce the complexity of distributed gradient methods, while taking practical issues, such as communication delays and resource uncertainty, into account. Our goal is to bridge the gap between theory and practice, leading to substantial improvements in the performance of these methods on real-world problems.
The remainder of this thesis focuses on three main thrusts. First, we study the impact of communication delays, an inevitable issue in distributed systems, on the performance of distributed gradient algorithms. Our results address a notable omission in the existing literature, where delays are often ignored. Second, we study different variants of distributed gradient algorithms and show that, under certain conditions, we can improve their convergence. Finally, we study an important problem within engineering and computer science, namely, network resource allocation. For solving this problem, we propose distributed Lagrangian methods and show that they are robust to resource uncertainty. In addition, we design a novel algorithm, namely, the distributed gradient balancing protocol, for solving a special case of network resource allocation problems. We show that our algorithm achieves a quadratic convergence time, improving the convergence time of existing algorithms by a factor of n, the size of the network.
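To make the setting concrete, the following is a minimal sketch of the classic distributed gradient descent (DGD) iteration on which the methods above build, not the specific algorithms developed in this thesis. All names, the ring topology, the quadratic local objectives f_i(x) = 0.5 (x - a_i)^2, and the step size are illustrative assumptions: each node mixes its iterate with its neighbors' iterates via a doubly stochastic weight matrix, then takes a step along its own local gradient.

```python
import numpy as np

def dgd(a, steps=2000, alpha=0.01):
    """Distributed gradient descent sketch (hypothetical example).

    Node i privately holds f_i(x) = 0.5 * (x - a[i])**2, so the
    minimizer of sum_i f_i is the average of a. Nodes communicate
    only with ring neighbors through the mixing matrix W.
    """
    n = len(a)
    # Doubly stochastic mixing matrix for a ring: equal weights on
    # self and the two neighbors (an assumed topology).
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 1 / 3
        W[i, (i - 1) % n] = 1 / 3
        W[i, (i + 1) % n] = 1 / 3
    x = np.zeros(n)  # each node's local estimate of the solution
    for _ in range(steps):
        grad = x - a          # each node's local gradient at its own iterate
        x = W @ x - alpha * grad  # consensus step + local gradient step
    return x

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x = dgd(a)
# With a constant step size, all nodes agree up to O(alpha) on a value
# near the global minimizer mean(a) = 3.0.
```

With a constant step size, DGD converges only to a neighborhood of the optimum whose radius shrinks with alpha; diminishing step sizes give exact convergence at a slower rate. This sketch also assumes synchronous, delay-free communication, which is precisely the idealization the first thrust of the thesis relaxes.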