Files in this item

File: XU-DISSERTATION-2016.pdf (2 MB)
Description: (no description provided)
Format: PDF (application/pdf)
Description

Title: Information-theoretic limitations of distributed information processing
Author(s): Xu, Aolin
Director of Research: Raginsky, Maxim
Doctoral Committee Chair(s): Raginsky, Maxim
Doctoral Committee Member(s): Hajek, Bruce; Milenkovic, Olgica; Srikant, Rayadurgam; Wu, Yihong
Department / Program: Electrical & Computer Engineering
Discipline: Electrical & Computer Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): distributed information processing; fundamental limits; information theory; decentralized estimation; Bayes risk; small ball probability; strong data processing inequality; distributed function computation; computation time; network reduction; diameter of network; statistical learning; adaptive composition; stability of learning algorithms; generalization error; mutual information; Gibbs algorithm; adaptive data analytics
Abstract: In a generic distributed information processing system, a number of agents connected by communication channels aim to accomplish a task collectively through local communications. The fundamental limits of distributed information processing problems depend not only on the intrinsic difficulty of the task, but also on the communication constraints imposed by the system's distributed nature. In this thesis, we reveal these dependencies quantitatively under information-theoretic frameworks. We consider three typical distributed information processing problems: decentralized parameter estimation, distributed function computation, and statistical learning under adaptive composition. For the first two problems, we derive converse results on the Bayes risk and the computation time, respectively. For the last problem, we first study the relationship between the generalization capability of a learning algorithm and its stability property, measured by the mutual information between its input and output, and then derive achievability results on the generalization error of adaptively composed learning algorithms. In all cases, we obtain general results on the fundamental limits with respect to a general model of the problem, so that the results can be applied to various specific scenarios. Our information-theoretic analyses also provide general approaches to inferring global properties of a distributed information processing system from local properties of its components.
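To give a flavor of the mutual-information notion of stability mentioned in the abstract: in this line of work (and in the broader literature it belongs to), the expected generalization error of a learning algorithm is controlled by the mutual information between the training sample and the algorithm's output. A representative bound of this type, stated here as a sketch of the general framework rather than as a quotation of the thesis's exact theorem, reads:

```latex
% Sketch of a mutual-information generalization bound.
% S = (Z_1, ..., Z_n): the training sample of n i.i.d. examples from mu.
% W: the hypothesis output by the (possibly randomized) learning algorithm P_{W|S}.
% Assumption: the loss \ell(w, Z), Z ~ mu, is sigma-sub-Gaussian for every w.
\[
  \bigl| \mathbb{E}\bigl[ L_\mu(W) - L_S(W) \bigr] \bigr|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},
\]
% where L_mu(w) is the population risk, L_S(w) the empirical risk on S,
% and I(S; W) the mutual information between input and output of the algorithm.
```

Informally: the less the output hypothesis "remembers" about the specific training sample (small I(S;W)), the smaller the gap between empirical and population risk, which is the stability-generalization connection the abstract refers to.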
Issue Date: 2016-11-30
Type: Thesis
URI: http://hdl.handle.net/2142/95358
Rights Information: Copyright 2016 Aolin Xu
Date Available in IDEALS: 2017-03-01
Date Deposited: 2016-12
