Files in this item

File: SU-DISSERTATION-2017.pdf (927 kB)
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: Defending distributed systems against adversarial attacks: consensus, consensus-based learning, and statistical learning
Author(s): Su, Lili
Director of Research: Vaidya, Nitin H.
Doctoral Committee Chair(s): Vaidya, Nitin H.
Doctoral Committee Member(s): Attiya, Hagit; Hajek, Bruce; Srikant, R.
Department / Program: Electrical & Computer Engineering
Discipline: Electrical & Computer Engineering
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): fault-tolerance; adversaries; security; consensus; optimization; hypothesis testing; statistical learning
Abstract: A distributed system consists of networked components that interact with each other to achieve a common goal. Given the ubiquity of distributed systems and their vulnerability to adversarial attacks, it is crucial to design systems that are provably secure. In this dissertation, we propose and explore the problems of performing consensus, consensus-based learning, and statistical learning in the presence of malicious components.
(1) Consensus: We explore the influence of communication range on the computability of iterative approximate consensus. In particular, we characterize a tight topological condition on the network under which consensus is achievable in the presence of Byzantine components, closing a gap left by previous work.
(2) Consensus-based learning: We propose what are, to the best of our knowledge, the first consensus-based Byzantine-tolerant learning problems: consensus-based multi-agent optimization and consensus-based distributed hypothesis testing. For the former, we characterize the unavoidable performance degradation and design efficient algorithms that achieve optimal fault tolerance. For the latter, we propose, as far as we know, the first learning algorithm under which the non-faulty agents can collaboratively identify the underlying truth.
(3) Statistical learning: Finally, we explore distributed statistical learning, where the distributed system is captured by the server-client model. We develop a distributed machine learning algorithm that (i) tolerates Byzantine failures, (ii) accurately learns a highly complex model from low local data volume, and (iii) converges exponentially fast using a logarithmic number of communication rounds.
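The Byzantine-tolerance ideas in the abstract can be made concrete with two brief Python sketches. First, a minimal sketch of one round of iterative approximate consensus using a trimmed-mean update rule, a standard device for limiting Byzantine influence; the function name, the parameter f (an assumed upper bound on the number of Byzantine neighbors), and the specific update rule are illustrative assumptions, not necessarily the algorithm analyzed in the dissertation.

def trimmed_mean_update(own_value, neighbor_values, f):
    """One consensus round: drop the f smallest and f largest
    received values (bounding the influence of up to f Byzantine
    neighbors), then average the survivors with the node's own value."""
    if len(neighbor_values) <= 2 * f:
        raise ValueError("need more than 2*f neighbor values to trim safely")
    trimmed = sorted(neighbor_values)[f:len(neighbor_values) - f]
    return (own_value + sum(trimmed)) / (1 + len(trimmed))

# Example: a node with value 0.5 hears from five neighbors, at most
# one of which is Byzantine (f = 1); the extreme reports are discarded.
print(trimmed_mean_update(0.5, [0.2, 0.4, 0.6, 0.9, 100.0], f=1))  # 0.6

Second, a hypothetical sketch of robust aggregation in the server-client model: the server combines client gradient reports with a coordinate-wise median rather than a plain average, so a minority of Byzantine clients cannot pull the update arbitrarily far. Again, this is a generic robust-aggregation illustration, not the dissertation's specific algorithm.

import numpy as np

def robust_aggregate(client_gradients):
    """client_gradients has shape (num_clients, dim); return the
    coordinate-wise median across clients, which a minority of
    arbitrarily corrupted rows cannot move very far."""
    return np.median(client_gradients, axis=0)

# Example: three honest clients report similar gradients; one
# Byzantine client reports garbage, but the median stays close
# to the honest consensus of roughly [1.0, -1.0].
grads = np.array([
    [0.9, -1.1],   # honest
    [1.0, -1.0],   # honest
    [1.1, -0.9],   # honest
    [1e6, 1e6],    # Byzantine
])
print(robust_aggregate(grads))  # approximately [1.05, -0.95]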
Issue Date: 2017-07-12
Type: Thesis
URI: http://hdl.handle.net/2142/98441
Rights Information: Copyright 2017 Lili Su
Date Available in IDEALS: 2017-09-29
Date Deposited: 2017-08

