Files in this item

File: ECE499-Sp2017-wang-Brian.pdf (685kB)
Description: (no description provided)
Format: PDF (application/pdf)
Access: Restricted to U of Illinois

Description

Title: Privacy in distributed machine learning
Author(s): Wang, Brian
Contributor(s): Vaidya, Nitin H.
Subject(s): privacy
distributed system
machine learning
convolutional neural network
MNIST
Abstract: This research explores ways to use distributed machine learning effectively while preserving privacy. Distributed learning was performed on a client-server architecture in which each client is an individual learner training on its own dataset and the server exchanges gradients between learners. Each learner used a convolutional neural network to learn the MNIST dataset (handwritten digits). Exchanging only gradients, rather than training data, provides some privacy on its own; additional privacy was added by multiplying the gradients by random weights. The first step of the research was to replicate results from Shokri and Shmatikov’s 2015 paper on privacy-preserving deep learning, which primarily worked with multiple clients and a single server. This research extends that architecture to multiple servers and multiple clients and adds random weights to the gradients. The research produced empirical results demonstrating the effectiveness of the distributed machine learning algorithm: it maintained acceptable classification accuracy while providing a degree of privacy. In addition to Shokri and Shmatikov’s paper, this research draws upon Gade and Vaidya’s 2016 paper “Distributed Optimization for Client-Server Architecture with Negative Gradient Weights”. (A minimal sketch of the gradient-weighting step follows this record.)
Issue Date: 2017-05
Genre: Other
Type: Text
Language: English
URI: http://hdl.handle.net/2142/97890
Date Available in IDEALS: 2017-08-28
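
The gradient-weighting mechanism described in the abstract can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the thesis's code: the convolutional network and MNIST are replaced by a toy linear softmax classifier on synthetic data, the multiple-server extension is collapsed into a single aggregator, and the uniform(0.5, 1.5) weight range is hypothetical. What it shows is the shape of the protocol: each learner computes a gradient on its private shard, multiplies it by a random weight, and uploads only the result; the server averages the uploads into the shared model.

import numpy as np

rng = np.random.default_rng(0)

N_LEARNERS, N_FEATURES, N_CLASSES = 4, 64, 10
ROUNDS, LR = 200, 0.5

true_w = rng.normal(size=(N_FEATURES, N_CLASSES))  # ground truth for synthetic labels

def make_shard(n=256):
    # Synthetic stand-in for one learner's private MNIST shard.
    x = rng.normal(size=(n, N_FEATURES))
    y = (x @ true_w).argmax(axis=1)
    return x, y

shards = [make_shard() for _ in range(N_LEARNERS)]
w = np.zeros((N_FEATURES, N_CLASSES))  # model parameters shared through the server

def softmax_grad(w, x, y):
    # Gradient of mean cross-entropy loss for a linear softmax classifier.
    logits = x @ w
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(y)), y] -= 1.0
    return x.T @ p / len(y)

for _ in range(ROUNDS):
    uploads = []
    for x, y in shards:
        g = softmax_grad(w, x, y)
        # Privacy step from the abstract: each learner multiplies its gradient
        # by a random weight before uploading, so the server never sees the
        # raw gradient. The uniform(0.5, 1.5) range is an illustrative assumption.
        uploads.append(rng.uniform(0.5, 1.5) * g)
    # Server role: average the weighted gradients and update the shared model.
    w -= LR * np.mean(uploads, axis=0)

xt, yt = make_shard(2000)
print("held-out accuracy:", ((xt @ w).argmax(axis=1) == yt).mean())

Because the random weights have mean 1, the averaged update is unbiased in expectation, which is consistent with the abstract's finding that classification accuracy can stay acceptable while individual uploads are obscured.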

