Title: Adversarial methods in machine learning - a federated defense and an attack
Author(s): Shah, Devansh
Advisor(s): Li, Bo
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Machine Learning; Adversarial Learning; Federated Learning; Computer Vision
Abstract: Deep neural networks have recently been shown to provide state-of-the-art results for several machine learning tasks in computer vision and natural language processing. These developments make the security aspects of machine learning increasingly important. Unfortunately, neural networks are vulnerable to adversarial examples — inputs that are almost indistinguishable from natural data and yet elicit misclassification from the network. The focus of this thesis is to investigate the space of adversarial examples in novel applications. We first study Adversarial Training (AT), a defense against adversarial examples, in a federated learning setup. Federated learning is a paradigm for multi-round model training over a distributed corpus of agent data. We propose FedDynAT, a novel algorithm for performing AT in a federated setting. Through extensive experimentation, we show that FedDynAT significantly improves both natural and adversarial accuracy, as well as model convergence time, by reducing model drift. We next formulate an attack against 3D reconstruction models. While adversarial examples for 2D images and convolutional neural networks have been extensively studied, less attention has been paid to attacking 3D reconstruction models. 3D reconstruction models have been widely applied to various domains, such as e-commerce, architecture, CAD, virtual reality, and medical processes. It is therefore of great importance to explore the vulnerabilities of such 3D models and to design methods to improve their robustness in practice. We propose a novel 3D Spatial-Pixel Joint Optimization attack (3D-SPJO) that generates adversarial 2D inputs against a 3D reconstruction model, causing it to reconstruct an attacker-specified 3D voxelized grid. We conduct extensive ablation studies evaluating 3D-SPJO on 3D-R2N2 and Pix2Vox, state-of-the-art 3D reconstruction models trained on the ShapeNet dataset.
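The abstract defines adversarial examples as inputs nearly indistinguishable from natural data that nonetheless flip a model's prediction. A minimal sketch of that idea, not taken from the thesis, is a single fast-gradient-sign (FGSM) step on a toy linear softmax classifier; the function name, weights, and epsilon below are illustrative assumptions, not the attack or models studied in the work.

```python
import numpy as np

def fgsm_perturb(W, x, true_label, eps):
    """One FGSM step on a linear softmax model: x_adv = x + eps * sign(dL/dx),
    where L is the cross-entropy loss of the true class. (Illustrative toy.)"""
    logits = W @ x
    # Softmax probabilities (numerically stabilized).
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.zeros_like(p)
    onehot[true_label] = 1.0
    # Gradient of cross-entropy w.r.t. the input, via logits = W @ x.
    grad_x = W.T @ (p - onehot)
    return x + eps * np.sign(grad_x)

# Toy 2-class model: class 0 weights x[0] heavily, class 1 weights x[1].
W = np.array([[2.0, 0.0],
              [0.0, 1.0]])
x = np.array([0.5, 0.9])                      # correctly classified as class 0
x_adv = fgsm_perturb(W, x, true_label=0, eps=0.1)
print((W @ x).argmax(), (W @ x_adv).argmax())  # prediction flips under a small perturbation
```

Each coordinate of `x_adv` moves at most `eps` from `x`, yet the predicted class changes — the property that motivates defenses such as adversarial training, which trains on perturbed inputs like these.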
Issue Date: 2021-04-26
Rights Information: Copyright 2021 Devansh Shah
Date Available in IDEALS: 2021-09-17
Date Deposited: 2021-05