Files in this item

File: AGARWAL-THESIS-2019.pdf (68 MB)
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: Adversarial attacks and defenses for generative models
Author(s): Agarwal, Rishika
Advisor(s): Koyejo, Sanmi; Li, Bo
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: M.S.
Genre: Thesis
Subject(s): adversarial machine learning, generative models, attacks, defenses
Abstract: Adversarial machine learning is a field of research at the intersection of machine learning and security that studies the vulnerabilities which make machine learning models susceptible to attack. Attacks are mounted by carefully designing a perturbed input that appears benign but causes the model to behave in unexpected ways. To date, most work on adversarial attacks and defenses has targeted classification models. However, generative models are susceptible to attacks as well, and thus warrant attention. We study attacks on generative models such as autoencoders and variational autoencoders, compare the relative effectiveness of the attack methods, and explore some simple defenses against them.
Issue Date: 2019-04-26
Type: Text
URI: http://hdl.handle.net/2142/104943
Rights Information: Copyright 2019 Rishika Agarwal
Date Available in IDEALS: 2019-08-23
Date Deposited: 2019-05
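
For context on the kind of attack the abstract describes (this is an illustrative sketch, not a method taken from the thesis): a common gradient-based approach perturbs an input so that an autoencoder's reconstruction drifts toward an attacker-chosen target while the perturbation stays small. The sketch below assumes a PyTorch autoencoder model mapping inputs in [0, 1] to reconstructions of the same shape; the function name attack_autoencoder, the target x_tgt, and the step size eps are all hypothetical.

    import torch
    import torch.nn.functional as F

    def attack_autoencoder(model, x, x_tgt, eps=0.05):
        # Illustrative one-step, FGSM-style attack: nudge the input x so that
        # the autoencoder's reconstruction moves toward the target x_tgt.
        x_adv = x.clone().detach().requires_grad_(True)
        recon = model(x_adv)             # forward pass through the autoencoder
        loss = F.mse_loss(recon, x_tgt)  # distance from the attacker's target output
        loss.backward()
        # Step against the gradient to reduce the loss, then clamp the result
        # back to the valid input range so the perturbed input stays plausible.
        return (x_adv - eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()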

