Files in this item



application/pdf: VIJITBENJARONK-THESIS-2020.pdf (338 kB), Restricted to U of Illinois


Title: Provable stability defenses for targeted data poisoning
Author(s): Vijitbenjaronk, Warut D.
Advisor(s): Koyejo, Oluwasanmi
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): data poisoning; machine learning; robust machine learning; stable support vector machine; algorithmic stability; uniform stability; adversarial machine learning
Abstract: Modern machine learning systems are often trained on massive, crowdsourced datasets. Because this data cannot feasibly be verified, these systems may be susceptible to data poisoning attacks, in which malicious users inject false training data to influence the learned model. While recent work has focused primarily on the untargeted case, where the attacker's goal is to increase overall error, much less is understood about the theoretical underpinnings of targeted data poisoning attacks. These attacks aim to change the learned model's predictions on only a few targeted examples without raising suspicion. We propose algorithmic stability as a sufficient condition for robustness against data poisoning, construct upper bounds on the possible effectiveness of data poisoning attacks against stable algorithms, and propose an algorithm that provides resilience against popular classes of attacks. Empirically, we report results on the MNIST 1-7 image classification dataset and the TREC 2007 spam detection dataset that confirm our theoretical findings.
Issue Date: 2020-07-22
Rights Information: Copyright 2020 Warut D. Vijitbenjaronk
Date Available in IDEALS: 2020-10-07
Date Deposited: 2020-08
