Files in this item



application/pdf — ECE499-Sp2019-vijitbenjaronk.pdf (303 kB), restricted to U of Illinois
(no description provided)


Title: Certified adversarial robustness via randomized discretization
Author(s): Vijitbenjaronk, W. Duke
Contributor(s): Telgarsky, Matus
Subject(s): adversarial robustness
randomized discretization
accuracy of deep learning models
Abstract: Modern machine learning algorithms reach astonishingly high performance on a variety of useful tasks. However, small adversarial perturbations have been shown to drastically reduce the accuracy of deep learning models not specifically trained to resist them. This problem is of practical significance due to security concerns about models deployed in industry, and of theoretical significance due to its connections to the underlying themes of optimization and generalization. In this paper, we propose and analyze a simple and computationally efficient defense against adversarial attacks, based on randomized discretization to a relatively small set of points, that is agnostic to the underlying classifier. We show that this strategy yields a lower bound on classification accuracy using tools from computational geometry and information theory. Unlike prior work, the proposed strategy allows data-dependent accuracy guarantees that are easily estimable at inference time, and exhibits a weaker dependence on the dimensionality of its inputs.
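The abstract describes the defense only at a high level. A minimal sketch of one plausible reading — add random noise to the input, then snap each value to the nearest point in a small fixed codebook before classification — is shown below. The function name, the Gaussian noise model, and the per-pixel codebook are illustrative assumptions, not details taken from the thesis.

```python
import numpy as np

def randomized_discretize(x, codebook, sigma=0.1, rng=None):
    """Hypothetical sketch: perturb the input with Gaussian noise,
    then discretize each entry to its nearest codebook point.
    The classifier downstream is untouched (classifier-agnostic)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = x + rng.normal(0.0, sigma, size=x.shape)
    # For each entry, index of the nearest codebook value
    idx = np.abs(noisy[..., None] - codebook).argmin(axis=-1)
    return codebook[idx]

# Small set of points (e.g. 8 gray levels for image pixels in [0, 1])
codebook = np.linspace(0.0, 1.0, 8)
x = np.random.default_rng(1).random((4, 4))
x_disc = randomized_discretize(x, codebook)
```

Because the randomization happens at inference time and the output lives in a small discrete set, an attacker's small perturbation of `x` can only change the discretized output with limited probability, which is the kind of effect the information-theoretic lower bound in the abstract would quantify.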
Issue Date: 2019-05
Date Available in IDEALS: 2019-06-17
