Files in this item

File: SRISAKAOKUL-THESIS-2020.pdf (710 kB)
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: Multi-model-based defense against adversarial examples for neural networks
Author(s): Srisakaokul, Siwakorn
Advisor(s): Xie, Tao; Li, Bo
Department / Program: Computer Science
Discipline: Computer Science
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: M.S.
Genre: Thesis
Subject(s): security and privacy, machine learning
Abstract: Neural networks have recently been used to solve many real-world tasks, such as image recognition, and can achieve high effectiveness on these tasks. Despite their popularity in many applications, neural network models have been found to be vulnerable to adversarial examples, i.e., carefully crafted examples that aim to mislead machine learning models. Adversarial examples can pose risks to safety/security-critical applications. Existing defense approaches remain vulnerable to emerging attacks, especially in a white-box attack scenario. In this thesis, we focus on mitigating adversarial attacks by making machine learning models more robust against them. In particular, we propose a new defense approach, named MulDef, based on robustness diversity. Our approach consists of (1) a general defense framework based on diverse models and (2) a technique for generating diverse models to achieve high defense capability. Our framework generates multiple models (constructed from the target model) to form a model family. The model family is designed to achieve robustness diversity (i.e., an adversarial example crafted to attack one model may not succeed in attacking other models in the family). At runtime, a model is randomly selected from the family to process each input example. Our evaluation results show that MulDef (with only up to 5 models in the family) can substantially improve the target model's robustness against adversarial examples by 19-78% in a white-box attack scenario across the MNIST, CIFAR-10, and Tiny ImageNet datasets, while maintaining similar accuracy on legitimate examples. Our general framework can also inspire rich future research on constructing a desirable model family that achieves higher robustness diversity. (A minimal illustrative sketch of the runtime model-selection step appears after this record.)
Issue Date: 2020-05-11
Type: Thesis
URI: http://hdl.handle.net/2142/108026
Rights Information: Copyright 2020 Siwakorn Srisakaokul
Date Available in IDEALS: 2020-08-26
Date Deposited: 2020-05
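
The defense summarized in the abstract hinges on one runtime mechanism: for each input, a model is drawn at random from the family, so a white-box attacker cannot target a single fixed model. Below is a minimal sketch of that selection step, assuming a PyTorch-style setting; the class and method names (MulDefEnsemble, predict) are hypothetical illustrations, not the thesis implementation.

import random
import torch

class MulDefEnsemble:
    """Hold a family of diversely trained models and pick one
    uniformly at random to classify each input (MulDef-style sketch)."""

    def __init__(self, model_family):
        # model_family: list of torch.nn.Module classifiers trained for
        # robustness diversity (an adversarial example that fools one
        # model should not reliably fool the others).
        self.model_family = model_family

    @torch.no_grad()
    def predict(self, x):
        # Per-input random selection: the attacker does not know which
        # family member will process a given example.
        model = random.choice(self.model_family)
        model.eval()
        return model(x).argmax(dim=-1)

In this sketch the selection is uniform over the family; the thesis evaluates families of up to 5 models, and any alternative family-construction or weighting scheme would replace random.choice accordingly.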

