Abstract: Neural networks have recently been used to solve many real-world tasks, such as image recognition, and can achieve high effectiveness on these tasks. Despite their popularity in many applications, neural network models have been found to be vulnerable to adversarial examples, i.e., carefully crafted examples that aim to mislead machine learning models. Adversarial examples pose potential risks to safety- and security-critical applications. Existing defense approaches remain vulnerable to emerging attacks, especially in a white-box attack scenario. In this thesis, we focus on mitigating adversarial attacks by making machine learning models more robust against them.
In particular, we propose a new defense approach, named MulDef, based on robustness diversity. Our approach consists of (1) a general defense framework based on diverse models and (2) a technique for generating diverse models to achieve high defense capability. Our framework generates multiple models (constructed from the target model) to form a model family. The family is designed for robustness diversity: an adversarial example crafted to attack one model may not succeed in attacking other models in the family. At runtime, a model is randomly selected from the family to process each input example. Our evaluation results show that MulDef (with only up to 5 models in the family) can substantially improve the target model's robustness against adversarial examples, by 19-78% in a white-box attack scenario on the MNIST, CIFAR-10, and Tiny ImageNet datasets, while maintaining similar accuracy on legitimate examples. Our general framework can also inspire rich future research on constructing model families that achieve higher robustness diversity.
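The runtime behavior described above can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the class name, the callable-as-model interface, and the toy stand-in models are all assumptions introduced here for clarity.

```python
import random

class MulDefEnsemble:
    """Hypothetical sketch of MulDef's runtime defense: hold a family of
    diverse models and route each input to a randomly chosen member."""

    def __init__(self, model_family):
        # model_family: list of callables, each mapping an input to a label.
        # In practice these would be diverse models built from the target model.
        self.model_family = model_family

    def predict(self, example):
        # Random per-input selection: an adversarial example crafted against
        # one member may fail against the member actually used at runtime.
        model = random.choice(self.model_family)
        return model(example)

# Toy usage with stand-in "models" (plain functions returning a label 0 or 1):
family = [lambda x: x % 2, lambda x: (x + 1) % 2, lambda x: 0]
ensemble = MulDefEnsemble(family)
label = ensemble.predict(7)
```

Because the selection is random per input, an attacker in a white-box setting cannot know in advance which family member will process a given example, which is the source of the robustness gain the abstract reports.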