Using adversarial noise for privacy protection: an evaluation of hybrid attacks across application targets
Ghosh, Anusha
Permalink
https://hdl.handle.net/2142/129762
Description
Title
Using adversarial noise for privacy protection: an evaluation of hybrid attacks across application targets
Author(s)
Ghosh, Anusha
Issue Date
2025-05-05
Director of Research (if dissertation) or Advisor (if thesis)
Wang, Gang
Department of Study
Siebel School Comp & Data Sci
Discipline
Computer Science
Degree Granting Institution
University of Illinois Urbana-Champaign
Degree Name
M.S.
Degree Level
Thesis
Keyword(s)
adversarial attacks
artificial intelligence
security
privacy
Language
eng
Abstract
The advent of modern artificial intelligence has given rise to new security concerns surrounding data uploaded to the Internet. Images posted to public sites such as social media can be used for malicious purposes, and a user has no recourse to prevent this once an image has been uploaded. However, previous work on adversarial attacks for protecting user privacy has focused on a single attack scenario, leaving users vulnerable to every application besides the one they have protected themselves against. This work explores the possibility of hybrid attacks: attacks designed to work on more than one model task in order to protect users from a variety of different concerns. We focus specifically on facial recognition and AI-powered image editing. We first test different attacks to find ones that work well on our chosen models. We then test the cross-application of these attacks and confirm that an attack designed for facial recognition models does not significantly and repeatably impact the performance of image editing models, and vice versa. Finally, we test the performance of layered attacks, in which the noise produced by attacks on different models is combined and applied to a single image. We find that these hybrid attacks significantly outperform single-model attacks on both models and present a viable method of protecting images against more than one model task.
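The abstract does not specify how the per-model noises are combined in a layered attack. A minimal sketch of one plausible scheme, assuming the perturbations are simply summed and then re-clipped to a shared L-infinity budget (the function names and the epsilon value are illustrative, not taken from the thesis):

```python
import numpy as np

def layer_perturbations(noises, epsilon=8 / 255):
    """Combine per-model adversarial noises, then clip the result so the
    layered perturbation stays within one L-infinity budget."""
    combined = np.sum(noises, axis=0)
    return np.clip(combined, -epsilon, epsilon)

def apply_attack(image, noise):
    """Add layered noise to an image in [0, 1] and re-clip to a valid image."""
    return np.clip(image + noise, 0.0, 1.0)

# Toy stand-ins for noises produced by attacks on two different model tasks
# (in practice these would come from, e.g., gradient-based attacks on a
# facial recognition model and an image editing model).
rng = np.random.default_rng(0)
image = rng.random((32, 32, 3))
noise_fr = rng.uniform(-8 / 255, 8 / 255, image.shape)    # facial recognition
noise_edit = rng.uniform(-8 / 255, 8 / 255, image.shape)  # image editing

hybrid = layer_perturbations([noise_fr, noise_edit])
protected = apply_attack(image, hybrid)
```

Clipping after summation keeps the hybrid perturbation as hard to perceive as either single-model attack alone, at the cost of possibly attenuating each component's contribution.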