The advent of modern artificial intelligence has given rise to new security concerns surrounding data uploaded to the Internet. Images uploaded to public sites such as social media can be used for malicious purposes, and a user has no recourse to prevent this once an image has been uploaded. However, previous work on adversarial attacks to protect user privacy has focused on one specific attack scenario, leaving a user vulnerable to every application besides the one they have protected themselves against. This work explores the possibility of hybrid attacks: attacks designed to work on more than one model task in order to protect users from a variety of different concerns. We focus specifically on facial recognition and AI-powered image editing. We first test different attacks to find ones that work well on our chosen models. We then test the cross application of these attacks, and confirm that an attack designed for facial recognition models does not significantly or reliably degrade the performance of image editing models, and vice versa. Finally, we test the performance of layered attacks, in which the noise produced by attacks on different models is combined and applied to a single image. We find that these hybrid attacks significantly outperform single-model attacks on both model tasks and present a viable method of protecting images against more than one model task.
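To make the layered-attack idea concrete, the sketch below shows one plausible way such a combination could be implemented. It assumes single-step FGSM-style perturbations and an L-infinity noise budget; the abstract does not specify the attack algorithms or norm used, so `fgsm_perturbation`, `layered_attack`, and `epsilon` are illustrative placeholders rather than the thesis's actual method.

```python
import torch

def fgsm_perturbation(model, loss_fn, image, target, epsilon):
    """One-step FGSM perturbation against a single model.

    Hypothetical helper: the thesis may use different or stronger attacks.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = loss_fn(model(image), target)
    loss.backward()
    # Step in the direction that increases the model's loss.
    return epsilon * image.grad.sign()

def layered_attack(image, delta_face, delta_edit, epsilon):
    """Combine noise aimed at a facial-recognition model with noise
    aimed at an image-editing model, then project the sum back into
    the L-inf epsilon ball so the layered noise stays imperceptible."""
    combined = (delta_face + delta_edit).clamp(-epsilon, epsilon)
    # Keep the perturbed image in the valid pixel range.
    return (image + combined).clamp(0.0, 1.0)
```

Under these assumptions, a protected image would be produced by computing one perturbation per target model and passing both to `layered_attack`; the projection step is one simple way to keep the combined noise within the same budget as each individual attack.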