Human factors in secure and non-abusive machine learning systems
Mink, Jaron Maurice
Permalink
https://hdl.handle.net/2142/125593
Description
- Title
- Human factors in secure and non-abusive machine learning systems
- Author(s)
- Mink, Jaron Maurice
- Issue Date
- 2024-07-10
- Director of Research (if dissertation) or Advisor (if thesis)
- Wang, Gang
- Doctoral Committee Chair(s)
- Wang, Gang
- Committee Member(s)
- Redmiles, Elissa M
- Gunter, Carl
- Cobb, Camille
- Department of Study
- Siebel Computing & Data Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois at Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- machine learning
- deepfake
- security
- privacy
- adversarial machine learning
- trustworthy machine learning
- usable security
- human factors
- secure machine learning
- non-abusive machine learning
- human-ai interaction
- human-centric machine learning
- human-centric AI
- human-ml interaction
- Abstract
- Today, a significant portion of mission-critical work traditionally performed by humans (e.g., driving cars, approving loans, medical triage) is on the verge of being replaced by machine learning (ML). Historically, failing to consider how humans interact with conventional software systems has led to significant harm. This risk is even greater for emerging ML systems, as there is a lack of principled methods for constructing safe and secure human-ML interaction paradigms. To prevent similar harm in ML-based systems, it is paramount that we understand vulnerabilities and apply safeguards now, while these systems are being designed and deployed. This dissertation investigates the security implications of the interaction between human factors and ML systems from two perspectives. Specifically, it examines how human factors can be exploited by ML-enabled abuse to reduce security, in the context of deepfake deception (Chapters 3 and 4), and how they can be harnessed to improve the security of ML systems, in the context of ML-enabled analyst tools (Chapter 5) and the application of adversarial ML defenses (Chapter 6). In summary, these works show how human factors and perspectives contribute to the security of ML systems, and that accounting for this interaction is necessary to protect against adversarial exploits.
- Graduation Semester
- 2024-08
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/125593
- Copyright and License Information
- Copyright 2024 Jaron Mink
Owning Collections
Graduate Dissertations and Theses at Illinois