Domain transferability, model robustness and data privacy in modern machine learning systems
Ma, Evelyn
This item's files can only be accessed by the System Administrators group.
Permalink
https://hdl.handle.net/2142/129712
Description
- Title
- Domain transferability, model robustness and data privacy in modern machine learning systems
- Author(s)
- Ma, Evelyn
- Issue Date
- 2025-04-27
- Director of Research (if dissertation) or Advisor (if thesis)
- Milenkovic, Olgica
- Doctoral Committee Chair(s)
- Etesami, Rasoul
- Committee Member(s)
- Li, Yingying
- Wang, Qiong
- Dong, Roy
- Department of Study
- Industrial & Enterprise Systems Engineering
- Discipline
- Industrial Engineering
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Model Robustness, Domain Transferability
- Abstract
- Developing secure, adaptable, and trustworthy machine learning (ML) systems requires a deep understanding of the interactions among Domain Transferability, Model Robustness, and Data Privacy. Despite the individual importance of these factors, their interrelationships remain underexplored, and empirical studies often reveal conflicting trade-offs; for example, techniques that enhance Data Privacy frequently compromise Model Robustness and Domain Transferability. To address these challenges, we systematically investigate these interconnections through four studies, offering both theoretical insights and practical solutions.

Our first study challenges the prevailing assumption that Model Robustness inherently enhances Domain Transferability. Our theoretical framework demonstrates that regularization of the model's feature extractor, rather than model robustness, is the more fundamental sufficient condition for relative domain transferability. Empirically, we provide counterexamples showing that robustness and generalization can be negatively correlated across datasets, contradicting the intuition that more robust models always generalize better.

In our second study, we improve Domain Transferability while preserving Data Privacy by proposing FedGTST, a novel transferable federated learning (FL) algorithm. Our theoretical analysis establishes that increasing the average Jacobian norm across clients while reducing the variance of those norms provides tight control over the target loss, thereby improving transferability in FL. Empirically, FedGTST outperforms existing baselines such as FedSR on public benchmarks, demonstrating its efficacy in enhancing global knowledge transfer.

Our third study highlights the often-overlooked risks to Model Robustness in privacy-preserving frameworks by developing data poisoning techniques tailored to FL in reinforcement learning (RL) tasks. It reveals that Federated Reinforcement Learning (FRL) is highly susceptible to poisoning attacks, inheriting security risks from both FL and RL. We introduce a general framework that formulates FRL poisoning as an optimization problem and propose a poisoning protocol tailored to policy-based FRL. Empirically, our approach consistently degrades FRL performance across diverse OpenAI Gym environments, outperforming baseline poisoning methods and exposing critical security challenges in FRL training.

Finally, our fourth study mitigates the degradation of Model Robustness and Domain Transferability in privacy-preserving frameworks for Large Language Models (LLMs). Specifically, we design GUARD, an LLM unlearning mechanism driven by data attribution that reduces the robustness degradation caused by unlearning data points with high influence on performance over the retained dataset. Our experiments demonstrate that GUARD effectively mitigates unintended forgetting across various LLM unlearning baselines and architectures, highlighting its ability to selectively unlearn requested samples without significantly compromising essential knowledge.

Collectively, our research advances the theoretical understanding and practical implementation of robust, transferable, and privacy-preserving ML systems, with applications in federated learning, reinforcement learning, computer vision, and large language models.
- Graduation Semester
- 2025-05
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/129712
- Copyright and License Information
- Copyright © Evelyn Ma, 2025. All rights reserved.
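The abstract's second study ties transferability in federated learning to two statistics: the average of the clients' feature-extractor Jacobian norms and their variance. As a minimal illustration of those quantities only (not the FedGTST algorithm itself; `jacobian_norm` and the finite-difference estimator are assumptions made for this sketch), one could compute them as:

```python
import numpy as np

def jacobian_norm(feature_fn, x, eps=1e-5):
    """Finite-difference estimate of the Frobenius norm of the
    input-Jacobian of a feature extractor at point x (a sketch)."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(feature_fn(x), dtype=float)
    jac = np.zeros((f0.size, x.size))
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        jac[:, i] = (np.asarray(feature_fn(xp), dtype=float) - f0) / eps
    return np.linalg.norm(jac)  # Frobenius norm by default for matrices

def cross_client_jacobian_stats(norms):
    """Mean and variance of per-client Jacobian norms; the analysis
    summarized above aims to raise the mean while shrinking the variance."""
    norms = np.asarray(norms, dtype=float)
    return norms.mean(), norms.var()
```

For a linear feature extractor `f(x) = A @ x` the estimate recovers the Frobenius norm of `A` exactly, which makes the helper easy to sanity-check before applying it to a learned network.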
Owning Collections
Graduate Dissertations and Theses at Illinois PRIMARY