Domain generalization for sequential data via invariant subspace recovery
Sharma, Ashutosh
Permalink
https://hdl.handle.net/2142/129294
Description
- Title
- Domain generalization for sequential data via invariant subspace recovery
- Author(s)
- Sharma, Ashutosh
- Issue Date
- 2025-05-05
- Director of Research (if dissertation) or Advisor (if thesis)
- Zhao, Han
- Department of Study
- Siebel School of Computing and Data Science
- Discipline
- Computer Science
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- Domain Generalization
- Kalman Filter
- Invariant Subspace Recovery
- Invariant Features
- Sequential Data
- Abstract
- Recent works have explored continuous and discrete temporal domain generalization, which, given input data from multiple temporally indexed domains, aims to train a model that generalizes across time. State-of-the-art systems such as DRAIN [1] jointly model the temporal evolution of the input domain and the model dynamics to train optimal predictors, but these approaches cannot generalize to multiple categorically indexed domains with temporally evolving data. In another line of work, ISR [2] trains invariant predictors for a set of categorical domains, assuming i.i.d.-sampled observations to recover the invariant subspace, but this cannot be applied directly to sequentially sampled data. In this work, we propose leveraging subspace recovery techniques to train invariant predictors over temporally evolving data. We formalize the data-generation model as a Dynamic Bayesian Network in which the latent representation at any time causally depends only on the current time's label and the previous time's latent variable. Assuming access to only the observations (and labels for training environments) in each environment, we first estimate the parameters of the model from training data via Expectation-Maximization. We then derive an optimal online predictor for the test environment, which forms the strong baseline for our work. Next, we estimate the invariant feature subspace from the latent-variable distribution of the training data and project the observed features onto this subspace; these invariant features are then used to generate invariant predictions (see the sketch after the record details below). Similar to [3], we also propose a benchmark of linear unit tests for this novel setting. Our experiments on the benchmark show that our strong baseline outperforms an i.i.d. classifier by 17%, and that our invariant predictor further improves accuracy by 3% over the strong baseline, validating the efficacy of our method.
- Graduation Semester
- 2025-05
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/129294
- Copyright and License Information
- Copyright 2025 Ashutosh Sharma
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)
Graduate Theses and Dissertations at Illinois
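
The abstract above outlines the invariant-subspace step: estimate class-conditional latent statistics in each training environment, recover the directions along which those statistics agree across environments, and project features onto them before fitting a predictor. Below is a minimal, hypothetical Python sketch of an ISR-mean-style recovery applied to pre-extracted (e.g., Kalman-smoothed) features. The synthetic data generator, the dimensions, and the assumed invariant-subspace dimension k are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch of ISR-mean-style invariant subspace recovery.
# All names and the synthetic generator are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_env, n_per_env = 6, 4, 500

# Synthetic environments: the first 3 coordinates carry an invariant
# class signal; the last 3 shift spuriously with the environment index.
def make_env(e):
    y = rng.integers(0, 2, n_per_env)
    mu_inv = np.where(y[:, None] == 1, 1.0, -1.0) * np.ones((n_per_env, 3))
    mu_spu = (e + 1) * rng.standard_normal(3) * np.ones((n_per_env, 3))
    x = np.hstack([mu_inv, mu_spu]) + rng.standard_normal((n_per_env, d))
    return x, y

envs = [make_env(e) for e in range(n_env)]

# ISR-mean idea: class-conditional means agree across environments along
# invariant directions, so their differences relative to a reference
# environment vanish (up to noise) on the invariant subspace.
def class_means(x, y):
    return np.stack([x[y == c].mean(axis=0) for c in (0, 1)])

ref = class_means(*envs[0])
diffs = np.vstack([class_means(x, y) - ref for x, y in envs[1:]])

# Invariant directions = approximate null space of the stacked differences:
# the right-singular vectors with the smallest singular values.
_, s, vt = np.linalg.svd(diffs, full_matrices=True)
k = 3          # assumed invariant-subspace dimension
P = vt[-k:]    # rows spanning the least-varying directions

# Fit a linear predictor on the projected (invariant) features.
x_all = np.vstack([x for x, _ in envs])
y_all = np.concatenate([y for _, y in envs])
clf = LogisticRegression().fit(x_all @ P.T, y_all)

# Evaluate on an unseen environment with a large spurious shift.
x_te, y_te = make_env(10)
print("test accuracy:", clf.score(x_te @ P.T, y_te))
```

The key design choice in this reading is the null space of the SVD: directions along which class-conditional means differ across environments are treated as spurious and discarded, so the test environment's shift is projected out before prediction.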