Learning-based control system design: theory and applications
Zhang, Xiangyuan
Permalink
https://hdl.handle.net/2142/132463
Description
- Title
- Learning-based control system design: theory and applications
- Author(s)
- Zhang, Xiangyuan
- Issue Date
- 2025-09-11
- Director of Research
- Başar, Tamer
- Doctoral Committee Chair(s)
- Başar, Tamer
- Committee Member(s)
- Srikant, Rayadurgam
- Dullerud, Geir
- Mitra, Sayan
- Department of Study
- Electrical & Computer Engineering
- Discipline
- Electrical & Computer Engineering
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Control systems
- Reinforcement learning
- Dynamical systems
- Optimization
- Abstract
- Reinforcement learning (RL) offers a versatile, data-driven framework for feedback controller synthesis applicable to a wide range of dynamical systems. Its adaptability makes RL suitable for large-scale, complex control applications where environments change rapidly and precise symbolic modeling is impractical. Despite this potential, the deployment of RL in real-world control systems remains limited due to the catastrophic risks associated with control failures. This dissertation advances the application of RL for control by establishing its theoretical foundation and demonstrating its practical capabilities. The first half of the dissertation develops model-free policy gradient (PG) algorithms with proven efficacy and efficiency for addressing fundamental benchmarks in control theory. These include the state-feedback linear-quadratic regulator in Chapter 2, the two-player zero-sum linear-quadratic dynamic game and H-infinity robust control in Chapter 3, Kalman filtering and output-feedback linear-quadratic-Gaussian control in Chapter 4, and terminal-state minimax estimation in Chapter 5. The central theme across these works is the development of control-specific RL algorithms, rather than the analysis of generic, out-of-the-box RL methods. Inspired by the strong mathematical foundations of the model-based solvers that underpin traditional control theory, our approach leverages the rich structural properties of each control task. This methodology bridges the gap between model-based and RL-based control theories, enabling strong performance guarantees for data-driven controllers. To advance RL-based controllers toward reliable real-world deployment, the second half of this dissertation focuses on practical learning-based control system designs.
This effort begins in Chapter 6 with Controlgym, an open-source benchmark of large-scale, safety-critical control applications designed for the rigorous evaluation of RL algorithms on metrics such as stability, robustness, efficiency, and scalability. Leveraging this testbed, we develop two distinct control architectures. Chapter 7 proposes a hybrid control architecture for nonlinear partial differential equations, where a controller derived from a data-driven surrogate model is used to warm-start a model-free policy optimization stage. This fine-tuning step compensates for errors in the surrogate model, improving control performance while maintaining high computational efficiency. Chapter 8 introduces a Decision Transformer, which reframes the control problem as a sequence prediction task. The Decision Transformer architecture demonstrates notable zero-shot generalization and rapid adaptation to new control tasks with minimal data. The dissertation concludes in Chapter 9 with a summary of findings and a discussion of future research directions.
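To make the flavor of the Chapter 2 setting concrete, the following is a minimal, illustrative sketch of model-free policy gradient for a toy discrete-time LQR problem. It is not the dissertation's algorithm: the system matrices, horizon, perturbation radius, and step rule are all hypothetical choices, and the two-point zeroth-order gradient estimator is one standard way to make the method model-free (costs are evaluated only by rolling out the policy, never by differentiating through the model).

```python
import numpy as np

# Hypothetical toy system (not from the dissertation): a double
# integrator x_{t+1} = A x_t + B u_t with quadratic stage cost.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])

def cost(K, horizon=50):
    """Finite-horizon cost of the linear policy u = -K x, averaged
    over two fixed initial states (a stand-in for sampled rollouts)."""
    total = 0.0
    for x0 in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
        x = x0.copy()
        for _ in range(horizon):
            u = -K @ x
            total += float(x @ Q @ x + u @ R @ u)
            x = A @ x + B @ u
    return total / 2.0

def zo_gradient(K, rng, radius=0.05, samples=64):
    """Two-point zeroth-order gradient estimate: perturb K on a sphere
    of the given radius and average the weighted cost differences."""
    d = K.size
    g = np.zeros_like(K)
    for _ in range(samples):
        U = rng.standard_normal(K.shape)
        U *= radius / np.linalg.norm(U)
        g += (cost(K + U) - cost(K - U)) * U
    return d * g / (2.0 * radius**2 * samples)

rng = np.random.default_rng(0)
K = np.zeros((1, 2))            # start from the zero (uncontrolled) policy
c0 = cost(K)
for _ in range(200):
    g = zo_gradient(K, rng)
    K -= 0.01 * g / (np.linalg.norm(g) + 1e-12)   # normalized descent step
print(f"cost before: {c0:.1f}, cost after: {cost(K):.1f}")
```

The point of the sketch is the interface, not the numbers: the learner touches the dynamics only through rollout costs, which is what distinguishes the model-free PG approach from classical model-based LQR synthesis via the Riccati equation.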
- Graduation Semester
- 2025-12
- Type of Resource
- Thesis
- Copyright and License Information
- Copyright 2025 Xiangyuan Zhang
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)