A data-driven method for improving a black-box controller
Gisi, Alex
Permalink
https://hdl.handle.net/2142/129310
Description
- Title
- A data-driven method for improving a black-box controller
- Author(s)
- Gisi, Alex
- Issue Date
- 2025-05-05
- Director of Research (if dissertation) or Advisor (if thesis)
- Norris, William R
- Department of Study
- Electrical & Computer Engineering
- Discipline
- Electrical & Computer Engineering
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- M.S.
- Degree Level
- Thesis
- Keyword(s)
- Machine learning
- Robotics
- Abstract
- Optimal control design is an important engineering task. The optimality of a controller is measured by how well closed-loop system trajectories under that controller satisfy a given measure of performance. If a controller is sub-optimal with respect to the performance measure, it would be beneficial to redesign it; however, controller redesign can be expensive. Therefore, it is desirable to alter the closed-loop system performance without touching the baseline controller, that is, by treating it as a black box. This thesis proposes a method for doing so based on data collected by observing closed-loop trajectories under the baseline controller. The method is based on gain scheduling, i.e., multiplicative modulation of the baseline control signal. A gain scheduling policy describes how and when to apply gains to alter the baseline signal. A gain scheduling policy parameterization and a training algorithm for automatically improving the black-box baseline controller are proposed. It is shown that, for the proposed policy parameterization, training can be made more efficient by applying an alternating optimization technique. The resulting gain scheduling policy and training algorithm were applied to three control systems with distinct qualities: a linear time-invariant system, an inverted pendulum, and a skid-steer mobile robot simulation. Additionally, a combined powertrain and kinematic model is developed to implement the mobile robot simulation. To perform realistic evaluations, both linear-quadratic regulator and reinforcement learning-trained controllers are used in the method evaluation. Furthermore, two other learning algorithms besides the proposed one are used to give the results context. It is shown that the proposed method can effectively improve the output of either baseline controller, although certain situations cause worsened performance. The practical implications of these results are examined using examples.
- Graduation Semester
- 2025-05
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/129310
- Copyright and License Information
- Copyright 2025 Alex Gisi
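The abstract's core idea, wrapping a black-box baseline controller so its output is scaled by a learned, state-dependent gain rather than redesigning the controller itself, can be sketched as follows. This is a minimal illustration, not the thesis's actual algorithm: the simple proportional baseline, the linear-in-features gain parameterization, and all function names here are assumptions chosen for clarity.

```python
# Hypothetical sketch of multiplicative gain scheduling around a
# black-box controller. Everything below is illustrative, not the
# parameterization or training algorithm from the thesis.
import numpy as np

def baseline_controller(x):
    # Black-box baseline: treated as opaque by the method; here it
    # happens to be a simple proportional law for demonstration.
    return -1.5 * x

def gain(x, theta):
    # State-dependent multiplicative gain with an assumed
    # polynomial feature map [1, x, x^2].
    features = np.array([1.0, x, x**2])
    return float(theta @ features)

def modulated_controller(x, theta):
    # The baseline signal is only scaled, never redesigned.
    return gain(x, theta) * baseline_controller(x)

def rollout_cost(theta, x0=1.0, steps=50, dt=0.1):
    # Quadratic trajectory cost for a scalar integrator x' = u,
    # the kind of closed-loop performance measure a data-driven
    # training loop could minimize over theta.
    x, cost = x0, 0.0
    for _ in range(steps):
        u = modulated_controller(x, theta)
        cost += (x**2 + 0.1 * u**2) * dt
        x = x + u * dt
    return cost

# With theta = [1, 0, 0] the gain is identically 1, so the modulated
# controller exactly recovers the untouched baseline.
theta_identity = np.array([1.0, 0.0, 0.0])
```

Starting from `theta_identity` guarantees the wrapped system initially matches the baseline's closed-loop behavior, so any parameter update that lowers `rollout_cost` is a measured improvement over the black box.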
Owning Collections
Graduate Dissertations and Theses at Illinois (PRIMARY)