Safe and adaptive reinforcement learning for robotics applications
Cheng, Yikun
This item's files can only be accessed by the System Administrators group.
Permalink
https://hdl.handle.net/2142/132764
Description
- Title
- Safe and adaptive reinforcement learning for robotics applications
- Author(s)
- Cheng, Yikun
- Issue Date
- 2025-11-21
- Director of Research (if dissertation) or Advisor (if thesis)
- Hovakimyan, Naira
- Doctoral Committee Chair(s)
- Hovakimyan, Naira
- Committee Member(s)
- Salapaka, Srinivasa M
- Stipanovic, Dusan M
- Zhao, Pan
- Department of Study
- Mechanical Sci & Engineering
- Discipline
- Mechanical Engineering
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Adaptive Control, Safe Reinforcement Learning, Control Barrier Function, Barrier Function
- Abstract
- In recent years, learning-based control methods, especially those leveraging reinforcement learning (RL) and deep learning, have demonstrated impressive performance on complex robotics control tasks. However, they often lack safety and robustness guarantees, which makes them difficult to apply to safety-critical systems operating in dynamic environments with various uncertainties and disturbances. This Ph.D. thesis integrates control-theoretic methods to develop learning-based control architectures with enhanced robustness and stability guarantees, and validates their efficacy through real-world robotics applications. First, it introduces a method to rapidly adapt RL policies in the presence of environmental perturbations via L1 adaptive control, which acts as an add-on module that directly estimates and cancels the uncertainties (within the bandwidth of the control channel) induced by those perturbations. Second, we design a safe and efficient RL algorithm using disturbance-estimator-based control barrier functions (CBFs), which can serve as a safety filter for any model-free RL method. Unlike most existing safe RL methods, which address model uncertainty through model learning and therefore require collecting sufficient data before achieving good performance, our method leverages disturbance estimators to accurately estimate the uncertainty from the start and incorporates the estimate into a robust CBF condition to generate safe actions. Finally, we present a comprehensive safe and adaptive learning-based control framework for reinforcement learning, referred to as SARL (Safe and Adaptive Reinforcement Learning), which enables RL-controlled robotic systems to operate safely and effectively in uncertain environments. The framework provides an add-on control architecture that adapts RL policies to a perturbed environment, improving control performance while avoiding safety violations. To experimentally validate SARL's efficacy, we apply it to autonomous, precise drone landing on moving platforms subject to significant disturbances and unmodeled dynamics.
- Graduation Semester
- 2025-12
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/132764
- Copyright and License Information
- Copyright 2025 Yikun Cheng
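The abstract describes using a disturbance-estimator-based control barrier function (CBF) as a safety filter on top of a model-free RL policy. The following is a minimal illustrative sketch of that idea, not the thesis implementation: a 1-D single integrator x' = u + d with a box constraint, where the robust CBF condition reduces to a closed-form clip of the RL action. The dynamics, barrier choice, gain `alpha`, and the stand-in disturbance estimate `d_hat` are all assumptions made for illustration.

```python
def cbf_safety_filter(x, u_rl, d_hat, x_max=1.0, alpha=2.0):
    """Project an RL action onto a robust CBF condition (illustrative sketch).

    Assumed system: x' = u + d, with safe set h(x) = x_max - x >= 0.
    Robust CBF condition, using the estimated disturbance d_hat in
    place of the true d:
        h_dot = -(u + d_hat) >= -alpha * h(x)
        =>  u <= alpha * (x_max - x) - d_hat
    In 1-D the "minimally invasive" QP solution is just a clip of the
    RL action against this upper bound.
    """
    u_upper = alpha * (x_max - x) - d_hat
    return min(u_rl, u_upper)

# An aggressive RL action near the constraint boundary is attenuated,
# while a conservative action far from it passes through unchanged.
print(cbf_safety_filter(x=0.5, u_rl=5.0, d_hat=0.1))  # clipped to 0.9
print(cbf_safety_filter(x=0.0, u_rl=0.5, d_hat=0.0))  # unchanged: 0.5
```

In higher dimensions the same projection becomes a quadratic program (minimize ||u - u_rl||^2 subject to the robust CBF constraint); the point of the disturbance estimator is that `d_hat` can be accurate from the first interaction, rather than requiring data collection to learn a dynamics model.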
Owning Collections
Graduate Dissertations and Theses at Illinois (Primary)