Local and diverse explanations for autonomous systems
Brindise, Noel Christine
Permalink
https://hdl.handle.net/2142/129446
Description
- Title
- Local and diverse explanations for autonomous systems
- Author(s)
- Brindise, Noel Christine
- Issue Date
- 2025-04-28
- Director of Research
- Langbort, Cedric
- Doctoral Committee Chair(s)
- Langbort, Cedric
- Committee Member(s)
- Driggs-Campbell, Katherine
- Gremillion, Gregory
- Mitra, Sayan
- Ornik, Melkior
- Department of Study
- Aerospace Engineering
- Discipline
- Aerospace Engineering
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- Explainable Autonomy
- Explainable AI
- Reinforcement Learning
- Linear Temporal Logic
- Markov Decision Process
- Abstract
- Recent advancements in artificial intelligence and autonomous systems have yielded impressive applications, from robots to self-driving cars to predictive modeling. However, these systems are often difficult for a human to understand and rarely offer explanations of their own behaviors, intentions, decisions, or predictions. This disconnect poses a problem for efficacy, efficiency, and ethics. Transparency and trustworthiness are key priorities for autonomy in sensitive settings such as healthcare or military domains. Moreover, poorly understood systems are difficult to optimize; autonomous agent training is notoriously dependent on the designer's experience and ad hoc trial and error. Even a well-trained agent may be ignored during cooperative tasks if the human user is unsure of its intent or reasoning. In short, the benefits of autonomy are limited by its opacity.

In response, several fields focused on explanation methods have emerged, though many perspectives remain unexplored. This thesis considers explainability specifically for autonomous planning, a topic at the intersection of Explainable AI (xAI), Explainable AI Planning (XAIP), and Explainable Reinforcement Learning (XRL). Particularly in autonomous planning, these fields have yet to establish systematic goals or standards, which has allowed the role and form of 'explanation' to vary widely. This thesis begins by introducing novel notions of 'pointwise-in-trajectory' and 'alternatives-based' explanation, two perspectives largely absent from the eclectic landscape of current XRL and XAIP.

The main contributions of this work are three explainability methods incorporating these pointwise-in-trajectory and alternatives-based perspectives. The first is Rule Status Assessment (RSA), an algorithm for post-hoc trajectory diagnostics: RSA analyzes system trajectories using an adapted concept of Linear Temporal Logic (LTL) specifications, describing any point in a trajectory in terms of an LTL-based 'status' (a toy sketch of this idea follows the description list below). The second is Diverse Near-Optimal Alternatives (DNA), which applies to value-based Reinforcement Learning settings; DNA provides explanations by seeking a set of policy 'options' which are cost-effective and generate distinct trajectories. The last is Live LTL Progress Tracking (LPT), a second LTL-based framework which describes progress toward a goal 'live' as a trajectory is created. In all, these methods are shown to be promising at both the development and deployment stages for autonomous systems, taking Reinforcement Learning agents as a primary application. Several uses for these methods are demonstrated in simulated environments, with potential for wide-ranging applications.
- Graduation Semester
- 2025-05
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/129446
- Copyright and License Information
- Copyright 2025 Noel Christine Brindise
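
The pointwise LTL 'status' idea mentioned in the abstract can be illustrated with a toy example. The sketch below is not the thesis's RSA or LPT algorithm; it is a minimal Python illustration, assuming a specification of the shape "always avoid hazard, eventually reach goal" (G !hazard & F goal) and a trajectory given as a list of labeled states. All names here (`pointwise_status`, the `Status` values, the proposition labels) are hypothetical, and finite-trace semantics are deliberately simplified.

```python
# Minimal sketch (not the thesis's RSA/LPT): pointwise "status" of a
# finite trajectory prefix against the LTL-style spec  G(!hazard) & F(goal).
# Each state is represented as the set of atomic propositions true there.

from enum import Enum

class Status(Enum):
    VIOLATED = "violated"    # hazard was hit: the spec can no longer hold
    SATISFIED = "satisfied"  # goal reached with no hazard so far
    PENDING = "pending"      # no violation yet, goal not yet reached

def pointwise_status(trajectory):
    """Assign a status to every point of a finite trajectory prefix."""
    statuses = []
    violated = False
    seen_goal = False
    for labels in trajectory:
        if "hazard" in labels:
            violated = True      # G(!hazard) is falsified from here on
        if "goal" in labels:
            seen_goal = True     # F(goal) has been witnessed
        if violated:
            statuses.append(Status.VIOLATED)
        elif seen_goal:
            statuses.append(Status.SATISFIED)
        else:
            statuses.append(Status.PENDING)
    return statuses

# Example: the agent wanders, then reaches the goal at the fourth step.
traj = [set(), {"obstacle_near"}, set(), {"goal"}, set()]
print([s.value for s in pointwise_status(traj)])
# -> ['pending', 'pending', 'pending', 'satisfied', 'satisfied']
```

The point of the illustration is that a status labels every point of a trajectory rather than judging the trajectory only as a whole; the thesis's RSA and LPT frameworks develop this idea rigorously for full LTL specifications.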
Owning Collections
Graduate Dissertations and Theses at Illinois (primary)