Designing and evaluating a fully interpretable neural network for learner behavior detection
Pinto, Juan D.
Permalink
https://hdl.handle.net/2142/132554
Description
- Title
- Designing and evaluating a fully interpretable neural network for learner behavior detection
- Author(s)
- Pinto, Juan D.
- Issue Date
- 2025-12-04
- Director of Research (if dissertation) or Advisor (if thesis)
- Paquette, Luc
- Doctoral Committee Chair(s)
- Paquette, Luc
- Committee Member(s)
- Lane, H. C.
- Bosch, Philip N.
- Tanchuk, Nicolas J.
- Department of Study
- Curriculum and Instruction
- Discipline
- Curriculum and Instruction
- Degree Granting Institution
- University of Illinois Urbana-Champaign
- Degree Name
- Ph.D.
- Degree Level
- Dissertation
- Keyword(s)
- educational data mining
- learning analytics
- machine learning
- explainable AI
- artificial intelligence
- educational technology
- Abstract
- The increasing complexity of machine learning models in education has created a "challenge of interpretability," in which opaque decision-making processes risk undermining fairness, accountability, and trust, among other values. This dissertation confronts this challenge by proposing and validating an alternative paradigm: developing neural networks that are interpretable by design. Through a series of three interconnected studies, this work demonstrates an end-to-end methodology for creating, evaluating, and validating a fully interpretable model for learner behavior detection. The first study details the design of a novel, constraints-based convolutional neural network for identifying gaming-the-system behavior. By engineering the model's architecture and training process, its convolutional filters are made to function as explicit, human-readable behavioral patterns, ensuring that the evidence for its predictions has full explanatory potential and is directly tied to its inference process. The second study presents a human-grounded evaluation to rigorously assess the model's explainability. The results demonstrate that education researchers, regardless of their machine learning expertise, could use the model's explanations to accurately predict its outputs and identify how to alter them. This provides strong evidence that the explanations are both faithful to the model's internal logic and intelligible to human users. The third study validates the knowledge captured by the model through an interview with a subject-matter expert. The expert confirmed that the majority of the patterns learned by the model were valid indicators of gaming-the-system behavior, despite not being included in a cognitive model previously created by an expert. This highlights the potential of interpretable models to serve not only as predictive tools but also as instruments for knowledge discovery.
Taken together, these studies offer a proof-of-concept for a methodology that moves beyond the prevailing "black-box" paradigm. By shifting the focus from post-hoc explanations to interpretable-by-design architectures, this dissertation provides a framework for building more transparent, trustworthy, and insightful AI in education.
- Graduation Semester
- 2025-12
- Type of Resource
- Thesis
- Handle URL
- https://hdl.handle.net/2142/132554
- Copyright and License Information
- Copyright 2025 Juan Pinto
Owning Collections
Graduate Dissertations and Theses at Illinois PRIMARY