Title: Data-driven adaptive learning systems
Author(s): Li, Xiao
Director of Research: Zhang, Jinming
Doctoral Committee Chair(s): Zhang, Jinming
Doctoral Committee Member(s): Chang, Hua-hua; Anderson, Carolyn Jane; Kern, Justin Louis
Department / Program: Educational Psychology
Discipline: Educational Psychology
Degree Granting Institution: University of Illinois at Urbana-Champaign
Subject(s): Adaptive learning system
Abstract: Adaptive learning systems can provide more adaptive and efficient assessment and learning experiences than traditional classroom settings. A conventional adaptive learning system involves a learner, a latent trait estimator, and a learning strategy/plan. The latent trait estimator measures the learner's latent traits from his/her responses to a test, where computerized adaptive testing (CAT) or computerized classification testing (CCT) tailors test items to the learner's ability so as to give a more efficient latent trait estimate. The learning plan (called a policy) is the other key component of such systems: it is the algorithm that designs the learning paths, that is, selects learning materials for learners based on information such as the learner's current progress and skills and the content of the learning materials. In this thesis, we discuss and address issues related to adaptive testing and learning problems using data-driven methods. In the first chapter, we discuss the challenge of content balancing in variable-length adaptive tests and propose feasible data-driven methods. Content balancing is one of the most important issues in CCT. To adapt to variable-length forms, special treatment is needed to control content constraints without knowledge of the test length during the test. To this end, we propose the concepts of "look ahead" and "step size" to adaptively control content constraints at each item selection step. The step size gives a prediction of the number of items to be selected at the current stage, that is, how far we will look ahead. Two look-ahead content balancing (LA-CB) methods, one with a constant step size and another with an adaptive step size, are proposed as feasible solutions to balancing content areas in variable-length computerized classification testing (VL-CCT).
The proposed LA-CB methods are compared with conventional item selection methods in variable-length tests under different classification method settings. Simulation results show that, integrated with heuristic item selection methods, the proposed LA-CB methods outperform the conventional item selection methods, with fewer constraint violations and higher classification accuracy. The second issue we address is finding the learning policy that designs the optimal learning path in an adaptive learning system under hierarchical skill structures. To this end, we first develop a model for learners' hierarchical skills in the adaptive learning system. Based on the hierarchical skill model and the classical cognitive diagnosis model, we further develop a framework to model various levels of proficiency related to hierarchical skills. The optimal learning policy that accounts for the hierarchical structure of skills is found by applying a data-driven reinforcement learning algorithm, which does not require information about learners' learning transition processes. The effectiveness of the proposed framework is demonstrated via simulation studies. Lastly, we solve the problem of finding a learning policy when the latent traits are assumed continuous and the transition model is unknown. We formulate the adaptive learning problem as a Markov decision process (MDP). We apply a model-free deep reinforcement learning algorithm, the deep Q-learning algorithm, which is data-driven and can effectively find the optimal learning policy from data on learners' learning processes without knowing the actual transition model of the learner's continuous latent traits. To utilize available data efficiently, we further develop a transition model estimator that emulates the learner's learning process using neural networks. The transition model estimator can be used in the deep Q-learning algorithm so that the optimal learning policy for a learner can be discovered more efficiently.
Numerical simulation studies verify that the proposed algorithm is efficient in finding a good learning policy; in particular, with the aid of the transition model estimator, it can find the optimal learning policy after training on only a small number of learners.
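The Q-learning formulation summarized in the abstract can be illustrated in miniature. The sketch below runs plain tabular Q-learning on a toy learning-path MDP with four discrete mastery states; the thesis itself uses deep Q-learning over continuous latent traits, and every state, action, reward, and transition rule here is invented purely for illustration, not taken from the dissertation.

```python
import random

# Toy learning-path MDP (hypothetical): mastery states 0..3, state 3 is the goal.
# Actions are learning materials: 0 = easy material, 1 = hard material.
N_STATES = 4
ACTIONS = [0, 1]

def step(state, action):
    """Illustrative transition: hard material advances mastery reliably once the
    learner is ready (state >= 1); easy material advances only half the time."""
    if action == 1 and state >= 1:
        nxt = min(state + 1, N_STATES - 1)
    elif action == 0:
        nxt = min(state + 1, N_STATES - 1) if random.random() < 0.5 else state
    else:
        nxt = state  # hard material given too early: no progress
    reward = 1.0 if nxt == N_STATES - 1 else -0.1  # small penalty keeps paths short
    return nxt, reward

def q_learn(episodes=2000, alpha=0.5, gamma=0.9, eps=0.1):
    """Tabular Q-learning with epsilon-greedy exploration; no transition
    model is used, only sampled (state, action, reward, next state) data."""
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            if random.random() < eps:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[s][act])
            s2, r = step(s, a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

random.seed(0)
q = q_learn()
# Greedy policy per state: which material to assign at each mastery level.
policy = [max(ACTIONS, key=lambda act: q[s][act]) for s in range(N_STATES)]
```

Under these toy dynamics the learned policy assigns easy material at state 0 (hard material yields no progress there) and hard material at states 1 and 2, which is the same kind of data-driven path selection, without an explicit transition model, that the deep Q-learning approach performs over continuous traits.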
Issue Date: 2020-04-14
Rights Information: Copyright 2020 Xiao Li
Date Available in IDEALS: 2020-08-26
Date Deposited: 2020-05
