Files in this item

Files: WANG-DISSERTATION-2016.pdf (5MB)
Description: (no description provided)
Format: PDF (application/pdf)

Description

Title: Some theoretical and applied developments to support cognitive learning and adaptive testing
Author(s): Wang, Shiyu
Director of Research: Douglas, Jeff A.
Doctoral Committee Chair(s): Douglas, Jeff A.
Doctoral Committee Member(s): Chang, Hua-Hua; Fellouris, Georgios; Culpepper, Steven A.; Zhang, Jinming
Department / Program: Statistics
Discipline: Statistics
Degree Granting Institution: University of Illinois at Urbana-Champaign
Degree: Ph.D.
Genre: Dissertation
Subject(s): Cognitive Diagnosis
Model Misspecification
Maximum Likelihood, Robust Estimation
Large Sample Theory
Computerized Adaptive Testing
Item Response Theory
Martingale Limit Theory
Nominal Response Model
Response Revision
Sequential Design
Abstract: Cognitive Diagnostic Modeling (CDM) and Computerized Adaptive Testing (CAT) are useful tools for measuring subjects' latent abilities from two different perspectives. CDM plays an important role in fine-grained assessment, where the primary purpose is to accurately classify subjects according to the skills or attributes they possess, while CAT is a useful tool for coarse-grained assessment, providing a single number to indicate a student's overall ability. This thesis discusses and solves several theoretical and applied issues in these two areas. The first problem we investigate concerns a nonparametric classifier in cognitive diagnosis. Latent class models for cognitive diagnosis have been developed to classify examinees into one of the 2^K attribute profiles arising from a K-dimensional vector of binary skill indicators. These models recognize that response patterns tend to deviate from the ideal responses that would arise if skills and items generated item responses through a purely deterministic conjunctive process. An alternative to employing these latent class models is to minimize the distance between observed item response patterns and ideal response patterns, in a nonparametric fashion that uses no stochastic terms for these deviations. Theorems are presented that show the consistency of this approach when the true model is one of several common latent class models for cognitive diagnosis. Consistency of classification is independent of sample size, because no model parameters need to be estimated. Simultaneous consistency for a large group of subjects can also be shown, given conditions on how sample size and test length grow with one another. The second issue we consider is still within the CDM framework, but our focus is on model misspecification. The maximum likelihood classification rule is a standard method for classifying examinee attribute profiles in cognitive diagnosis models.
Its asymptotic behavior is well understood when the model is assumed to be correct, but it has not been explored in the case of misspecified latent class models. We investigate the consequences of using a simple model when the true model is different. In general, when a CDM is misspecified as a conjunctive model, the maximum likelihood estimator (MLE) for attribute profiles is not necessarily consistent. A sufficient condition for the MLE to be a consistent estimator under a misspecified DINA model is found; the true model can be any conjunctive model or even a compensatory model. Two examples are provided to illustrate the consistency and inconsistency of the MLE under a misspecified DINA model. A Robust DINA MLE technique is proposed to overcome the inconsistency issue, and theorems are presented to show that it is a consistent estimator of the attribute profile as long as the true model is a conjunctive model. Simulation results indicate that when the true model is a conjunctive model, the Robust DINA MLE and the DINA MLE based on the simulated item parameters can yield relatively good classification results even when the test length is short. These findings demonstrate that, in some cases, simple models can be fitted without severely affecting classification accuracy. The last problem we discuss and solve is a controversial issue in CAT. In CAT, items are selected in real time and adapted to the test-taker's ability. A long-debated criticism of CAT is that it does not allow test-takers to review and revise their responses. The last chapter of this thesis presents a CAT design that preserves the efficiency of a conventional CAT but allows test-takers to revise their previous answers at any time during the test; the only restriction imposed is on the number of revisions to the same item.
The proposed method relies on a polytomous Item Response Theory model that describes the first response to each item as well as any subsequent revisions to it. The test-taker's ability is updated online with the maximizer of a partial likelihood function. We establish the strong consistency and asymptotic normality of the final ability estimator under minimal conditions on the test-taker's revision behavior. Simulation results also indicate that the proposed design can reduce measurement error and is robust against several well-known test-taking strategies.
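The nonparametric classification idea summarized in the abstract — assigning each examinee the attribute profile whose ideal response pattern lies closest to the observed responses — can be sketched as follows. This is a minimal illustration under a conjunctive (DINA-type) ideal response rule, not the author's code; the Q-matrix, response vector, and function names are hypothetical.

```python
# Nonparametric classification for cognitive diagnosis: choose the attribute
# profile whose ideal response pattern minimizes the Hamming distance to the
# observed item responses. No item parameters are estimated.
from itertools import product
import numpy as np

def ideal_response(alpha, Q):
    """Conjunctive ideal responses: item j is answered correctly iff the
    profile alpha possesses every attribute that item j requires (row j of Q)."""
    return np.all(alpha >= Q, axis=1).astype(int)

def np_classify(y, Q):
    """Return the attribute profile (0/1 vector of length K) whose ideal
    response pattern is closest in Hamming distance to the observed vector y."""
    K = Q.shape[1]
    best, best_dist = None, None
    for alpha in product([0, 1], repeat=K):  # all 2^K candidate profiles
        eta = ideal_response(np.array(alpha), Q)
        d = int(np.sum(eta != y))            # Hamming distance
        if best_dist is None or d < best_dist:
            best, best_dist = alpha, d
    return np.array(best)

# Illustration: 4 items, 2 attributes (hypothetical Q-matrix).
Q = np.array([[1, 0], [0, 1], [1, 1], [1, 0]])
y = np.array([1, 0, 0, 1])       # observed responses
print(np_classify(y, Q))         # → [1 0], whose ideal pattern matches y exactly
```

Because classification depends only on the fixed Q-matrix and the examinee's own responses, its consistency does not hinge on sample size — which is the point made in the abstract.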
Issue Date: 2016-03-14
Type: Thesis
URI: http://hdl.handle.net/2142/90484
Rights Information: Copyright 2016 Shiyu Wang
Date Available in IDEALS: 2016-07-07
Date Deposited: 2016-05

